Gain Function Approximation in the Feedback Particle Filter


Gain Function Approximation in the Feedback Particle Filter

Amirhossein Taghvaei, Prashant G. Mehta

Abstract: This paper is concerned with numerical algorithms for gain function approximation in the feedback particle filter. The exact gain function is the solution of a Poisson equation involving a probability-weighted Laplacian. The problem is to approximate this solution using only particles sampled from the probability distribution. Two algorithms are presented: a Galerkin algorithm and a graph-based algorithm. Both algorithms are adapted to the samples and do not require approximation of the probability distribution as an intermediate step. The paper contains error analysis for the algorithms as well as some comparative numerical results for a non-Gaussian distribution. The algorithms are also applied and illustrated for a simple nonlinear filtering example.

I. INTRODUCTION

This paper is concerned with algorithms for numerically approximating the solution of a certain linear partial differential equation (pde) that arises in the problem of nonlinear filtering. In continuous time, the filtering problem pertains to the following stochastic differential equations (sdes):

dX_t = a(X_t) dt + dB_t,   (1a)
dZ_t = h(X_t) dt + dW_t,   (1b)

where X_t ∈ R^d is the (hidden) state at time t, Z_t ∈ R is the observation, and {B_t}, {W_t} are two mutually independent standard Wiener processes taking values in R^d and R, respectively. The mappings a(·): R^d → R^d and h(·): R^d → R are C^1 functions. Unless noted otherwise, all probability distributions are assumed to be absolutely continuous with respect to the Lebesgue measure, and therefore will be identified with their densities. The choice of scalar-valued observation (Z_t ∈ R) is made for notational ease. The objective of the filtering problem is to estimate the posterior distribution of X_t given the time history of observations (filtration) Z_t = σ(Z_s : s ≤ t).
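The model (1) is straightforward to simulate. The following minimal Python sketch (our illustration, not part of the paper) uses an Euler-Maruyama discretization of the signal and observation sdes; the drift a, the observation function h, and all parameter values below are placeholder choices:

```python
import numpy as np

def simulate(a, h, x0, T=1.0, dt=1e-3, seed=0):
    """Euler-Maruyama discretization of the scalar model (1):
    dX_t = a(X_t) dt + dB_t,  dZ_t = h(X_t) dt + dW_t."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    X = np.empty(n + 1)
    Z = np.empty(n + 1)
    X[0], Z[0] = x0, 0.0
    for k in range(n):
        # Independent Wiener increments, each with variance dt
        X[k + 1] = X[k] + a(X[k]) * dt + np.sqrt(dt) * rng.standard_normal()
        Z[k + 1] = Z[k] + h(X[k]) * dt + np.sqrt(dt) * rng.standard_normal()
    return X, Z

# Example: zero drift with linear observation h(x) = x
X, Z = simulate(lambda x: 0.0, lambda x: x, x0=1.0)
```

The observation path Z generated this way is what a filtering algorithm would consume as its input.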
The density of the posterior distribution is denoted by p*, so that for any measurable set A ⊂ R^d, ∫_A p*(x,t) dx = P[X_t ∈ A | Z_t]. The filter is infinite-dimensional since it defines the evolution, in the space of probability measures, of {p*(·,t) : t ≥ 0}. If a(·), h(·) are linear functions, the solution is given by the finite-dimensional Kalman-Bucy filter. The article [3] surveys numerical methods to approximate the nonlinear filter. One approach described in this survey is particle filtering.

Financial support from the NSF CMMI grants is gratefully acknowledged. A. Taghvaei and P. G. Mehta are with the Coordinated Science Laboratory and the Department of Mechanical Science and Engineering at the University of Illinois at Urbana-Champaign (UIUC). taghvae@illinois.edu; mehtapg@illinois.edu

The particle filter is a simulation-based algorithm to approximate the filtering task [3]. The key step is the construction of N stochastic processes {X_t^i : 1 ≤ i ≤ N}: the value X_t^i ∈ R^d is the state of the i-th particle at time t. For each time t, the empirical distribution formed by the particle population is used to approximate the posterior distribution. Recall that this is defined for any measurable set A ⊂ R^d by

p^(N)(A,t) = (1/N) Σ_{i=1}^N 1[X_t^i ∈ A].

A common approach in particle filtering is called sequential importance sampling, where particles are generated according to their importance weight at every time step [1], [3]. In our earlier papers [17], [16], [14], an alternative feedback control-based approach to the construction of a particle filter was introduced. The resulting particle filter, referred to as the feedback particle filter (FPF), is a controlled system. The dynamics of the i-th particle have the following gain feedback form,

dX_t^i = a(X_t^i) dt + dB_t^i + K_t(X_t^i) ∘ (dZ_t − ((h(X_t^i) + ĥ_t)/2) dt),   (2)

where {B_t^i} are mutually independent standard Wiener processes and ĥ_t = E[h(X_t^i) | Z_t]. The initial condition X_0^i is drawn from the initial density p*(·,0) of X_0, independently of {B_t^i}.
Both {B_t^i} and {X_0^i} are also assumed to be independent of X_t, Z_t. The ∘ indicates that the sde is expressed in its Stratonovich form. The gain function K_t is obtained by solving a weighted Poisson equation: for each fixed time t, the function φ is the solution to the boundary value problem

BVP: ∇·(p(x,t) ∇φ(x,t)) = −(h(x) − ĥ) p(x,t),
∫ φ(x,t) p(x,t) dx = 0  (zero-mean),   (3)

for all x ∈ R^d, where ∇ and ∇· denote the gradient and the divergence operators, respectively, and p denotes the conditional density of X_t^i given Z_t. In terms of the solution φ, the gain function is given by

K_t(x) = ∇φ(x,t).   (4)

Note that the gain function K_t is vector-valued (with dimension d × 1) and it needs to be obtained for each fixed time t. For the linear Gaussian case, the gain function is the Kalman gain. For the general nonlinear non-Gaussian problem, the FPF (2) is exact, given an exact gain function and an exact initialization p(·,0) = p*(·,0). Consequently, if the initial

conditions {X_0^i}_{i=1}^N are drawn from the initial density p*(·,0) of X_0, then, as N → ∞, the empirical distribution of the particle system approximates the posterior density p*(·,t) for each t.

A numerical implementation of the FPF (2) requires a numerical approximation of the gain function K_t and the mean ĥ_t at each time-step. The mean is approximated empirically, ĥ_t ≈ (1/N) Σ_{i=1}^N h(X_t^i) =: ĥ_t^(N). The gain function approximation (the focus of this paper) is a challenging problem for two reasons: (i) apart from the linear Gaussian case, there are no known closed-form solutions of (3); (ii) the density p(x,t) is not explicitly known. At each time-step, one only has samples X^i, assumed to be i.i.d. samples from p. Apart from the FPF algorithm, solution of the Poisson equation is also central to a number of other algorithms for nonlinear filtering [5], [15].

In our prior work, we obtained results on existence, uniqueness and regularity of the solution to the Poisson equation, based on a certain weak formulation of the Poisson equation. The weak formulation led to a Galerkin numerical algorithm. The main limitation of the Galerkin approach is that the algorithm requires a pre-defined set of basis functions, which scales poorly with the dimension d of the state. The Galerkin algorithm can also exhibit numerical issues related to the Gibbs phenomenon. This can in turn lead to numerical instabilities in simulating the FPF.

The contributions of this paper are as follows: We present a new basis-free kernel-based algorithm for approximating the solution of the gain function. The key step is to construct a Markov matrix on a certain graph defined on the space of particles {X^i}_{i=1}^N. The value of the function φ at the particles, φ(X^i), is then approximated by simply solving a fixed-point problem involving the Markov matrix. The fixed-point problem is shown to be a contraction, and the method of successive approximation applies to numerically obtain the solution.
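The successive-approximation idea can be illustrated in isolation with a synthetic row-stochastic matrix standing in for the Markov matrix constructed later in Sec. IV; the matrix, the data, and the tolerance below are our illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
N, eps = 50, 0.1

# Synthetic row-stochastic (Markov) matrix with positive entries
T = rng.random((N, N))
T /= T.sum(axis=1, keepdims=True)

# Synthetic mean-zero data vector standing in for h(X^i)
h = rng.standard_normal(N)
h -= h.mean()

# Successive approximation for the fixed point  phi = T phi + eps * h,
# projecting back onto mean-zero vectors at each step
phi = np.zeros(N)
for _ in range(500):
    phi_new = T @ phi + eps * h
    phi_new -= phi_new.mean()
    if np.max(np.abs(phi_new - phi)) < 1e-12:
        break
    phi = phi_new

# Fixed-point residual after the iteration
fp = T @ phi + eps * h
fp -= fp.mean()
residual = np.max(np.abs(phi - fp))
```

Because T has positive entries, it contracts the subspace of mean-zero vectors, so the iteration converges geometrically; the same mechanism is exploited by the kernel-based algorithm of Sec. IV.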
We present results on error analysis for both the Galerkin and the kernel-based method. These results are illustrated with the aid of an example involving a multi-modal distribution. Finally, the two methods are compared for a filtering problem with a non-Gaussian distribution.

In the remainder of this paper, we express the linear operator in (3) as a weighted Laplacian, Δ_ρ φ := (1/ρ) ∇·(ρ ∇φ), where additional assumptions on the density ρ appear in the main body of the paper. In recent years, this operator and the associated Markov semigroup have received considerable attention, with several applications including spectral clustering, dimensionality reduction and supervised learning [4], [9]. For a mathematical treatment, see the monographs [7], [2]. Related specifically to control theory, there are important connections with stochastic stability of Markov operators [6], [12].

The outline of this paper is as follows: the mathematical preliminaries appear in Sec. II. The Galerkin and the graph-based algorithms for gain function approximation appear in Sec. III and Sec. IV, respectively. The nonlinear filtering example appears in Sec. V.

II. MATHEMATICAL PRELIMINARIES

The Poisson equation (3) is expressed as

BVP: Δ_ρ φ = −h,  ∫ φ ρ dx = 0  (zero-mean),   (5)

where ρ is a probability density on R^d, Δ_ρ φ := (1/ρ) ∇·(ρ ∇φ) and, without loss of generality, it is assumed that ĥ = ∫ h ρ dx = 0.

Problem statement: Approximate the solution φ(X^i) and ∇φ(X^i) given N independent samples X^i drawn from ρ. The density ρ is not explicitly known.

For the problem to be well-posed, one requires definitions of the function spaces and additional assumptions on ρ and h, enumerated next. Throughout this paper, µ is an absolutely continuous probability measure on R^d with associated density ρ. L²(R^d, µ) is the Hilbert space of square integrable functions on R^d equipped with the inner-product ⟨φ, ψ⟩ := ∫ φ(x) ψ(x) dµ(x). The associated norm is denoted ‖φ‖² := ⟨φ, φ⟩.
The space H (R d,µ) is the space of square integrable functions whose derivative (defined in the weak sense) is in L (R d,µ). We use L and H to denote L (R d,µ) and H (R d,µ), respectively. For the zero-mean solution of interest, we additionally define the co-dimension subspace L = {φ L ; φ dµ = } and H = {φ H ; φ dµ = }. L is used to denote the space of functions that are bounded a.e. (Lebesgue) and the sup-norm of a function φ L is denoted as φ. The following assumptions are made throughout the paper: (i) Assumption A: The probability density function ρ is of the form ρ() = e ( µ)t Σ ( µ) V () where µ R d, Σ is a positive-definite matri and V C with V,DV,D V L. (ii) Assumption A: The function h L and hdµ =. Under assumption A, the density ρ admits a spectral gap (or Poincaré inequality)[], i.e λ > such that, (5) φ ρ d λ φ ρ d, φ H. (6) Furthermore, the spectrum is known to be discrete with an ordered sequence of eigenvalues = λ < λ λ and associated eigenfunctions {e n } that form a complete orthonormal basis of L [Corollary 4..9 in []]. The trivial eigenvalue λ = with associated eigenfunction e =. On the subspace of zero-mean functions, the spectral decomposition yields: For φ L, ρ φ = λ m < e m,φ > e m. m= The spectral gap condition (6) implies that λ >. Consequently, the semigroup e t ρ φ = e tλ m < e m,φ > e m (7) m=

[Fig. 1. The exact solution to the Poisson equation using the formula (8). The density ρ is the sum of two Gaussians N(−1, σ²) and N(+1, σ²), and h(x) = x. The density is depicted as the shaded curve in the background.]

is a strict contraction on the subspace L²_0. It is also easy to see that µ is an invariant measure and ∫ e^{t Δ_ρ} φ(x) dµ(x) = ∫ φ(x) dµ(x) = 0 for all φ ∈ L²_0.

Example 1: If the density ρ is Gaussian with mean µ ∈ R^d and a positive-definite covariance matrix Σ, the spectral gap constant (1/λ₁) equals the largest eigenvalue of Σ. The eigenfunctions are the Hermite functions. Given a linear function h(x) = Hᵀx, the unique solution of the BVP (5) is given by

φ(x) = (ΣH)·(x − µ),

where · denotes the vector dot product in R^d. In this case, K = ∇φ = ΣH is the Kalman gain.

Example 2: In the scalar case (d = 1), the Poisson equation is

−(1/ρ(x)) d/dx ( ρ(x) dφ/dx (x) ) = h(x).

Integrating twice yields the solution explicitly:

dφ/dx (x) = −(1/ρ(x)) ∫_{−∞}^{x} ρ(z) h(z) dz,
φ(x) = −∫ (1/ρ(y)) ( ∫_{−∞}^{y} ρ(z) h(z) dz ) dy.   (8)

For the particular choice of ρ as the sum of two Gaussians N(−1, σ²) and N(+1, σ²) with σ = 0.4 and h(x) = x, the solution obtained using (8) is depicted in Fig. 1.

III. GALERKIN ALGORITHM FOR GAIN FUNCTION APPROXIMATION

Weak formulation: A function φ ∈ H¹_0 is said to be a weak solution of the Poisson equation (5) if

⟨∇φ, ∇ψ⟩ = ⟨h, ψ⟩,  ∀ ψ ∈ H¹.   (9)

It is shown in [11] that, under Assumptions A1-A2, there exists a unique weak solution of the Poisson equation. The Galerkin approximation involves solving (9) in a finite-dimensional subspace S ⊂ H¹(R^d; ρ). The solution φ is approximated as

φ^(M)(x) = Σ_{m=1}^{M} c_m ψ_m(x),   (10)

where {ψ_m(x)}_{m=1}^{M} is a given set of basis functions. The finite-dimensional approximation of (9) is to choose constants {c_m}_{m=1}^{M} such that

⟨∇φ^(M), ∇ψ⟩ = ⟨h, ψ⟩,  ∀ ψ ∈ S,   (11)

where S = span{ψ₁, ..., ψ_M} ⊂ H¹. Denoting [A]_{ml} = ⟨∇ψ_l, ∇ψ_m⟩, b_m = ⟨h, ψ_m⟩, and c = (c₁, c₂, ..., c_M)ᵀ, the finite-dimensional approximation (11) is expressed as a linear matrix equation: A c = b.
(12)

In a numerical implementation, the matrix A and vector b are approximated empirically as

[A]_{ml} = ⟨∇ψ_l, ∇ψ_m⟩ ≈ (1/N) Σ_{i=1}^{N} ∇ψ_l(X^i)·∇ψ_m(X^i) =: [A^(N)]_{ml},   (13)
b_m = ⟨h, ψ_m⟩ ≈ (1/N) Σ_{i=1}^{N} h(X^i) ψ_m(X^i) =: b_m^(N).   (14)

The resulting solution of the matrix equation (12), with A = A^(N) and b = b^(N), is denoted c^(N):

A^(N) c^(N) = b^(N).   (15)

Using (10), we obtain the particle-based approximation of the solution:

φ^(M,N)(x) = Σ_{m=1}^{M} c_m^(N) ψ_m(x).   (16)

In terms of this solution, the gain function is obtained as ∇φ^(M,N)(x) = Σ_{m=1}^{M} c_m^(N) ∇ψ_m(x).

Convergence analysis: The following is a summary of the approximations in the Galerkin algorithm:

Weak form: ⟨∇φ, ∇ψ⟩ = ⟨h, ψ⟩, ∀ ψ ∈ H¹
Galerkin approx.: ⟨∇φ^(M), ∇ψ⟩ = ⟨h, ψ⟩, ∀ ψ ∈ S ⊂ H¹
Empirical approx.: (1/N) Σ_{i=1}^{N} ∇φ^(M,N)(X^i)·∇ψ(X^i) = (1/N) Σ_{i=1}^{N} h(X^i) ψ(X^i), ∀ ψ ∈ S

We are interested in the error analysis of these approximations as a function of both M and N. Note that the approximation error φ^(M) − φ^(M,N) is random because the X^i are sampled randomly from the probability distribution µ. The following Proposition provides error bounds for the case where the basis functions are the eigenfunctions of the Laplacian. The proof appears in the Appendix.
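As a concrete sketch (our illustration; the paper's own implementation is not reproduced here), the empirical Galerkin steps (13)-(16) can be coded for the bimodal density of Example 2 with the polynomial basis {x, ..., x^M}; the exact gain from formula (8) is computed by grid quadrature for comparison:

```python
import numpy as np

# Bimodal density from Example 2: mixture of N(-1, sigma^2) and N(+1, sigma^2)
sigma = 0.4
rng = np.random.default_rng(0)

def rho(x):
    g = lambda m: np.exp(-(x - m)**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
    return 0.5 * (g(-1.0) + g(1.0))

h = lambda x: x                      # linear observation; h-hat = 0 by symmetry

# N i.i.d. samples from the mixture
N = 1000
X = rng.normal(rng.choice([-1.0, 1.0], size=N), sigma)

# Polynomial basis psi_m(x) = x^m, m = 1..M, and its derivatives at the particles
M = 5
Psi = np.stack([X**m for m in range(1, M + 1)])             # psi_m(X^i)
dPsi = np.stack([m * X**(m - 1) for m in range(1, M + 1)])  # psi_m'(X^i)

# Empirical Galerkin matrices (13)-(14) and the solution of (15)
A = dPsi @ dPsi.T / N
b = Psi @ h(X) / N
c = np.linalg.solve(A, b)

# Approximate gain K = dphi/dx at the particles, cf. (16)
K_galerkin = c @ dPsi

# Exact gain via formula (8), by cumulative quadrature on a grid
xs = np.linspace(-3, 3, 2001)
F = np.cumsum(rho(xs) * h(xs)) * (xs[1] - xs[0])   # integral of rho*h up to x
K_exact = -F / rho(xs)
```

Plotting K_galerkin against K_exact for increasing M reproduces the qualitative behavior discussed in Example 3 below: the approximation oscillates near the low-density region between the two modes.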

[Fig. 2. Comparison of the exact solution and its empirical Galerkin approximation with M = 1, 3, 5 modes and N particles. The density is depicted as the shaded curve in the background.]

Proposition 1: Consider the empirical Galerkin approximation of the Poisson equation (9) on the space S = span{e₁, e₂, ..., e_M} of the first M eigenfunctions. Fix M < ∞. Then the solution of the matrix equation (15) exists with probability approaching 1 as N → ∞, and there is a sequence of random variables {ε_N} such that

‖∇φ^(M,N) − ∇φ‖ ≤ (1/√λ_{M+1}) ‖h‖ + ε_N,   (17)

where ε_N → 0 a.s. as N → ∞.

Remark 1: In practice, the eigenfunctions of the Laplacian are not known. The basis functions are typically picked from a polynomial family, e.g., the Hermite functions. In this case, the bounds provide a qualitative assessment of the error, provided the eigenfunctions associated with the first M eigenvalues are approximately in S. Quantitatively, the additional error may be bounded in terms of the projection error between the eigenfunctions and their projections onto S.

Example 3: We next revisit the bimodal distribution first introduced in Example 2. Fig. 2 depicts the empirical Galerkin approximation of the gain function. The basis functions are from the polynomial family, {x, x², ..., x^M}, and the figure depicts the gain functions with M = 1, 3, 5 modes. The M = 1 case is referred to as the constant-gain approximation, where

dφ/dx (x) ≈ ∫ x h(x) dµ(x) = (const.)

For the linear Gaussian case, the (const.) is the Kalman gain. The plots included in the figure demonstrate the Gibbs phenomenon as more and more modes are included in the Galerkin approximation. Particularly concerning is the change in the sign of the gain, which can lead to numerical issues in simulating the filter. This is further discussed with the aid of a filtering example in Sec. V.

IV.
SEMIGROUP APPROXIMATION OF THE GAIN

Semigroup formulation: The semigroup formula (7) is used to construct the solution of the Poisson equation by solving the following fixed-point equation, for any fixed positive value of t:

φ = e^{t Δ_ρ} φ + ∫₀^t e^{s Δ_ρ} h ds.   (18)

A unique solution exists because e^{t Δ_ρ} is a contraction on L²_0. The approximation proposed in this section involves approximating the semigroup by a perturbed integral operator, for small positive values of t = ε. The following approximation of the semigroup appears in [4], [9]:

T^(ε) φ(x) := ( ∫ k^(ε)(x,y) φ(y) dµ(y) ) / ( ∫ k^(ε)(x,y) dµ(y) ),   (19)

where

k^(ε)(x,y) := g^(ε)(x,y) / ( √(∫ g^(ε)(x,z) dµ(z)) √(∫ g^(ε)(y,z) dµ(z)) )

and g^(ε)(x,y) := (4πε)^{−d/2} exp(−|x − y|²/(4ε)) is the Gaussian kernel in R^d. In terms of the perturbed integral operator, the fixed-point equation (18) becomes

φ^(ε) = T^(ε) φ^(ε) + ∫₀^ε T^(s) h ds.   (20)

The superscript ε is used to distinguish the approximate (ε-dependent) solution from the exact solution φ. As shown in Appendix B, T^(ε) has an ergodic invariant measure µ^(ε) which approximates µ as ε → 0. For any fixed positive ε, we are interested in solutions that are zero-mean with respect to this measure. The existence-uniqueness result for this solution is described next; the proof appears in the Appendix.

Proposition 2: Consider the fixed-point problem (20) with the perturbed operator T^(ε) defined according to (19). Fix ε > 0. Then there exists a unique solution φ^(ε) such that ∫ φ^(ε) dµ^(ε) = 0.

In a numerical implementation, the solution φ is approximated directly at the particles: Φ^(ε,N) = (φ^(ε)(X¹), φ^(ε)(X²), ..., φ^(ε)(X^N)). The integral operator T^(ε) is approximated as an N × N Markov matrix whose (i,j) entry is obtained empirically as

T^(ε,N)_{ij} = k^(ε,N)(X^i, X^j) / Σ_{l=1}^{N} k^(ε,N)(X^i, X^l),   (21)

where

k^(ε,N)(x,y) = g^(ε)(x,y) / ( √(Σ_{l=1}^{N} g^(ε)(x, X^l)) √(Σ_{l=1}^{N} g^(ε)(y, X^l)) ).

The resulting finite-dimensional fixed-point equation is given by

Φ^(ε,N) = T^(ε,N) Φ^(ε,N) + ∫₀^ε T^(s,N) H^(N) ds,   (22)

where Φ^(ε,N) = (Φ^(ε,N)_1, ..., Φ^(ε,N)_N) ∈ R^N is the vector-valued solution, H^(N) = (h(X¹), h(X²), ..., h(X^N)) ∈ R^N, and T^(ε,N) and T^(s,N) are matrices defined according to (21).
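A direct transcription of (21)-(22) might look as follows in Python, with the integral term approximated by εH as done in Algorithm 1 below; the iteration count, the uniform mean-subtraction used to enforce the zero-mean condition, and the 1/(2ε) normalization in the gain formula are our implementation choices for this sketch:

```python
import numpy as np

def kernel_gain(X, h_vals, eps, iters=1000):
    """Kernel-based gain approximation, following (21)-(22).
    X: (N, d) particles; h_vals: (N,) values h(X^i), assumed mean-subtracted.
    Returns (phi, K): solution values and gain at the particles."""
    N, d = X.shape
    # Pairwise squared distances and the Gaussian kernel g_ij
    sq = np.sum((X[:, None, :] - X[None, :, :])**2, axis=-1)
    g = np.exp(-sq / (4 * eps))
    # Symmetrized kernel k_ij and row-normalized Markov matrix T_ij, cf. (21)
    s = g.sum(axis=1)
    k = g / np.sqrt(np.outer(s, s))
    T = k / k.sum(axis=1, keepdims=True)
    # Successive approximation for  Phi = T Phi + eps * H, cf. (22),
    # re-centering to enforce the zero-mean condition (uniform weights here)
    phi = np.zeros(N)
    for _ in range(iters):
        phi = T @ phi + eps * h_vals
        phi -= phi.mean()
    # Gain at the particles:
    # K_l(X^i) = (1/(2 eps)) sum_j T_ij Phi_j (X^j_l - sum_k T_ik X^k_l)
    K = (T @ (phi[:, None] * X) - (T @ phi)[:, None] * (T @ X)) / (2 * eps)
    return phi, K

# Bimodal example: the computed gain should be positive (cf. Fig. 3)
rng = np.random.default_rng(0)
X = rng.normal(rng.choice([-1.0, 1.0], size=400), 0.4).reshape(-1, 1)
phi, K = kernel_gain(X, X[:, 0] - X[:, 0].mean(), eps=0.2)
```

Note that each step of the iteration only requires a matrix-vector product, and the construction is basis-free: no choice of ψ_m enters the computation.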
The existence-uniqueness result for the zero-mean

solution of the finite-dimensional equation (22) is described next; its proof is given in the Appendix. The zero-mean property is the finite-dimensional counterpart of the zero-mean condition ∫ φ dµ = 0 for the original problem and ∫ φ^(ε) dµ^(ε) = 0 for the perturbed problem.

Proposition 3: Consider the fixed-point problem (22) with the matrix T^(ε,N) defined according to (21). Then, with probability 1, there exists a unique zero-mean solution Φ^(ε,N) ∈ R^N.

Once Φ^(ε,N) is available, it is straightforward to extend it to the entire domain: for x ∈ R^d,

φ^(ε,N)(x) = ( Σ_{i=1}^{N} k^(ε,N)(x, X^i) Φ^(ε,N)_i ) / ( Σ_{i=1}^{N} k^(ε,N)(x, X^i) ) + ∫₀^ε [T^(s,N) h](x) ds.   (23)

By construction, φ^(ε,N)(X^i) = Φ^(ε,N)_i for i = 1, ..., N. The extension is not necessary for filtering, because one only needs to solve for ∇φ at the X^i. The formula for this is given by

[∇φ(X^i)]_l = (1/(2ε)) Σ_{j=1}^{N} T^(ε,N)_{ij} Φ^(ε,N)_j ( X^j_l − Σ_{k=1}^{N} T^(ε,N)_{ik} X^k_l ),

where X^i_l ∈ R is the l-th component of X^i ∈ R^d. The procedure is summarized in Algorithm 1. For numerical purposes we use ε H ≈ ∫₀^ε T^(s,N) H^(N) ds.

Algorithm 1: Kernel-based gain function approximation
Input: {X^i}_{i=1}^N, H = {h(X^i)}_{i=1}^N, ε
Output: Φ = {φ(X^i)}_{i=1}^N, {∇φ(X^i)}_{i=1}^N
1. Calculate g_ij = exp(−|X^i − X^j|²/(4ε)) for i, j = 1 to N.
2. Calculate k_ij = g_ij / ( √(Σ_l g_il) √(Σ_l g_jl) ) for i, j = 1 to N.
3. Calculate T_ij = k_ij / (Σ_l k_il) for i, j = 1 to N.
4. Solve Φ = T Φ + ε H for Φ (by successive approximation).
5. Calculate [∇φ(X^i)]_l = (1/(2ε)) Σ_j T_ij Φ_j (X^j_l − Σ_k T_ik X^k_l) for l = 1 to d.

Convergence: The following is a summary of the approximations with the kernel-based method:

Exact: φ = e^{ε Δ_ρ} φ + ∫₀^ε e^{s Δ_ρ} h ds
Kernel approx.: φ^(ε) = T^(ε) φ^(ε) + ∫₀^ε T^(s) h ds
Empirical approx.: φ^(ε,N) = T^(ε,N) φ^(ε,N) + ∫₀^ε T^(s,N) h ds

We break the convergence analysis into two steps. The first step involves convergence of φ^(ε,N) to φ^(ε) as N → ∞. The second step involves convergence of φ^(ε) to φ as ε → 0. The following theorem states the convergence result for the first step; a sketch of the proof appears in Appendix D.

Theorem 1: Consider the empirical kernel approximation of the fixed-point equation (18). Fix ε > 0. Then,
[Fig. 3. Comparison of the exact solution and its kernel-based approximation with ε = 0.1, 0.2, 0.4, 0.8 and N particles. The density is depicted as the shaded curve in the background.]

(i) There exists a unique (zero-mean) solution φ^(ε) of the perturbed fixed-point equation (20).

(ii) For any finite N, a unique (zero-mean) solution Φ^(ε,N) of (22) exists with probability 1. For a compact set Ω ⊂ R^d,

lim_{N→∞} sup_{x ∈ Ω} |φ^(ε,N)(x) − φ^(ε)(x)| = 0,  a.s.,   (24)

where φ^(ε,N) is the extension of the vector-valued solution Φ^(ε,N) (see (23)).

The convergence analysis for step 2, as ε → 0, is the subject of future work. In this regard, it is shown in [8] that, for all functions f ∈ C³(Ω) where Ω ⊂ R^d is compact,

(1/ε) ( T^(ε) f(x) − f(x) ) = Δ_ρ f(x) + O(ε),  ∀ x ∈ Ω.

Example 4: Consider once more the bimodal distribution introduced in Example 2. Figure 3 depicts the kernel-based approximation of the gain function with N particles and a range of ε values. The kernel-based method avoids the Gibbs phenomenon observed with the Galerkin approximation (see Fig. 2). Note that the gain function is positive for any choice of ε.

V. NUMERICS

In this section, we consider a filtering problem associated with the bimodal distribution introduced in Example 2. The filtering model is given by

dX_t = 0,   (25)
dZ_t = X_t dt + σ_w dW_t,   (26)

where X_t ∈ R, Z_t ∈ R, {W_t} is a standard Wiener process, and the initial condition X_0 is sampled from the bimodal distribution comprising two Gaussians, N(−1, σ²) and N(+1, σ²). As in Example 2, the observation function h(x) = x is linear. The

static case is considered because the posterior is given by the following explicit formula:

p*(x,t) = (const.) exp( h(x)(Z_t − Z_0) − (1/2) h²(x) t ) p*(x,0).   (27)

The following filtering algorithms are implemented for comparison:
1) Kalman filter (equivalently, Galerkin with basis function {x});
2) feedback particle filter with the Galerkin approximation, where S = span{x, x², x³, x⁴, x⁵};
3) feedback particle filter with the kernel approximation.

The performance metrics are as follows:
1) the filter mean ˆX_t;
2) the conditional probability P[X_t > 0.5 | Z_t].

The simulation parameters are as follows: the measurement noise parameter is σ_w = 0.3; the simulation is carried out over a finite time-horizon t ∈ [0, T] with T = 0.8 and a fixed discretized time-step Δt. All the particle filters use N particles and have the same initialization, where particles are drawn with equal probability from one of the two Gaussians, N(−1, σ²) or N(+1, σ²). For the Markov approximation, we use ε = 0.5. The true initial state X_0 and the remaining parameter values are tabulated in Table I.

Figure 4 (a)-(b) depicts the particle trajectories and the associated distributions obtained using the Markov (kernel) approximation and the Galerkin approximation. Figure 4 (c)-(d) depicts a comparison of the simulation results for the two metrics. These metrics are computed empirically; e.g., the filter mean is the empirical mean of the population, (1/N) Σ_{i=1}^N X_t^i.

[TABLE I: Simulation parameters (µ, σ, σ_w, ε, Δt, N, T)]

[10] Vivian Hutson, John S. Pym, and Michael J. Cloud. Applications of Functional Analysis and Operator Theory. Elsevier, 2005.
[11] R. S. Laugesen, P. G. Mehta, S. P. Meyn, and M. Raginsky. Poisson's equation in nonlinear filtering. SIAM Journal on Control and Optimization, 53(1):501-525, 2015.
[12] Sean P. Meyn and Richard L. Tweedie. Markov Chains and Stochastic Stability. Springer Science & Business Media.
[13] Adrian Smith, Arnaud Doucet, Nando de Freitas, and Neil Gordon. Sequential Monte Carlo Methods in Practice. Springer Science & Business Media, 2013.
[14] T. Yang, P. G. Mehta, and S. P. Meyn.
Feedback particle filter. IEEE Transactions on Automatic Control, 58(10):2465-2480, October 2013.
[15] Tao Yang, Henk A. P. Blom, and Prashant G. Mehta. The continuous-discrete time feedback particle filter. In American Control Conference (ACC). IEEE, 2014.
[16] Tao Yang, Prashant G. Mehta, and Sean P. Meyn. Feedback particle filter with mean-field coupling. In 50th IEEE Conference on Decision and Control and European Control Conference (CDC-ECC). IEEE, 2011.
[17] Tao Yang, Prashant G. Mehta, and Sean P. Meyn. A mean-field control-oriented approach to particle filtering. In American Control Conference (ACC). IEEE, 2011.

APPENDIX

A. Proof of Proposition 1

Using the spectral representation, because h ∈ L²_0,

φ = (−Δ_ρ)⁻¹ h = Σ_{m=1}^∞ (1/λ_m) ⟨e_m, h⟩ e_m.

With the basis functions as eigenfunctions,

φ^(M) = Σ_{m=1}^M (1/λ_m) ⟨e_m, h⟩ e_m.

Therefore,

‖∇φ − ∇φ^(M)‖² = Σ_{m=M+1}^∞ (1/λ_m) ⟨e_m, h⟩² ≤ (1/λ_{M+1}) ‖h‖².

REFERENCES

[1] Alan Bain and Dan Crisan. Fundamentals of Stochastic Filtering. Springer, 2009.
[2] Dominique Bakry, Ivan Gentil, and Michel Ledoux. Analysis and Geometry of Markov Diffusion Operators, volume 348. Springer Science & Business Media, 2013.
[3] Amarjit Budhiraja, Lingji Chen, and Chihoon Lee. A survey of numerical methods for nonlinear filtering problems. Physica D: Nonlinear Phenomena, 230(1):27-36, 2007.
[4] Ronald R. Coifman and Stéphane Lafon. Diffusion maps. Applied and Computational Harmonic Analysis, 21(1):5-30, 2006.
[5] F. Daum, J. Huang, and A. Noushin. Exact particle flow for nonlinear filters. In SPIE Defense, Security, and Sensing, 2010.
[6] Peter W. Glynn and Sean P. Meyn. A Liapounov bound for solutions of the Poisson equation. The Annals of Probability, 24(2):916-931, 1996.
[7] Alexander Grigor'yan. Heat kernels on weighted manifolds and applications. Cont. Math., 398:93-191, 2006.
[8] Matthias Hein, Jean-Yves Audibert, and Ulrike von Luxburg. From graphs to manifolds: weak and strong pointwise consistency of graph Laplacians. In Learning Theory. Springer, 2005.
[9] Matthias Hein, Jean-Yves Audibert, and Ulrike von Luxburg.
Graph Laplacians and their convergence on random neighborhood graphs. arXiv preprint math/685, 2006.

Next, by the triangle inequality,

‖∇φ − ∇φ^(M,N)‖ ≤ ‖∇φ − ∇φ^(M)‖ + ‖∇φ^(M) − ∇φ^(M,N)‖.

We show ‖∇φ^(M) − ∇φ^(M,N)‖ → 0 a.s. as N → ∞. Using the formulae (10) and (16) with the basis functions ψ_m as eigenfunctions e_m,

‖∇φ^(M) − ∇φ^(M,N)‖² = (c − c^(N))ᵀ A (c − c^(N)),

where A = diag(λ₁, ..., λ_M). The vectors c and c^(N) solve the matrix equations (see (12) and (15))

A c = b,  A^(N) c^(N) = b^(N),

where A^(N) → A and b^(N) → b a.s. by the strong law of large numbers. Consequently, because A = diag(λ₁, ..., λ_M) is invertible, c^(N) → c a.s. as N → ∞.

[Fig. 4. Summary of simulation results: (a) posterior distribution and trajectory of particles, kernel-based approach; (b) posterior distribution and trajectory of particles, Galerkin approach; (c) conditional mean of the state X_t, compared with the Kalman filter; (d) conditional probability P(X_t > 0.5 | Z_t).]

B. Proof of Proposition 2

Denote n^(ε)(x) := ∫ k^(ε)(x,y) dµ(y), and re-write the operator T^(ε) as

T^(ε) f(x) = ∫ ( k^(ε)(x,y) / (n^(ε)(x) n^(ε)(y)) ) f(y) dµ^(ε)(y),

where dµ^(ε)(y) := n^(ε)(y) dµ(y). Denote L²(µ^(ε)) as the space of square integrable functions with respect to µ^(ε) and, as before, L²_0(µ^(ε)) = {φ ∈ L²(µ^(ε)) : ∫ φ dµ^(ε) = 0} is the codimension-1 subspace of mean-zero functions in L²(µ^(ε)). The technical part of proving the Proposition is to show that the operator T^(ε) is a strict contraction on this subspace.

Lemma 1: Suppose ρ, the density of the probability measure µ, satisfies Assumption A1. Then,
(i) µ^(ε) is a finite measure.
(ii) For sufficiently small values of ε, the operator T^(ε): L²(µ^(ε)) → L²(µ^(ε)) is a compact Markov operator with invariant measure µ^(ε).
(iii) T^(ε): L²_0(µ^(ε)) → L²_0(µ^(ε)) is a strict contraction.

Proof: (i) WLOG assume µ = 0 in Assumption A1. For notational ease, denote ρ^(ε)(x) := ∫ g^(ε)(x,y) ρ(y) dy, where recall that ρ^(ε)(x) is used to define the denominator

of the kernel. Then

c₁ exp(−xᵀQx) ≤ ρ^(ε)(x) ≤ c₂ exp(−xᵀQx),

where Q = (Σ + εI)⁻¹ and c₁, c₂ are positive constants that depend on ‖V‖_∞. Therefore,

µ^(ε)(R^d) = ∫∫ k^(ε)(x,y) dµ(x) dµ(y) = ∫∫ ( g^(ε)(x,y) / √(ρ^(ε)(x) ρ^(ε)(y)) ) ρ(x) ρ(y) dx dy ≤ c ∫∫ g^(ε)(x,y) e^{−xᵀQ̃x} e^{−yᵀQ̃y} dx dy,

which is bounded because Q̃ := Σ⁻¹ − Q/2 is positive-definite. This proves that µ^(ε) is a finite measure.

(ii) The integral operator T^(ε) is a Markov operator because the kernel k^(ε)(x,y) > 0 and

T^(ε) 1(x) = ∫ ( k^(ε)(x,y) / (n^(ε)(x) n^(ε)(y)) ) dµ^(ε)(y) = (1/n^(ε)(x)) ∫ k^(ε)(x,y) dµ(y) = 1.

T^(ε) is compact because [Theorem 7..7 in [10]]

∫∫ ( k^(ε)(x,y) / (n^(ε)(x) n^(ε)(y)) )² dµ^(ε)(x) dµ^(ε)(y) < ∞.

The inequality holds because the integrand can be bounded by a Gaussian. Finally, the measure µ^(ε) is an invariant measure because, for all functions f,

∫ T^(ε) f(x) dµ^(ε)(x) = ∫ f(y) dµ^(ε)(y).

(iii) Since µ^(ε) is an invariant measure, T^(ε): L²_0(µ^(ε)) → L²_0(µ^(ε)). Next, for all f, g ∈ L²_0(µ^(ε)),

2 ∫∫ ( k^(ε)(x,y) / (n^(ε)(x) n^(ε)(y)) ) f(x) g(y) dµ^(ε)(x) dµ^(ε)(y) = ∫ f² dµ^(ε) + ∫ g² dµ^(ε) − ∫∫ ( k^(ε)(x,y) / (n^(ε)(x) n^(ε)(y)) ) ( f(x) − g(y) )² dµ^(ε)(x) dµ^(ε)(y),

where, because the kernel is everywhere positive, the last term vanishes iff f(x) = g(y) = (const.) µ-a.e.; for f ∈ L²_0(µ^(ε)), ∫ f dµ^(ε) = 0, and thus this constant must be 0. Substituting g = T^(ε) f and using the fact that T^(ε) is self-adjoint on L²(µ^(ε)) (its kernel is symmetric), the left-hand side equals 2‖T^(ε) f‖², so that

2 ‖T^(ε) f‖² ≤ ‖f‖² + ‖T^(ε) f‖²,  i.e.,  ‖T^(ε) f‖ ≤ ‖f‖,

with equality iff f = 0; here the L² norms are with respect to the invariant measure µ^(ε). Together with compactness, this shows that T^(ε) is strictly contractive on L²_0(µ^(ε)). The proof of the Proposition now follows because T^(ε) is a contraction on L²_0(µ^(ε)).

C. Proof of Proposition 3

By construction, T^(ε,N) is a stochastic matrix whose entries are all positive with probability 1. The result follows.

D. Proof sketch for Theorem 1

Parts (i) and (ii) have already been proved as part of Prop. 2 and Prop. 3, respectively. The convergence result (24) leans on approximation theory for integral operators. It is shown in [10] that if (i) T^(ε) is compact and {T^(ε,N)} is collectively compact; (ii) ‖T^(ε)‖ < 1; (iii) T^(ε,N) converges to T^(ε) pointwise, i.e.,

lim_{N→∞} ‖T^(ε,N) f − T^(ε) f‖ = 0  a.s.,  ∀ f,

then for N large enough, (I − T^(ε,N))⁻¹ is bounded and

lim_{N→∞} ‖(I − T^(ε,N))⁻¹ h − (I − T^(ε))⁻¹ h‖ = 0,  a.s.,  ∀ h.
To show the convergence result, we consider the Banach space C_0(Ω) = {f ∈ C(Ω) : ∫ f dµ = 0}, equipped with the norm ‖·‖_∞. We consider T^(ε) and T^(ε,N) as linear operators on C_0(Ω). Note that T^(ε,N) here corresponds to the extension of the solution to R^d using (23). The proof involves verification of each of the requirements stated above:

(i) Collective compactness follows because the kernel k^(ε) is continuous [Theorem 7..6 in [10]].

(ii) The norm condition holds because

|T^(ε) f(x)| ≤ ( ∫ k^(ε)(x,y) |f(y)| dµ(y) ) / ( ∫ k^(ε)(x,y) dµ(y) ) ≤ ‖f‖_∞ ( ∫ k^(ε)(x,y) dµ(y) ) / ( ∫ k^(ε)(x,y) dµ(y) ) = ‖f‖_∞,

where the equality holds only when f is constant; this constant must be zero on C_0(Ω), so that ‖T^(ε)‖ < 1.

(iii) Pointwise convergence follows from the LLN: for a fixed continuous function f, the LLN implies convergence of T^(ε,N) f(x) to T^(ε) f(x) for every fixed point x ∈ R^d. On the compact set Ω, pointwise convergence also implies uniform convergence.


16 1 Basic Facts from Functional Analysis and Banach Lattices 16 1 Basic Facts from Functional Analysis and Banach Lattices 1.2.3 Banach Steinhaus Theorem Another fundamental theorem of functional analysis is the Banach Steinhaus theorem, or the Uniform Boundedness

More information

March 13, Paper: R.R. Coifman, S. Lafon, Diffusion maps ([Coifman06]) Seminar: Learning with Graphs, Prof. Hein, Saarland University

March 13, Paper: R.R. Coifman, S. Lafon, Diffusion maps ([Coifman06]) Seminar: Learning with Graphs, Prof. Hein, Saarland University Kernels March 13, 2008 Paper: R.R. Coifman, S. Lafon, maps ([Coifman06]) Seminar: Learning with Graphs, Prof. Hein, Saarland University Kernels Figure: Example Application from [LafonWWW] meaningful geometric

More information

Kernels A Machine Learning Overview

Kernels A Machine Learning Overview Kernels A Machine Learning Overview S.V.N. Vishy Vishwanathan vishy@axiom.anu.edu.au National ICT of Australia and Australian National University Thanks to Alex Smola, Stéphane Canu, Mike Jordan and Peter

More information

On compact operators

On compact operators On compact operators Alen Aleanderian * Abstract In this basic note, we consider some basic properties of compact operators. We also consider the spectrum of compact operators on Hilbert spaces. A basic

More information

Sequential Monte Carlo Methods for Bayesian Computation

Sequential Monte Carlo Methods for Bayesian Computation Sequential Monte Carlo Methods for Bayesian Computation A. Doucet Kyoto Sept. 2012 A. Doucet (MLSS Sept. 2012) Sept. 2012 1 / 136 Motivating Example 1: Generic Bayesian Model Let X be a vector parameter

More information

Nonlinear error dynamics for cycled data assimilation methods

Nonlinear error dynamics for cycled data assimilation methods Nonlinear error dynamics for cycled data assimilation methods A J F Moodey 1, A S Lawless 1,2, P J van Leeuwen 2, R W E Potthast 1,3 1 Department of Mathematics and Statistics, University of Reading, UK.

More information

Problem Set 6: Solutions Math 201A: Fall a n x n,

Problem Set 6: Solutions Math 201A: Fall a n x n, Problem Set 6: Solutions Math 201A: Fall 2016 Problem 1. Is (x n ) n=0 a Schauder basis of C([0, 1])? No. If f(x) = a n x n, n=0 where the series converges uniformly on [0, 1], then f has a power series

More information

Piecewise Smooth Solutions to the Burgers-Hilbert Equation

Piecewise Smooth Solutions to the Burgers-Hilbert Equation Piecewise Smooth Solutions to the Burgers-Hilbert Equation Alberto Bressan and Tianyou Zhang Department of Mathematics, Penn State University, University Park, Pa 68, USA e-mails: bressan@mathpsuedu, zhang

More information

The goal of this chapter is to study linear systems of ordinary differential equations: dt,..., dx ) T

The goal of this chapter is to study linear systems of ordinary differential equations: dt,..., dx ) T 1 1 Linear Systems The goal of this chapter is to study linear systems of ordinary differential equations: ẋ = Ax, x(0) = x 0, (1) where x R n, A is an n n matrix and ẋ = dx ( dt = dx1 dt,..., dx ) T n.

More information

ON THE POLICY IMPROVEMENT ALGORITHM IN CONTINUOUS TIME

ON THE POLICY IMPROVEMENT ALGORITHM IN CONTINUOUS TIME ON THE POLICY IMPROVEMENT ALGORITHM IN CONTINUOUS TIME SAUL D. JACKA AND ALEKSANDAR MIJATOVIĆ Abstract. We develop a general approach to the Policy Improvement Algorithm (PIA) for stochastic control problems

More information

Strong uniqueness for stochastic evolution equations with possibly unbounded measurable drift term

Strong uniqueness for stochastic evolution equations with possibly unbounded measurable drift term 1 Strong uniqueness for stochastic evolution equations with possibly unbounded measurable drift term Enrico Priola Torino (Italy) Joint work with G. Da Prato, F. Flandoli and M. Röckner Stochastic Processes

More information

Brownian Motion. 1 Definition Brownian Motion Wiener measure... 3

Brownian Motion. 1 Definition Brownian Motion Wiener measure... 3 Brownian Motion Contents 1 Definition 2 1.1 Brownian Motion................................. 2 1.2 Wiener measure.................................. 3 2 Construction 4 2.1 Gaussian process.................................

More information

1 Math 241A-B Homework Problem List for F2015 and W2016

1 Math 241A-B Homework Problem List for F2015 and W2016 1 Math 241A-B Homework Problem List for F2015 W2016 1.1 Homework 1. Due Wednesday, October 7, 2015 Notation 1.1 Let U be any set, g be a positive function on U, Y be a normed space. For any f : U Y let

More information

Finite-dimensional spaces. C n is the space of n-tuples x = (x 1,..., x n ) of complex numbers. It is a Hilbert space with the inner product

Finite-dimensional spaces. C n is the space of n-tuples x = (x 1,..., x n ) of complex numbers. It is a Hilbert space with the inner product Chapter 4 Hilbert Spaces 4.1 Inner Product Spaces Inner Product Space. A complex vector space E is called an inner product space (or a pre-hilbert space, or a unitary space) if there is a mapping (, )

More information

Uniformly Uniformly-ergodic Markov chains and BSDEs

Uniformly Uniformly-ergodic Markov chains and BSDEs Uniformly Uniformly-ergodic Markov chains and BSDEs Samuel N. Cohen Mathematical Institute, University of Oxford (Based on joint work with Ying Hu, Robert Elliott, Lukas Szpruch) Centre Henri Lebesgue,

More information

Wiener Measure and Brownian Motion

Wiener Measure and Brownian Motion Chapter 16 Wiener Measure and Brownian Motion Diffusion of particles is a product of their apparently random motion. The density u(t, x) of diffusing particles satisfies the diffusion equation (16.1) u

More information

c 2014 Shane T. Ghiotto

c 2014 Shane T. Ghiotto c 2014 Shane T. Ghiotto COMPARISON OF NONLINEAR FILTERING TECHNIQUES BY SHANE T. GHIOTTO THESIS Submitted in partial fulfillment of the requirements for the degree of Master of Science in Mechanical Engineering

More information

Weakly interacting particle systems on graphs: from dense to sparse

Weakly interacting particle systems on graphs: from dense to sparse Weakly interacting particle systems on graphs: from dense to sparse Ruoyu Wu University of Michigan (Based on joint works with Daniel Lacker and Kavita Ramanan) USC Math Finance Colloquium October 29,

More information

Functional Analysis I

Functional Analysis I Functional Analysis I Course Notes by Stefan Richter Transcribed and Annotated by Gregory Zitelli Polar Decomposition Definition. An operator W B(H) is called a partial isometry if W x = X for all x (ker

More information

Some SDEs with distributional drift Part I : General calculus. Flandoli, Franco; Russo, Francesco; Wolf, Jochen

Some SDEs with distributional drift Part I : General calculus. Flandoli, Franco; Russo, Francesco; Wolf, Jochen Title Author(s) Some SDEs with distributional drift Part I : General calculus Flandoli, Franco; Russo, Francesco; Wolf, Jochen Citation Osaka Journal of Mathematics. 4() P.493-P.54 Issue Date 3-6 Text

More information

EECS 598: Statistical Learning Theory, Winter 2014 Topic 11. Kernels

EECS 598: Statistical Learning Theory, Winter 2014 Topic 11. Kernels EECS 598: Statistical Learning Theory, Winter 2014 Topic 11 Kernels Lecturer: Clayton Scott Scribe: Jun Guo, Soumik Chatterjee Disclaimer: These notes have not been subjected to the usual scrutiny reserved

More information

Math The Laplacian. 1 Green s Identities, Fundamental Solution

Math The Laplacian. 1 Green s Identities, Fundamental Solution Math. 209 The Laplacian Green s Identities, Fundamental Solution Let be a bounded open set in R n, n 2, with smooth boundary. The fact that the boundary is smooth means that at each point x the external

More information

SPECTRAL THEOREM FOR SYMMETRIC OPERATORS WITH COMPACT RESOLVENT

SPECTRAL THEOREM FOR SYMMETRIC OPERATORS WITH COMPACT RESOLVENT SPECTRAL THEOREM FOR SYMMETRIC OPERATORS WITH COMPACT RESOLVENT Abstract. These are the letcure notes prepared for the workshop on Functional Analysis and Operator Algebras to be held at NIT-Karnataka,

More information

EVALUATING SYMMETRIC INFORMATION GAP BETWEEN DYNAMICAL SYSTEMS USING PARTICLE FILTER

EVALUATING SYMMETRIC INFORMATION GAP BETWEEN DYNAMICAL SYSTEMS USING PARTICLE FILTER EVALUATING SYMMETRIC INFORMATION GAP BETWEEN DYNAMICAL SYSTEMS USING PARTICLE FILTER Zhen Zhen 1, Jun Young Lee 2, and Abdus Saboor 3 1 Mingde College, Guizhou University, China zhenz2000@21cn.com 2 Department

More information

Tools from Lebesgue integration

Tools from Lebesgue integration Tools from Lebesgue integration E.P. van den Ban Fall 2005 Introduction In these notes we describe some of the basic tools from the theory of Lebesgue integration. Definitions and results will be given

More information

Geometry on Probability Spaces

Geometry on Probability Spaces Geometry on Probability Spaces Steve Smale Toyota Technological Institute at Chicago 427 East 60th Street, Chicago, IL 60637, USA E-mail: smale@math.berkeley.edu Ding-Xuan Zhou Department of Mathematics,

More information

Lecture 12: Detailed balance and Eigenfunction methods

Lecture 12: Detailed balance and Eigenfunction methods Lecture 12: Detailed balance and Eigenfunction methods Readings Recommended: Pavliotis [2014] 4.5-4.7 (eigenfunction methods and reversibility), 4.2-4.4 (explicit examples of eigenfunction methods) Gardiner

More information

Kolmogorov equations in Hilbert spaces IV

Kolmogorov equations in Hilbert spaces IV March 26, 2010 Other types of equations Let us consider the Burgers equation in = L 2 (0, 1) dx(t) = (AX(t) + b(x(t))dt + dw (t) X(0) = x, (19) where A = ξ 2, D(A) = 2 (0, 1) 0 1 (0, 1), b(x) = ξ 2 (x

More information

MET Workshop: Exercises

MET Workshop: Exercises MET Workshop: Exercises Alex Blumenthal and Anthony Quas May 7, 206 Notation. R d is endowed with the standard inner product (, ) and Euclidean norm. M d d (R) denotes the space of n n real matrices. When

More information

Statistical Geometry Processing Winter Semester 2011/2012

Statistical Geometry Processing Winter Semester 2011/2012 Statistical Geometry Processing Winter Semester 2011/2012 Linear Algebra, Function Spaces & Inverse Problems Vector and Function Spaces 3 Vectors vectors are arrows in space classically: 2 or 3 dim. Euclidian

More information

Large Deviations Principles for McKean-Vlasov SDEs, Skeletons, Supports and the law of iterated logarithm

Large Deviations Principles for McKean-Vlasov SDEs, Skeletons, Supports and the law of iterated logarithm Large Deviations Principles for McKean-Vlasov SDEs, Skeletons, Supports and the law of iterated logarithm Gonçalo dos Reis University of Edinburgh (UK) & CMA/FCT/UNL (PT) jointly with: W. Salkeld, U. of

More information

Examples include: (a) the Lorenz system for climate and weather modeling (b) the Hodgkin-Huxley system for neuron modeling

Examples include: (a) the Lorenz system for climate and weather modeling (b) the Hodgkin-Huxley system for neuron modeling 1 Introduction Many natural processes can be viewed as dynamical systems, where the system is represented by a set of state variables and its evolution governed by a set of differential equations. Examples

More information

Backward Stochastic Differential Equations with Infinite Time Horizon

Backward Stochastic Differential Equations with Infinite Time Horizon Backward Stochastic Differential Equations with Infinite Time Horizon Holger Metzler PhD advisor: Prof. G. Tessitore Università di Milano-Bicocca Spring School Stochastic Control in Finance Roscoff, March

More information

Stability of Feedback Solutions for Infinite Horizon Noncooperative Differential Games

Stability of Feedback Solutions for Infinite Horizon Noncooperative Differential Games Stability of Feedback Solutions for Infinite Horizon Noncooperative Differential Games Alberto Bressan ) and Khai T. Nguyen ) *) Department of Mathematics, Penn State University **) Department of Mathematics,

More information

Discretization of SDEs: Euler Methods and Beyond

Discretization of SDEs: Euler Methods and Beyond Discretization of SDEs: Euler Methods and Beyond 09-26-2006 / PRisMa 2006 Workshop Outline Introduction 1 Introduction Motivation Stochastic Differential Equations 2 The Time Discretization of SDEs Monte-Carlo

More information

Spectral theory for compact operators on Banach spaces

Spectral theory for compact operators on Banach spaces 68 Chapter 9 Spectral theory for compact operators on Banach spaces Recall that a subset S of a metric space X is precompact if its closure is compact, or equivalently every sequence contains a Cauchy

More information

Lecture 6: Bayesian Inference in SDE Models

Lecture 6: Bayesian Inference in SDE Models Lecture 6: Bayesian Inference in SDE Models Bayesian Filtering and Smoothing Point of View Simo Särkkä Aalto University Simo Särkkä (Aalto) Lecture 6: Bayesian Inference in SDEs 1 / 45 Contents 1 SDEs

More information

NUMERICAL COMPUTATION OF THE CAPACITY OF CONTINUOUS MEMORYLESS CHANNELS

NUMERICAL COMPUTATION OF THE CAPACITY OF CONTINUOUS MEMORYLESS CHANNELS NUMERICAL COMPUTATION OF THE CAPACITY OF CONTINUOUS MEMORYLESS CHANNELS Justin Dauwels Dept. of Information Technology and Electrical Engineering ETH, CH-8092 Zürich, Switzerland dauwels@isi.ee.ethz.ch

More information

Mathematical Analysis Outline. William G. Faris

Mathematical Analysis Outline. William G. Faris Mathematical Analysis Outline William G. Faris January 8, 2007 2 Chapter 1 Metric spaces and continuous maps 1.1 Metric spaces A metric space is a set X together with a real distance function (x, x ) d(x,

More information

Continuous Functions on Metric Spaces

Continuous Functions on Metric Spaces Continuous Functions on Metric Spaces Math 201A, Fall 2016 1 Continuous functions Definition 1. Let (X, d X ) and (Y, d Y ) be metric spaces. A function f : X Y is continuous at a X if for every ɛ > 0

More information

S. Lototsky and B.L. Rozovskii Center for Applied Mathematical Sciences University of Southern California, Los Angeles, CA

S. Lototsky and B.L. Rozovskii Center for Applied Mathematical Sciences University of Southern California, Los Angeles, CA RECURSIVE MULTIPLE WIENER INTEGRAL EXPANSION FOR NONLINEAR FILTERING OF DIFFUSION PROCESSES Published in: J. A. Goldstein, N. E. Gretsky, and J. J. Uhl (editors), Stochastic Processes and Functional Analysis,

More information

2) Let X be a compact space. Prove that the space C(X) of continuous real-valued functions is a complete metric space.

2) Let X be a compact space. Prove that the space C(X) of continuous real-valued functions is a complete metric space. University of Bergen General Functional Analysis Problems with solutions 6 ) Prove that is unique in any normed space. Solution of ) Let us suppose that there are 2 zeros and 2. Then = + 2 = 2 + = 2. 2)

More information

On the bang-bang property of time optimal controls for infinite dimensional linear systems

On the bang-bang property of time optimal controls for infinite dimensional linear systems On the bang-bang property of time optimal controls for infinite dimensional linear systems Marius Tucsnak Université de Lorraine Paris, 6 janvier 2012 Notation and problem statement (I) Notation: X (the

More information

The Central Limit Theorem: More of the Story

The Central Limit Theorem: More of the Story The Central Limit Theorem: More of the Story Steven Janke November 2015 Steven Janke (Seminar) The Central Limit Theorem:More of the Story November 2015 1 / 33 Central Limit Theorem Theorem (Central Limit

More information

Cores for generators of some Markov semigroups

Cores for generators of some Markov semigroups Cores for generators of some Markov semigroups Giuseppe Da Prato, Scuola Normale Superiore di Pisa, Italy and Michael Röckner Faculty of Mathematics, University of Bielefeld, Germany and Department of

More information

Statistics 612: L p spaces, metrics on spaces of probabilites, and connections to estimation

Statistics 612: L p spaces, metrics on spaces of probabilites, and connections to estimation Statistics 62: L p spaces, metrics on spaces of probabilites, and connections to estimation Moulinath Banerjee December 6, 2006 L p spaces and Hilbert spaces We first formally define L p spaces. Consider

More information

These slides follow closely the (English) course textbook Pattern Recognition and Machine Learning by Christopher Bishop

These slides follow closely the (English) course textbook Pattern Recognition and Machine Learning by Christopher Bishop Music and Machine Learning (IFT68 Winter 8) Prof. Douglas Eck, Université de Montréal These slides follow closely the (English) course textbook Pattern Recognition and Machine Learning by Christopher Bishop

More information

arxiv: v2 [math.pr] 27 Oct 2015

arxiv: v2 [math.pr] 27 Oct 2015 A brief note on the Karhunen-Loève expansion Alen Alexanderian arxiv:1509.07526v2 [math.pr] 27 Oct 2015 October 28, 2015 Abstract We provide a detailed derivation of the Karhunen Loève expansion of a stochastic

More information

Properties of the Scattering Transform on the Real Line

Properties of the Scattering Transform on the Real Line Journal of Mathematical Analysis and Applications 58, 3 43 (001 doi:10.1006/jmaa.000.7375, available online at http://www.idealibrary.com on Properties of the Scattering Transform on the Real Line Michael

More information

BLOCH FUNCTIONS ON THE UNIT BALL OF AN INFINITE DIMENSIONAL HILBERT SPACE

BLOCH FUNCTIONS ON THE UNIT BALL OF AN INFINITE DIMENSIONAL HILBERT SPACE BLOCH FUNCTIONS ON THE UNIT BALL OF AN INFINITE DIMENSIONAL HILBERT SPACE OSCAR BLASCO, PABLO GALINDO, AND ALEJANDRO MIRALLES Abstract. The Bloch space has been studied on the open unit disk of C and some

More information

Linear Algebra Review

Linear Algebra Review January 29, 2013 Table of contents Metrics Metric Given a space X, then d : X X R + 0 and z in X if: d(x, y) = 0 is equivalent to x = y d(x, y) = d(y, x) d(x, y) d(x, z) + d(z, y) is a metric is for all

More information

Mean-field dual of cooperative reproduction

Mean-field dual of cooperative reproduction The mean-field dual of systems with cooperative reproduction joint with Tibor Mach (Prague) A. Sturm (Göttingen) Friday, July 6th, 2018 Poisson construction of Markov processes Let (X t ) t 0 be a continuous-time

More information

AN EFFECTIVE METRIC ON C(H, K) WITH NORMAL STRUCTURE. Mona Nabiei (Received 23 June, 2015)

AN EFFECTIVE METRIC ON C(H, K) WITH NORMAL STRUCTURE. Mona Nabiei (Received 23 June, 2015) NEW ZEALAND JOURNAL OF MATHEMATICS Volume 46 (2016), 53-64 AN EFFECTIVE METRIC ON C(H, K) WITH NORMAL STRUCTURE Mona Nabiei (Received 23 June, 2015) Abstract. This study first defines a new metric with

More information

A NEW NONLINEAR FILTER

A NEW NONLINEAR FILTER COMMUNICATIONS IN INFORMATION AND SYSTEMS c 006 International Press Vol 6, No 3, pp 03-0, 006 004 A NEW NONLINEAR FILTER ROBERT J ELLIOTT AND SIMON HAYKIN Abstract A discrete time filter is constructed

More information

BV functions in a Gelfand triple and the stochastic reflection problem on a convex set

BV functions in a Gelfand triple and the stochastic reflection problem on a convex set BV functions in a Gelfand triple and the stochastic reflection problem on a convex set Xiangchan Zhu Joint work with Prof. Michael Röckner and Rongchan Zhu Xiangchan Zhu ( Joint work with Prof. Michael

More information

Your first day at work MATH 806 (Fall 2015)

Your first day at work MATH 806 (Fall 2015) Your first day at work MATH 806 (Fall 2015) 1. Let X be a set (with no particular algebraic structure). A function d : X X R is called a metric on X (and then X is called a metric space) when d satisfies

More information

Duality in Stability Theory: Lyapunov function and Lyapunov measure

Duality in Stability Theory: Lyapunov function and Lyapunov measure Forty-Fourth Annual Allerton Conference Allerton House, UIUC, Illinois, USA Sept 27-29, 2006 WID.150 Duality in Stability Theory: Lyapunov function and Lyapunov measure Umesh Vaidya Abstract In this paper,

More information

On continuous time contract theory

On continuous time contract theory Ecole Polytechnique, France Journée de rentrée du CMAP, 3 octobre, 218 Outline 1 2 Semimartingale measures on the canonical space Random horizon 2nd order backward SDEs (Static) Principal-Agent Problem

More information

INVERSE FUNCTION THEOREM and SURFACES IN R n

INVERSE FUNCTION THEOREM and SURFACES IN R n INVERSE FUNCTION THEOREM and SURFACES IN R n Let f C k (U; R n ), with U R n open. Assume df(a) GL(R n ), where a U. The Inverse Function Theorem says there is an open neighborhood V U of a in R n so that

More information

Learning gradients: prescriptive models

Learning gradients: prescriptive models Department of Statistical Science Institute for Genome Sciences & Policy Department of Computer Science Duke University May 11, 2007 Relevant papers Learning Coordinate Covariances via Gradients. Sayan

More information

Solutions: Problem Set 4 Math 201B, Winter 2007

Solutions: Problem Set 4 Math 201B, Winter 2007 Solutions: Problem Set 4 Math 2B, Winter 27 Problem. (a Define f : by { x /2 if < x

More information

Exercises in stochastic analysis

Exercises in stochastic analysis Exercises in stochastic analysis Franco Flandoli, Mario Maurelli, Dario Trevisan The exercises with a P are those which have been done totally or partially) in the previous lectures; the exercises with

More information

C. R. Acad. Sci. Paris, Ser. I

C. R. Acad. Sci. Paris, Ser. I JID:CRASS AID:5803 /FLA Doctopic: Mathematical analysis [m3g; v.90; Prn:/0/06; 3:58] P. (-3) C.R.Acad.Sci.Paris,Ser.I ( ) Contents lists available at ScienceDirect C. R. Acad. Sci. Paris, Ser. I www.sciencedirect.com

More information

Applied Math Qualifying Exam 11 October Instructions: Work 2 out of 3 problems in each of the 3 parts for a total of 6 problems.

Applied Math Qualifying Exam 11 October Instructions: Work 2 out of 3 problems in each of the 3 parts for a total of 6 problems. Printed Name: Signature: Applied Math Qualifying Exam 11 October 2014 Instructions: Work 2 out of 3 problems in each of the 3 parts for a total of 6 problems. 2 Part 1 (1) Let Ω be an open subset of R

More information

Applied Analysis (APPM 5440): Final exam 1:30pm 4:00pm, Dec. 14, Closed books.

Applied Analysis (APPM 5440): Final exam 1:30pm 4:00pm, Dec. 14, Closed books. Applied Analysis APPM 44: Final exam 1:3pm 4:pm, Dec. 14, 29. Closed books. Problem 1: 2p Set I = [, 1]. Prove that there is a continuous function u on I such that 1 ux 1 x sin ut 2 dt = cosx, x I. Define

More information

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces.

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces. Math 350 Fall 2011 Notes about inner product spaces In this notes we state and prove some important properties of inner product spaces. First, recall the dot product on R n : if x, y R n, say x = (x 1,...,

More information

THE INVERSE FUNCTION THEOREM

THE INVERSE FUNCTION THEOREM THE INVERSE FUNCTION THEOREM W. PATRICK HOOPER The implicit function theorem is the following result: Theorem 1. Let f be a C 1 function from a neighborhood of a point a R n into R n. Suppose A = Df(a)

More information

We use the overhead arrow to denote a column vector, i.e., a number with a direction. For example, in three-space, we write

We use the overhead arrow to denote a column vector, i.e., a number with a direction. For example, in three-space, we write 1 MATH FACTS 11 Vectors 111 Definition We use the overhead arrow to denote a column vector, ie, a number with a direction For example, in three-space, we write The elements of a vector have a graphical

More information

LECTURE 1: SOURCES OF ERRORS MATHEMATICAL TOOLS A PRIORI ERROR ESTIMATES. Sergey Korotov,

LECTURE 1: SOURCES OF ERRORS MATHEMATICAL TOOLS A PRIORI ERROR ESTIMATES. Sergey Korotov, LECTURE 1: SOURCES OF ERRORS MATHEMATICAL TOOLS A PRIORI ERROR ESTIMATES Sergey Korotov, Institute of Mathematics Helsinki University of Technology, Finland Academy of Finland 1 Main Problem in Mathematical

More information

Harnack Inequalities and Applications for Stochastic Equations

Harnack Inequalities and Applications for Stochastic Equations p. 1/32 Harnack Inequalities and Applications for Stochastic Equations PhD Thesis Defense Shun-Xiang Ouyang Under the Supervision of Prof. Michael Röckner & Prof. Feng-Yu Wang March 6, 29 p. 2/32 Outline

More information

Graph Metrics and Dimension Reduction

Graph Metrics and Dimension Reduction Graph Metrics and Dimension Reduction Minh Tang 1 Michael Trosset 2 1 Applied Mathematics and Statistics The Johns Hopkins University 2 Department of Statistics Indiana University, Bloomington November

More information

BOUNDARY VALUE PROBLEMS ON A HALF SIERPINSKI GASKET

BOUNDARY VALUE PROBLEMS ON A HALF SIERPINSKI GASKET BOUNDARY VALUE PROBLEMS ON A HALF SIERPINSKI GASKET WEILIN LI AND ROBERT S. STRICHARTZ Abstract. We study boundary value problems for the Laplacian on a domain Ω consisting of the left half of the Sierpinski

More information

An introduction to some aspects of functional analysis

An introduction to some aspects of functional analysis An introduction to some aspects of functional analysis Stephen Semmes Rice University Abstract These informal notes deal with some very basic objects in functional analysis, including norms and seminorms

More information

An introduction to Birkhoff normal form

An introduction to Birkhoff normal form An introduction to Birkhoff normal form Dario Bambusi Dipartimento di Matematica, Universitá di Milano via Saldini 50, 0133 Milano (Italy) 19.11.14 1 Introduction The aim of this note is to present an

More information

L p Spaces and Convexity

L p Spaces and Convexity L p Spaces and Convexity These notes largely follow the treatments in Royden, Real Analysis, and Rudin, Real & Complex Analysis. 1. Convex functions Let I R be an interval. For I open, we say a function

More information

Convergence Rate of Nonlinear Switched Systems

Convergence Rate of Nonlinear Switched Systems Convergence Rate of Nonlinear Switched Systems Philippe JOUAN and Saïd NACIRI arxiv:1511.01737v1 [math.oc] 5 Nov 2015 January 23, 2018 Abstract This paper is concerned with the convergence rate of the

More information

Optimal Sojourn Time Control within an Interval 1

Optimal Sojourn Time Control within an Interval 1 Optimal Sojourn Time Control within an Interval Jianghai Hu and Shankar Sastry Department of Electrical Engineering and Computer Sciences University of California at Berkeley Berkeley, CA 97-77 {jianghai,sastry}@eecs.berkeley.edu

More information

Approximate dynamic programming for stochastic reachability

Approximate dynamic programming for stochastic reachability Approximate dynamic programming for stochastic reachability Nikolaos Kariotoglou, Sean Summers, Tyler Summers, Maryam Kamgarpour and John Lygeros Abstract In this work we illustrate how approximate dynamic

More information

ON STOCHASTIC ANALYSIS APPROACHES FOR COMPARING COMPLEX SYSTEMS

ON STOCHASTIC ANALYSIS APPROACHES FOR COMPARING COMPLEX SYSTEMS Proceedings of the 44th IEEE Conference on Decision and Control, and the European Control Conference 2005 Seville, Spain, December 2-5, 2005 ThC.5 O STOCHASTIC AAYSIS APPROACHES FOR COMPARIG COMPE SYSTEMS

More information

ON THE REGULARITY OF SAMPLE PATHS OF SUB-ELLIPTIC DIFFUSIONS ON MANIFOLDS

ON THE REGULARITY OF SAMPLE PATHS OF SUB-ELLIPTIC DIFFUSIONS ON MANIFOLDS Bendikov, A. and Saloff-Coste, L. Osaka J. Math. 4 (5), 677 7 ON THE REGULARITY OF SAMPLE PATHS OF SUB-ELLIPTIC DIFFUSIONS ON MANIFOLDS ALEXANDER BENDIKOV and LAURENT SALOFF-COSTE (Received March 4, 4)

More information

Kernel Method: Data Analysis with Positive Definite Kernels

Kernel Method: Data Analysis with Positive Definite Kernels Kernel Method: Data Analysis with Positive Definite Kernels 2. Positive Definite Kernel and Reproducing Kernel Hilbert Space Kenji Fukumizu The Institute of Statistical Mathematics. Graduate University

More information

Real Analysis Math 125A, Fall 2012 Final Solutions

Real Analysis Math 125A, Fall 2012 Final Solutions Real Analysis Math 25A, Fall 202 Final Solutions. (a) Suppose that f : [0, ] R is continuous on the closed, bounded interval [0, ] and f() > 0 for every 0. Prove that the reciprocal function /f : [0, ]

More information

Wave breaking in the short-pulse equation

Wave breaking in the short-pulse equation Dynamics of PDE Vol.6 No.4 29-3 29 Wave breaking in the short-pulse equation Yue Liu Dmitry Pelinovsky and Anton Sakovich Communicated by Y. Charles Li received May 27 29. Abstract. Sufficient conditions

More information

MATHS 730 FC Lecture Notes March 5, Introduction

MATHS 730 FC Lecture Notes March 5, Introduction 1 INTRODUCTION MATHS 730 FC Lecture Notes March 5, 2014 1 Introduction Definition. If A, B are sets and there exists a bijection A B, they have the same cardinality, which we write as A, #A. If there exists

More information

1 Brownian Local Time

1 Brownian Local Time 1 Brownian Local Time We first begin by defining the space and variables for Brownian local time. Let W t be a standard 1-D Wiener process. We know that for the set, {t : W t = } P (µ{t : W t = } = ) =

More information

FOURIER INVERSION. an additive character in each of its arguments. The Fourier transform of f is

FOURIER INVERSION. an additive character in each of its arguments. The Fourier transform of f is FOURIER INVERSION 1. The Fourier Transform and the Inverse Fourier Transform Consider functions f, g : R n C, and consider the bilinear, symmetric function ψ : R n R n C, ψ(, ) = ep(2πi ), an additive

More information

Contribution from: Springer Verlag Berlin Heidelberg 2005 ISBN

Contribution from: Springer Verlag Berlin Heidelberg 2005 ISBN Contribution from: Mathematical Physics Studies Vol. 7 Perspectives in Analysis Essays in Honor of Lennart Carleson s 75th Birthday Michael Benedicks, Peter W. Jones, Stanislav Smirnov (Eds.) Springer

More information