Gain Function Approximation in the Feedback Particle Filter

Gain Function Approximation in the Feedback Particle Filter

Amirhossein Taghvaei, Prashant G. Mehta

Abstract—This paper is concerned with numerical algorithms for gain function approximation in the feedback particle filter. The exact gain function is the solution of a Poisson equation involving a probability-weighted Laplacian. The problem is to approximate this solution using only particles sampled from the probability distribution. Two algorithms are presented: a Galerkin algorithm and a kernel-based algorithm. Both algorithms are adapted to the samples and do not require approximation of the probability distribution as an intermediate step. The paper contains error analysis for the algorithms as well as some comparative numerical results for a non-Gaussian distribution. The algorithms are also applied and illustrated for a simple nonlinear filtering example.

I. INTRODUCTION

This paper is concerned with algorithms for numerically approximating the solution of a certain linear partial differential equation (PDE) that arises in the problem of nonlinear filtering. In continuous time, the filtering problem pertains to the following stochastic differential equations (SDEs):

dX_t = a(X_t) dt + dB_t,   (1a)
dZ_t = h(X_t) dt + dW_t,   (1b)

where X_t ∈ R^d is the (hidden) state at time t, Z_t ∈ R is the observation, and {B_t}, {W_t} are two mutually independent standard Wiener processes taking values in R^d and R, respectively. The mappings a(·): R^d → R^d and h(·): R^d → R are C^1 functions. Unless noted otherwise, all probability distributions are assumed to be absolutely continuous with respect to the Lebesgue measure, and therefore will be identified with their densities. The choice of scalar-valued observation (Z_t ∈ R) is made for notational ease. The objective of the filtering problem is to estimate the posterior distribution of X_t given the time history of observations (filtration) Z_t = σ(Z_s : s ≤ t).
The density of the posterior distribution is denoted by p*, so that for any measurable set A ⊂ R^d, ∫_A p*(x,t) dx = P[X_t ∈ A | Z_t]. The filter is infinite-dimensional since it defines the evolution, in the space of probability measures, of {p*(·,t) : t ≥ 0}. If a(·), h(·) are linear functions, the solution is given by the finite-dimensional Kalman-Bucy filter. The article [3] surveys numerical methods to approximate the nonlinear filter. One approach described in this survey is particle filtering.

Financial support from the NSF CMMI grants is gratefully acknowledged. A. Taghvaei and P. G. Mehta are with the Coordinated Science Laboratory and the Department of Mechanical Science and Engineering at the University of Illinois at Urbana-Champaign (UIUC). taghvae@illinois.edu; mehtapg@illinois.edu

The particle filter is a simulation-based algorithm to approximate the filtering task [14]. The key step is the construction of N stochastic processes {X_t^i : 1 ≤ i ≤ N}: the value X_t^i ∈ R^d is the state of the i-th particle at time t. For each time t, the empirical distribution formed by the particle population is used to approximate the posterior distribution. Recall that this is defined for any measurable set A ⊂ R^d by

p^(N)(A, t) = (1/N) Σ_{i=1}^N 1[X_t^i ∈ A].

A common approach in particle filtering is sequential importance sampling, where particles are generated according to their importance weight at every time step [1], [14]. In our earlier papers [17], [16], [18], an alternative feedback control-based approach to the construction of a particle filter was introduced. The resulting particle filter, referred to as the feedback particle filter (FPF), is a controlled system. The dynamics of the i-th particle have the following gain feedback form,

dX_t^i = a(X_t^i) dt + dB_t^i + K_t(X_t^i) ∘ (dZ_t − ((h(X_t^i) + ĥ_t)/2) dt),   (2)

where {B_t^i} are mutually independent standard Wiener processes and ĥ_t = E[h(X_t^i) | Z_t]. The initial condition X_0^i is drawn from the initial density p*(·,0) of X_0, independently of {B_t^i}.
Both {B_t^i} and {X_0^i} are also assumed to be independent of X_t, Z_t. The ∘ indicates that the SDE is expressed in its Stratonovich form. The gain function K_t is obtained by solving a weighted Poisson equation: for each fixed time t, the function φ is the solution to the boundary value problem

BVP:  ∇·(p(x,t) ∇φ(x,t)) = −(h(x) − ĥ) p(x,t),
      ∫ φ(x,t) p(x,t) dx = 0   (zero-mean),   (3)

for all x ∈ R^d, where ∇ and ∇· denote the gradient and the divergence operators, respectively, and p denotes the conditional density of X_t^i given Z_t. In terms of the solution φ, the gain function is given by

K_t(x) = ∇φ(x,t).

Note that the gain function K_t is vector-valued (with dimension d × 1) and it needs to be obtained for each fixed time t. For the linear Gaussian case, the gain function is the Kalman gain. For the general nonlinear non-Gaussian problem, the FPF (2) is exact, given an exact gain function and an exact initialization p(·,0) = p*(·,0).
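For intuition, in the scalar case the weighted Poisson equation (3) can be solved by direct integration on a grid. The following is a minimal numerical sketch, not part of the paper's algorithms: for a Gaussian density and h(x) = x, the resulting gain K = ∇φ should recover the variance (the Kalman gain). The grid limits and sizes are illustrative choices.

```python
import numpy as np

# Scalar-case sketch: solve (rho * phi')' = -(h - h_bar) * rho on a
# uniform grid by direct integration, then K = phi'. For a Gaussian
# density and h(x) = x, K should recover the variance (Kalman gain).
def exact_gain(x, rho, h):
    dx = x[1] - x[0]
    rho = rho / (rho.sum() * dx)          # normalize the density
    h_bar = (h * rho).sum() * dx          # mean of h under rho
    # rho(x) phi'(x) = -int_{-inf}^{x} (h(z) - h_bar) rho(z) dz
    cum = np.cumsum((h - h_bar) * rho) * dx
    return -cum / rho                     # gain K(x) = phi'(x)

x = np.linspace(-6.0, 6.0, 4001)
sigma2 = 0.5
rho = np.exp(-x**2 / (2.0 * sigma2))      # Gaussian, variance 0.5
K = exact_gain(x, rho, x)                 # h(x) = x
print(round(K[2000], 3))                  # value at x = 0, ~ sigma2
```

The same routine applied to a bimodal density produces the exact gain curves used as the reference solution in the figures below.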

Consequently, if the initial conditions {X_0^i}_{i=1}^N are drawn from the initial density p*(·,0) of X_0, then, as N → ∞, the empirical distribution of the particle system approximates the posterior density p*(·,t) for each t.

A numerical implementation of the FPF (2) requires a numerical approximation of the gain function K_t and the mean ĥ_t at each time-step. The mean is approximated empirically, ĥ_t ≈ (1/N) Σ_{i=1}^N h(X_t^i) =: ĥ_t^(N). The gain function approximation, the focus of this paper, is a challenging problem for two reasons: (i) apart from the linear Gaussian case, there are no known closed-form solutions of (3); (ii) the density p(·,t) is not explicitly known. At each time-step, one only has samples X_t^i, assumed to be i.i.d. samples from p. Apart from the FPF algorithm, the solution of the Poisson equation is also central to a number of other algorithms for nonlinear filtering [5], [15].

In our prior work, we have obtained results on existence, uniqueness and regularity of the solution to the Poisson equation, based on a certain weak formulation of the Poisson equation. The weak formulation led to a Galerkin numerical algorithm. The main limitation of the Galerkin algorithm is that it requires a pre-defined set of basis functions, which scales poorly with the dimension d of the state. The Galerkin algorithm can also exhibit certain numerical issues related to the Gibbs phenomenon. This can in turn lead to numerical instabilities in simulating the FPF.

The contributions of this paper are as follows: We present a new basis-free kernel-based algorithm for approximating the solution of the gain function. The key step is to construct a Markov matrix on a certain graph defined on the space of particles {X_t^i}_{i=1}^N. The value of the function φ at the particles, φ(X_t^i), is then approximated by solving a fixed-point problem involving the Markov matrix. The fixed-point problem is shown to be a contraction, and the method of successive approximation applies to numerically obtain the solution.
We present results on error analysis for both the Galerkin and the kernel-based method. These results are illustrated with the aid of an example involving a multi-modal distribution. Finally, the two methods are compared for a filtering problem with a non-Gaussian distribution.

In the remainder of this paper, we express the linear operator in (3) as a weighted Laplacian, Δ_ρ φ = (1/ρ) ∇·(ρ ∇φ), where additional assumptions on the density ρ appear in the main body of the paper. In recent years, this operator and the associated Markov semigroup have received considerable attention with several applications including spectral clustering, dimensionality reduction, and supervised learning [4], [10]. For a mathematical treatment, see the monographs [8], [2]. Related specifically to control theory, there are important connections with stochastic stability of Markov operators [7], [13].

The outline of this paper is as follows: the mathematical preliminaries appear in Sec. II. The Galerkin and the kernel-based algorithms for gain function approximation appear in Sec. III and Sec. IV, respectively. The nonlinear filtering example appears in Sec. V.

II. MATHEMATICAL PRELIMINARIES

The Poisson equation (3) is expressed as,

BVP:  Δ_ρ φ = −h,
      ∫ φ ρ dx = 0   (zero-mean),   (4)

where ρ is a probability density on R^d, Δ_ρ φ := (1/ρ) ∇·(ρ ∇φ), and, without loss of generality, it is assumed ĥ = ∫ h ρ dx = 0.

Problem statement: Approximate the solution φ(X^i) and ∇φ(X^i) given N independent samples X^i drawn from ρ. The density ρ is not explicitly known.

For the problem to be well-posed, definitions of the function spaces and additional assumptions on ρ and h are needed; these are enumerated next. Throughout this paper, µ is an absolutely continuous probability measure on R^d with associated density ρ. L²(R^d, µ) is the Hilbert space of square-integrable functions on R^d equipped with the inner product ⟨φ, ψ⟩ = ∫ φ(x) ψ(x) dµ(x). The associated norm is denoted ‖φ‖ = √⟨φ, φ⟩.
The space H¹(R^d, µ) is the space of square-integrable functions whose derivative (defined in the weak sense) is in L²(R^d, µ). We use L² and H¹ to denote L²(R^d, µ) and H¹(R^d, µ), respectively. For the zero-mean solution of interest, we additionally define the codimension-1 subspaces L²_0 = {φ ∈ L² : ∫ φ dµ = 0} and H¹_0 = {φ ∈ H¹ : ∫ φ dµ = 0}. L^∞ is used to denote the space of functions that are bounded a.e. (Lebesgue), and the sup-norm of a function φ ∈ L^∞ is denoted ‖φ‖_∞.

The following assumptions are made throughout the paper:

(i) Assumption A1: The probability density ρ is of the form ρ(x) = e^{−(x−µ)^T Σ^{−1}(x−µ) − V(x)}, where µ ∈ R^d, Σ is a positive-definite matrix, and V ∈ C² with V ∈ L^∞ and its derivatives DV, D²V ∈ L^∞.

(ii) Assumption A2: The function h ∈ L² and ∫ h dµ = 0.

Under Assumption A1, the density ρ admits a spectral gap (or Poincaré inequality) [2], i.e., there exists λ₁ > 0 such that

∫ |φ|² ρ dx ≤ (1/λ₁) ∫ |∇φ|² ρ dx,  ∀ φ ∈ H¹_0.   (5)

Furthermore, the spectrum is known to be discrete with an ordered sequence of eigenvalues 0 = λ₀ < λ₁ ≤ λ₂ ≤ ⋯ and associated eigenfunctions {e_n} that form a complete orthonormal basis of L² [Corollary 4..9 in [2]]. The trivial eigenvalue is λ₀ = 0, with associated eigenfunction e₀ = 1. On the subspace of zero-mean functions, the spectral decomposition yields: for φ ∈ L²_0,

−Δ_ρ φ = Σ_{m=1}^∞ λ_m ⟨e_m, φ⟩ e_m.   (6)

The spectral gap condition (5) implies that λ₁ > 0. Consequently, the semigroup

e^{t Δ_ρ} φ = Σ_{m=1}^∞ e^{−t λ_m} ⟨e_m, φ⟩ e_m   (7)

Fig. 1. The exact solution to the Poisson equation using the formula (8). The density ρ is the sum of two Gaussians N(−1, σ²) and N(+1, σ²), and h(x) = x. The density is depicted as the shaded curve in the background.

is a strict contraction on the subspace L²_0. It is also easy to see that µ is an invariant measure and ∫ e^{t Δ_ρ} φ(x) dµ(x) = ∫ φ(x) dµ(x) = 0 for all φ ∈ L²_0.

Example 1: If the density ρ is Gaussian with mean µ ∈ R^d and a positive-definite covariance matrix Σ, the spectral gap constant (1/λ₁) equals the largest eigenvalue of Σ. The eigenfunctions are the Hermite functions. Given a linear function h(x) = H·x, the unique solution of the BVP (4) is given by

φ(x) = (ΣH)·(x − µ),

where · denotes the vector dot product in R^d. In this case, K = ∇φ = ΣH is the Kalman gain.

Example 2: In the scalar case (where d = 1), the Poisson equation is:

−(1/ρ(x)) d/dx (ρ(x) dφ/dx(x)) = h(x).

Integrating twice yields the solution explicitly,

dφ/dx(x) = −(1/ρ(x)) ∫_{−∞}^{x} ρ(z) h(z) dz,
φ(x) = −∫ (1/ρ(y)) [∫_{−∞}^{y} ρ(z) h(z) dz] dy.   (8)

For the particular choice of ρ as the sum of two Gaussians N(−1, σ²) and N(+1, σ²) with σ² = 0.4 and h(x) = x, the solution obtained using (8) is depicted in Fig. 1. Since dh/dx > 0, the positivity of the gain function dφ/dx(x) follows from the maximum principle for elliptic PDEs [6].

III. GALERKIN FOR GAIN FUNCTION APPROXIMATION

Weak formulation: A function φ ∈ H¹_0 is said to be a weak solution of the Poisson equation (4) if

⟨∇φ, ∇ψ⟩ = ⟨h, ψ⟩,  ∀ ψ ∈ H¹.   (9)

It is shown in [12] that, under Assumptions (A1)-(A2), there exists a unique weak solution of the Poisson equation. The Galerkin approximation involves solving (9) in a finite-dimensional subspace S ⊂ H¹(R^d; ρ). The solution φ is approximated as

φ^(M)(x) = Σ_{m=1}^M c_m ψ_m(x),   (10)

where {ψ_m(x)}_{m=1}^M are a given set of basis functions. The finite-dimensional approximation of (9) is to choose constants {c_m}_{m=1}^M such that

⟨∇φ^(M), ∇ψ⟩ = ⟨h, ψ⟩,  ∀ ψ ∈ S,   (11)

where S = span{ψ₁, …, ψ_M} ⊂ H¹.
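The Galerkin approximation (10)-(11) reduces to a small linear system once the inner products are estimated from samples. A minimal empirical sketch for the scalar case with a monomial basis {x, x², …, x^M} (sample size, M, and the standard-Gaussian sampling density are illustrative choices):

```python
import numpy as np

# Empirical Galerkin sketch: build A_ml ~ <psi_l', psi_m'> and
# b_m ~ <h, psi_m> from samples, solve A c = b, and evaluate the
# approximate gain sum_m c_m psi_m'(x).
rng = np.random.default_rng(1)
N, M = 1000, 3
X = rng.standard_normal(N)                 # samples from rho = N(0,1)
h = X - X.mean()                           # h(x) = x, centered

Psi = np.stack([X**m for m in range(1, M + 1)])             # psi_m(X^i)
dPsi = np.stack([m * X**(m - 1) for m in range(1, M + 1)])  # psi_m'(X^i)

A = dPsi @ dPsi.T / N                      # empirical Gram matrix
b = Psi @ h / N                            # empirical right-hand side
c = np.linalg.solve(A, b)

# gain at x = 0: sum_m c_m * m * 0**(m-1) (only the m = 1 term survives)
gain_at_zero = c @ np.array([m * 0.0**(m - 1) for m in range(1, M + 1)])
print(gain_at_zero)                        # ~ 1, the Kalman gain for N(0,1)
```

For the standard Gaussian with h(x) = x, the exact solution φ(x) = x lies in the span of the basis, so the recovered gain is close to 1 up to sampling error; for multi-modal densities the same construction exhibits the Gibbs oscillations discussed below.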
Denoting [A]_{ml} = ⟨∇ψ_l, ∇ψ_m⟩, b_m = ⟨h, ψ_m⟩, and c = (c₁, c₂, …, c_M)^T, the finite-dimensional approximation (11) is expressed as a linear matrix equation:

A c = b.   (12)

In a numerical implementation, the matrix A and the vector b are approximated empirically as,

[A]_{ml} = ⟨∇ψ_l, ∇ψ_m⟩ ≈ (1/N) Σ_{i=1}^N ∇ψ_l(X^i)·∇ψ_m(X^i) =: [A^(N)]_{ml},   (13)
b_m = ⟨h, ψ_m⟩ ≈ (1/N) Σ_{i=1}^N h(X^i) ψ_m(X^i) =: b_m^(N).   (14)

The resulting solution of the matrix equation (12), with A = A^(N) and b = b^(N), is denoted c^(N):

A^(N) c^(N) = b^(N).   (15)

Using (10), we obtain the particle-based approximation of the solution:

φ^(M,N)(x) = Σ_{m=1}^M c_m^(N) ψ_m(x).   (16)

In terms of this solution, the gain function is obtained as ∇φ^(M,N)(x) = Σ_{m=1}^M c_m^(N) ∇ψ_m(x).

Convergence analysis: The following is a summary of the approximations in the Galerkin algorithm:

Weak form:  ⟨∇φ, ∇ψ⟩ = ⟨h, ψ⟩,  ∀ ψ ∈ H¹
Galerkin approx.:  ⟨∇φ^(M), ∇ψ⟩ = ⟨h, ψ⟩,  ∀ ψ ∈ S ⊂ H¹
Empirical approx.:  (1/N) Σ_{i=1}^N ∇φ^(M,N)(X^i)·∇ψ(X^i) = (1/N) Σ_{i=1}^N h(X^i) ψ(X^i),  ∀ ψ ∈ S

We are interested in the error analysis of these approximations as a function of both M and N. Note that the approximation error φ^(M) − φ^(M,N) is random because the X^i are sampled randomly from the probability distribution µ. The following Proposition provides error bounds for the

case where the basis functions are the eigenfunctions of the Laplacian. The proof appears in Appendix A.

Proposition 1: Consider the empirical Galerkin approximation of the Poisson equation (9) on the space S = span{e₁, e₂, …, e_M} of the first M eigenfunctions. Fix M < ∞. Then the solution of the matrix equation (15) exists with probability approaching 1 as N → ∞, and there is a sequence of random variables {ε_N} such that

‖φ^(M,N) − φ‖ ≤ (1/λ_M) ‖h‖ + ε_N,   (17)

where ε_N → 0 a.s. as N → ∞.

Remark 1: In practice, the eigenfunctions of the Laplacian are not known. The basis functions are typically picked from the polynomial family, e.g., the Hermite functions. In this case, the bounds provide a qualitative assessment of the error, provided the eigenfunctions associated with the first M eigenvalues are approximately in S. Quantitatively, the additional error may be bounded in terms of the projection error between the eigenfunctions and their projections onto S.

Example 3: We next revisit the bimodal distribution first introduced in Example 2. Fig. 2 depicts the empirical Galerkin approximation of the gain function, with N samples. The basis functions are from the polynomial family, {x, x², …, x^M}, and the figure depicts the gain functions with M = 1, 3, 5 modes. The (M = 1) case is referred to as the constant-gain approximation, where

dφ/dx(x) ≈ ∫ h(x) x dµ(x) = (const.)

For the linear Gaussian case, the (const.) is the Kalman gain. The plots included in the figure demonstrate the Gibbs phenomenon as more and more modes are included in the Galerkin approximation. Particularly concerning is the change in the sign of the gain, which leads to negative values of the gain for some particles, contradicting the positivity property of the gain described in Example 2. As discussed in the filtering example in Sec. V, the negative values of the gain can cause numerical issues and lead to erroneous results for the filter.

IV.
SEMIGROUP APPROXIMATION OF THE GAIN

Semigroup formulation: The semigroup formula (7) is used to construct the solution of the Poisson equation by solving the following fixed-point equation for any fixed positive value of t:

φ = e^{t Δ_ρ} φ + ∫_0^t e^{s Δ_ρ} h ds.   (18)

A unique solution exists because e^{t Δ_ρ} is a contraction on L²_0. The approximation proposed in this section involves approximating the semigroup as a perturbed integral operator, for small positive values of t = ε. The following approximation of the semigroup appears in [4], [10]:

T^(ε) φ(x) = [∫ k^(ε)(x, y) φ(y) dµ(y)] / [∫ k^(ε)(x, y) dµ(y)],   (19)

Fig. 2. Comparison of the exact solution and its empirical Galerkin approximation with M = 1, 3, 5 modes and N particles. The density is depicted as the shaded curve in the background.

where

k^(ε)(x, y) = g^(ε)(x, y) / (√(∫ g^(ε)(x, y′) dµ(y′)) √(∫ g^(ε)(x′, y) dµ(x′)))

and g^(ε)(x, y) = (4πε)^{−d/2} exp(−|x − y|²/(4ε)) is the Gaussian kernel in R^d. In terms of the perturbed integral operator, the fixed-point equation (18) becomes,

φ^(ε) = T^(ε) φ^(ε) + ∫_0^ε T^(s) h ds.   (20)

The superscript ε is used to distinguish the approximate (ε-dependent) solution from the exact solution φ. As shown in Appendix B, T^(ε) has an ergodic invariant measure µ^(ε) which approximates µ as ε ↓ 0. For any fixed positive ε, we are interested in solutions that are zero-mean with respect to this measure. The existence-uniqueness result for this solution is described next; the proof appears in Appendix B.

Proposition 2: Consider the fixed-point problem (20) with the perturbed operator T^(ε) defined according to (19). Fix ε > 0. Then there exists a unique solution φ^(ε) such that ∫ φ^(ε) dµ^(ε) = 0.

In a numerical implementation, the solution φ is approximated directly at the particles: Φ^(ε,N) = (φ^(ε)(X¹), φ^(ε)(X²), …, φ^(ε)(X^N)). The integral operator T^(ε) is approximated as an N × N Markov matrix whose (i, j) entry is obtained empirically as,

T^(ε,N)_{ij} = k^(ε,N)(X^i, X^j) / Σ_{l=1}^N k^(ε,N)(X^i, X^l),   (21)

where

k^(ε,N)(x, y) = g^(ε)(x, y) / (√(Σ_{l=1}^N g^(ε)(x, X^l)) √(Σ_{l=1}^N g^(ε)(y, X^l))).
The resulting finite-dimensional fixed-point equation is given by,

Φ^(ε,N) = T^(ε,N) Φ^(ε,N) + ∫_0^ε T^(s,N) H^(N) ds,   (22)

where Φ^(ε,N) = (Φ^(ε,N)_1, Φ^(ε,N)_2, …, Φ^(ε,N)_N) ∈ R^N is the vector-valued solution, H^(N) = (h(X¹), h(X²), …, h(X^N)) ∈ R^N, and T^(ε,N) and T^(s,N) are N × N matrices defined according to (21). The existence-uniqueness result for the zero-mean solution of the finite-dimensional equation (22) is described next; its proof is given in Appendix C. The zero-mean property is the finite-dimensional counterpart of the zero-mean condition ∫ φ dµ = 0 for the original problem and ∫ φ^(ε) dµ^(ε) = 0 for the perturbed problem.

Proposition 3: Consider the fixed-point problem (22) with the matrix T^(ε,N) defined according to (21). Then, with probability 1, there exists a unique zero-mean solution Φ^(ε,N) ∈ R^N.

Once Φ^(ε,N) is available, it is straightforward to extend it to the entire domain. For x ∈ R^d,

φ^(ε,N)(x) = [Σ_{i=1}^N k^(ε,N)(x, X^i) Φ^(ε,N)_i] / [Σ_{i=1}^N k^(ε,N)(x, X^i)] + ∫_0^ε T^(s,N) h(x) ds.   (23)

By construction, φ^(ε,N)(X^i) = Φ^(ε,N)_i for i = 1, …, N. The extension is not necessary for filtering because one only needs to solve for ∇φ at the X^i. The formula for this is,

∂φ/∂x_l(X^i) = (1/(2ε)) Σ_{j=1}^N [T^(ε,N)_{ij} Φ^(ε,N)_j (X^j_l − Σ_{k=1}^N T^(ε,N)_{ik} X^k_l)],

where X^i_l ∈ R is the l-th component of X^i ∈ R^d. The algorithm is summarized in Algorithm 1. For numerical purposes we use the approximation ∫_0^ε T^(s,N) H^(N) ds ≈ ε H^(N). Also, the fixed-point problem (22) is conveniently solved using the method of successive approximation. In filtering, the initial guess is readily available from the solution at the previous time-step.

Algorithm 1 Kernel-based gain function approximation
Input: {X^i}_{i=1}^N, H = {h(X^i)}_{i=1}^N, ε
Output: Φ = {φ(X^i)}_{i=1}^N, {∇φ(X^i)}_{i=1}^N
1: Calculate g_{ij} = exp(−|X^i − X^j|²/(4ε)) for i, j = 1 to N.
2: Calculate k_{ij} = g_{ij} / (√(Σ_l g_{il}) √(Σ_l g_{jl})) for i, j = 1 to N.
3: Calculate T_{ij} = k_{ij} / Σ_l k_{il} for i, j = 1 to N.
4: Solve Φ = T Φ + ε H for Φ (successive approximation).
5: Calculate ∂φ/∂x_l(X^i) = (1/(2ε)) Σ_j [T_{ij} Φ_j (X^j_l − Σ_k T_{ik} X^k_l)] for l = 1 to d.
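The steps of Algorithm 1 can be sketched directly in code for scalar particles. This is a minimal sketch, not a tuned implementation: the bimodal sampling, ε value, particle count, and iteration count are all illustrative choices, and the zero-mean projection inside the loop is an implementation convenience for keeping the iterate in the contraction subspace.

```python
import numpy as np

# Sketch of Algorithm 1 (kernel-based gain approximation), d = 1.
# The (4*pi*eps)^(-1/2) kernel constant cancels in the normalizations
# and is omitted.
def gain_kernel(X, h, eps, iters=2000):
    g = np.exp(-(X[:, None] - X[None, :])**2 / (4 * eps))
    s = g.sum(axis=1)
    k = g / np.sqrt(s[:, None] * s[None, :])      # symmetric normalization
    T = k / k.sum(axis=1, keepdims=True)          # Markov (row-stochastic)
    H = h - h.mean()
    Phi = np.zeros(X.size)
    for _ in range(iters):                        # successive approximation
        Phi = T @ Phi + eps * H
        Phi -= Phi.mean()                         # keep zero-mean iterate
    # grad phi(X^i) = (1/2eps) sum_j T_ij Phi_j (X^j - sum_k T_ik X^k)
    return ((T * (X[None, :] - (T @ X)[:, None])) @ Phi) / (2 * eps)

rng = np.random.default_rng(2)
N = 400
X = np.concatenate([rng.normal(-1, 0.4, N // 2),  # bimodal samples
                    rng.normal(+1, 0.4, N // 2)])
K = gain_kernel(X, X, eps=0.2)                    # h(x) = x
print(K.min() > 0)                                # gain should stay positive
```

Unlike the polynomial Galerkin approximation, the computed gain stays positive across the particles, consistent with the behavior reported for the kernel-based method below.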
Convergence: The following is a summary of the approximations with the kernel-based method:

Exact:  φ = e^{ε Δ_ρ} φ + ∫_0^ε e^{s Δ_ρ} h ds
Kernel approx.:  φ^(ε) = T^(ε) φ^(ε) + ∫_0^ε T^(s) h ds
Empirical approx.:  φ^(ε,N) = T^(ε,N) φ^(ε,N) + ∫_0^ε T^(s,N) h ds

We break the convergence analysis into two steps. The first step involves convergence of φ^(ε,N) to φ^(ε) as N → ∞. The second step involves convergence of φ^(ε) to φ as ε ↓ 0. The following theorem states the convergence result for the first step; a sketch of the proof appears in Appendix D.

Fig. 3. Comparison of the exact solution and its kernel-based approximation with ε = 0.1, 0.2, 0.4, 0.8 and N particles. The density is depicted as the shaded curve in the background.

Theorem 1: Consider the empirical kernel approximation of the fixed-point equation (18). Fix ε > 0. Then,
(i) There exists a unique (zero-mean) solution φ^(ε) for the perturbed fixed-point equation (20).
(ii) For any finite N, a unique (zero-mean) solution Φ^(ε,N) for (22) exists with probability 1. For a compact set Ω ⊂ R^d,

lim_{N→∞} sup_{x∈Ω} |φ^(ε,N)(x) − φ^(ε)(x)| = 0, a.s.,   (24)

where φ^(ε,N) is the extension of the vector-valued solution Φ^(ε,N) (see (23)).

The convergence analysis for step 2, as ε ↓ 0, is the subject of ongoing work. In this regard, it is shown in [9] that for compactly supported functions f ∈ C³,

(1/ε)(f(x) − T^(ε) f(x)) = −Δ_ρ f(x) + O(ε).

Example 4: Consider once more the bimodal distribution introduced in Example 2. Figure 3 depicts the kernel-based approximation of the gain function with N particles and a range of ε. The kernel-based method avoids the Gibbs phenomenon observed with the Galerkin approximation (see Fig. 2). Notably, the gain function is positive for any choice of ε.

V. NUMERICS

In this section, we consider a filtering problem associated with the bimodal distribution introduced in Example 2. The filtering model is given by,

dX_t = 0,
dZ_t = X_t dt + σ_W dW_t,

Fig. 4. Comparison of simulation results: trajectory of particles and posterior distribution with (a) kernel-based approximation of the gain function, (b) Galerkin approximation of the gain function.

where X_t ∈ R, Z_t ∈ R, W_t is a standard Wiener process, the initial condition X_0 is sampled from the bimodal distribution comprising two Gaussians, N(−1, σ²) and N(+1, σ²), and, without loss of generality, Z_0 = 0. As in Example 2, the observation function h(x) = x is linear. The static case is considered because the posterior is given explicitly:

p*(x, t) = (const.) exp( (1/σ_W²) h(x) Z_t − (1/(2σ_W²)) h(x)² t ) p*(x, 0).   (25)

The following filtering algorithms are implemented for comparison: 1) Kalman filter; 2) feedback particle filter with the Galerkin approximation, where S = span{x, x², x³, x⁴, x⁵}; 3) feedback particle filter with the kernel approximation. The performance metrics are as follows: 1) the filter mean X̂_t; 2) the conditional probability P[X_t > 0.5 | Z_t].

The simulation parameters are as follows: the true initial state X_0 is fixed. The measurement noise parameter σ_W = 0.3. The simulation is carried out over a finite time horizon t ∈ [0, T] with T = 0.8 and a fixed discretized time-step Δt. All the particle filters use N particles and have the same initialization, where particles are drawn with equal probability from one of the two Gaussians, N(−1, σ²) or N(+1, σ²). For the kernel approximation, we use ε = 0.5. The simulation parameters are tabulated in Table I. The Kalman filter is initialized with X̂_0 = 0 and Σ_0 = Var(X_0) = 1 + σ², which corresponds to the variance of the prior.

Figure 4, parts (a) and (b), depicts the particle trajectories and the associated distributions obtained using the kernel approximation and the Galerkin approximation, respectively. The kernel-based approximation provides a better approximation of the exact posterior.
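A minimal FPF simulation loop for this static example can be sketched with the constant-gain approximation in place of the kernel or Galerkin gains. This is an illustrative sketch only: the particle count, horizon, step size, and seed are not the paper's Table I values, and the gain uses the scalar constant-gain form K = Cov_N(h, X)/σ_W² appropriate for observation noise σ_W.

```python
import numpy as np

# Euler discretization of the FPF for dX_t = 0, dZ_t = X_t dt + sig_w dW_t,
# with the constant-gain approximation (illustrative parameters).
rng = np.random.default_rng(3)
N, T, dt, sig_w = 500, 2.0, 0.01, 0.3
x_true = 1.0
# bimodal prior: equal mixture of N(-1, 0.4^2) and N(+1, 0.4^2)
X = rng.choice([-1.0, 1.0], size=N) + 0.4 * rng.standard_normal(N)

for _ in range(int(T / dt)):
    dZ = x_true * dt + sig_w * np.sqrt(dt) * rng.standard_normal()
    h = X                                          # h(x) = x
    h_hat = h.mean()
    K = np.mean((h - h_hat) * (X - X.mean())) / sig_w**2   # constant gain
    X = X + K * (dZ - 0.5 * (h + h_hat) * dt)      # FPF update step

print(round(float(X.mean()), 2))                   # typically near x_true
```

With a linear observation function, the constant gain coincides with the Galerkin M = 1 case; replacing it with the kernel-based gain from Algorithm 1 gives the particle-dependent gains compared in the figures.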
During the initial transients, some of the particles with the Galerkin approximation show a divergence. This is a numerical issue due to the Gibbs phenomenon that leads to erroneous negative values of the gain (see the discussion in Examples 2 and 3). Figure 5 depicts a comparison of the simulation results for the two metrics. For the particle filters, these are computed empirically. For an application of FPF with kernel-based approximation of the gain function to the attitude estimation problem, see the companion paper [19].

REFERENCES

[1] A. Bain and D. Crisan. Fundamentals of Stochastic Filtering, volume 3. Springer, 2009.
[2] D. Bakry, I. Gentil, and M. Ledoux. Analysis and Geometry of Markov Diffusion Operators, volume 348. Springer Science & Business Media, 2013.
[3] A. Budhiraja, L. Chen, and C. Lee. A survey of numerical methods for nonlinear filtering problems. Physica D: Nonlinear Phenomena, 230(1):27–36, 2007.
[4] R. R. Coifman and S. Lafon. Diffusion maps. Applied and Computational Harmonic Analysis, 21(1):5–30, 2006.
[5] F. Daum, J. Huang, and A. Noushin. Particle flow for nonlinear filters. In SPIE Defense, Security, and Sensing, 2010.
[6] L. C. Evans. Partial Differential Equations. American Mathematical Society, Providence, RI, 1998.
[7] P. W. Glynn and S. P. Meyn. A Liapunov bound for solutions of the Poisson equation. The Annals of Probability, pages 916–931, 1996.
[8] A. Grigor'yan. Heat kernels on weighted manifolds and applications. Contemporary Mathematics, 398:93–191, 2006.

Fig. 5. Comparison of simulation results: (a) conditional mean of the state X_t, (b) conditional probability of X_t larger than 0.5.

[9] M. Hein, J. Audibert, and U. von Luxburg. From graphs to manifolds - weak and strong pointwise consistency of graph Laplacians. In Learning Theory. Springer, 2005.
[10] M. Hein, J. Audibert, and U. von Luxburg. Graph Laplacians and their convergence on random neighborhood graphs. arXiv preprint math/685, 2006.
[11] V. Hutson, J. Pym, and M. Cloud. Applications of Functional Analysis and Operator Theory, volume 200. Elsevier, 2005.
[12] R. S. Laugesen, P. G. Mehta, S. P. Meyn, and M. Raginsky. Poisson's equation in nonlinear filtering. SIAM Journal on Control and Optimization, 53(1):501–525, 2015.
[13] S. P. Meyn and R. L. Tweedie. Markov Chains and Stochastic Stability. Springer Science & Business Media, 2012.
[14] A. Smith, A. Doucet, N. de Freitas, and N. Gordon. Sequential Monte Carlo Methods in Practice. Springer Science & Business Media, 2013.
[15] T. Yang, H. Blom, and P. G. Mehta. The continuous-discrete time feedback particle filter. In American Control Conference (ACC). IEEE, 2014.
[16] T. Yang, P. G. Mehta, and S. P. Meyn. Feedback particle filter with mean-field coupling. In Decision and Control and European Control Conference (CDC-ECC), 50th IEEE Conference on, 2011.
[17] T. Yang, P. G. Mehta, and S. P. Meyn. A mean-field control-oriented approach to particle filtering. In American Control Conference (ACC). IEEE, 2011.
[18] T. Yang, P. G. Mehta, and S. P. Meyn. Feedback particle filter. IEEE Trans. Automatic Control, 58(10):2465–2480, October 2013.
[19] C. Zhang, A. Taghvaei, and P. G. Mehta. Attitude estimation with feedback particle filter. Available online: downloads/conferences/6_cdc_attitude_estimation_fpf.pdf.

APPENDIX

A.
Proof of Proposition 1: Using the spectral representation (6), because h ∈ L²,

φ = (−Δ_ρ)^{−1} h = Σ_{m=1}^∞ (1/λ_m) ⟨e_m, h⟩ e_m.

With basis functions as eigenfunctions,

φ^(M) = Σ_{m=1}^M (1/λ_m) ⟨e_m, h⟩ e_m.

Therefore,

‖φ − φ^(M)‖² = Σ_{m=M+1}^∞ (1/λ_m²) |⟨e_m, h⟩|² ≤ (1/λ_M²) ‖h‖².

Next, by the triangle inequality,

‖φ − φ^(M,N)‖ ≤ ‖φ − φ^(M)‖ + ‖φ^(M) − φ^(M,N)‖.

We show ‖φ^(M) − φ^(M,N)‖ → 0 a.s. as N → ∞. Using the formulae (10) and (16) with basis functions ψ_m taken as the eigenfunctions e_m,

‖φ^(M) − φ^(M,N)‖ = |c − c^(N)|,

where |·| denotes the Euclidean norm in R^M. The vectors c and c^(N) solve the matrix equations (see (12) and (15)),

A c = b,   A^(N) c^(N) = b^(N),

where A^(N) → A a.s. and b^(N) → b a.s. by the strong law of large numbers. Consequently, because A = diag(λ₁, …, λ_M) is invertible, c^(N) → c a.s. as N → ∞.

B. Proof of Proposition 2

Denote n^(ε)(x) = ∫ k^(ε)(x, y) dµ(y), and re-write the operator T^(ε) as,

T^(ε) f(x) = ∫ [k^(ε)(x, y) / (n^(ε)(x) n^(ε)(y))] f(y) dµ^(ε)(y),

where dµ^(ε)(y) = n^(ε)(y) dµ(y). Denote L²(µ^(ε)) as the space of square-integrable functions with respect to µ^(ε) and, as before, L²_0(µ^(ε)) = {φ ∈ L²(µ^(ε)) : ∫ φ dµ^(ε) = 0} is the codimension-1 subspace of mean-zero functions in L²(µ^(ε)). The technical part of proving the Proposition is to show that the operator T^(ε) is a strict contraction on this subspace.

Lemma 1: Suppose ρ, the density of the probability measure µ, satisfies Assumption A1. Then,
(i) µ^(ε) is a finite measure.
(ii) For sufficiently small values of ε, the operator T^(ε): L²(µ^(ε)) → L²(µ^(ε)) is a compact Markov operator with an invariant measure µ^(ε).

(iii) T^(ε): L²_0(µ^(ε)) → L²_0(µ^(ε)) is a strict contraction.

Proof: (i) WLOG assume µ = 0 in Assumption A1. For notational ease, denote ρ^(ε)(x) = ∫ g^(ε)(x, y) ρ(y) dy, where recall that ρ^(ε)(x) is used to define the denominator of the kernel. Then

c₁ exp(−x^T Q₁ x) ≤ ρ^(ε)(x) ≤ c₂ exp(−x^T Q₁ x),

where Q₁ = (Σ + εI)^{−1} and c₁, c₂ are positive constants that depend on ‖V‖_∞. Therefore,

µ^(ε)(R^d) = ∫∫ k^(ε)(x, y) dµ(x) dµ(y) = ∫∫ [g^(ε)(x, y) / (√ρ^(ε)(x) √ρ^(ε)(y))] ρ(x) ρ(y) dx dy ≤ c ∫∫ g^(ε)(x, y) e^{−x^T Q₂ x} e^{−y^T Q₂ y} dx dy,

which is bounded because Q₂ = Σ^{−1} − (1/2)Q₁ is positive-definite. This proves that µ^(ε) is a finite measure.

(ii) The integral operator T^(ε) is a Markov operator because the kernel k^(ε)(x, y) > 0 and

T^(ε) 1(x) = ∫ [k^(ε)(x, y)/n^(ε)(x)] dµ(y) = 1.

T^(ε) is compact because [Theorem 7.2.7 in [11]]

∫∫ [k^(ε)(x, y) / (n^(ε)(x) n^(ε)(y))]² dµ^(ε)(x) dµ^(ε)(y) < ∞.

The inequality holds because the integrand can be bounded by a Gaussian, exp(−x^T Q₃ x − y^T Q₃ y), where Q₃ (an explicit function of Σ and Q₁) is positive-definite and of order O(1). Finally, the measure µ^(ε) is an invariant measure because for all functions f,

∫ T^(ε) f(x) dµ^(ε)(x) = ∫ f(y) dµ^(ε)(y).

(iii) Since µ^(ε) is an invariant measure, T^(ε): L²_0(µ^(ε)) → L²_0(µ^(ε)). Next, for all f, g ∈ L²_0(µ^(ε)) with ∫ |f|² dµ^(ε) + ∫ |g|² dµ^(ε) = 2,

∫∫ [k^(ε)(x, y)/(n^(ε)(x) n^(ε)(y))] f(x) g(y) dµ^(ε)(x) dµ^(ε)(y) ≤ (1/2) ∫∫ [k^(ε)(x, y)/(n^(ε)(x) n^(ε)(y))] (f(x)² + g(y)²) dµ^(ε)(x) dµ^(ε)(y) ≤ 1,

where, because the kernel is everywhere positive, equality holds iff f(x) = g(y) = (const.) µ-a.e. Therefore, substituting g = T^(ε) f in the inequality above,

‖T^(ε) f‖² = ⟨T^(ε) f, T^(ε) f⟩ ≤ (1/2)(‖T^(ε) f‖² + ‖f‖²) ⟹ ‖T^(ε) f‖ ≤ ‖f‖,

where the L² norms here are with respect to the invariant measure µ^(ε). Now, for f ∈ L²_0(µ^(ε)), ∫ f dµ^(ε) = 0, and thus equality holds iff f = (const.) = 0. Therefore, T^(ε) is strictly contractive on L²_0(µ^(ε)). The proof of the Proposition now follows because T^(ε) is a contraction on L²_0(µ^(ε)). Note also that µ^(ε)(A) → µ(A) for any measurable set A, as ε ↓ 0.

C. Proof of Proposition 3

By construction, T^(ε,N) is a stochastic matrix whose entries are all positive with probability 1. The result follows.

D.
Proof sketch for Theorem 1: Parts (i) and (ii) have already been proved as part of Prop. 2 and Prop. 3, respectively. The convergence result (24) leans on approximation theory for integral operators. In [11], it is shown that if (i) T^(ε) is compact and {T^(ε,N)} is collectively compact; (ii) ‖T^(ε)‖ < 1; (iii) T^(ε,N) converges to T^(ε) pointwise, i.e.,

lim_{N→∞} ‖T^(ε,N) f − T^(ε) f‖_∞ = 0 a.s., ∀ f,

then, for N large enough, (I − T^(ε,N))^{−1} is bounded and

lim_{N→∞} ‖(I − T^(ε,N))^{−1} h − (I − T^(ε))^{−1} h‖_∞ = 0, a.s., ∀ h.

In order to prove the convergence result, we consider the Banach space C_0(Ω) = {f ∈ C(Ω) : ∫ f dµ(x) = 0} equipped with the norm ‖·‖_∞. We consider T^(ε) and T^(ε,N) as linear operators on C_0(Ω). Note that T^(ε,N) here corresponds to the extension of the vector-valued solution to R^d using (23). The proof involves verification of each of the three requirements stated above:

(i) Collective compactness follows because k^(ε) is continuous [Theorem 7.2.6 in [11]].
(ii) The norm condition holds because

|T^(ε) f(x)| ≤ [∫ k^(ε)(x, y) |f(y)| dµ(y)] / [∫ k^(ε)(x, y) dµ(y)] ≤ ‖f‖_∞,

where the equality holds only when f is constant. For f ∈ C_0(Ω), this constant can only be 0.
(iii) Pointwise convergence follows from the LLN. For a fixed continuous function f, the LLN implies convergence for every fixed x ∈ R^d. On a compact set Ω, pointwise convergence also implies uniform convergence.


Nonlinear error dynamics for cycled data assimilation methods

Nonlinear error dynamics for cycled data assimilation methods Nonlinear error dynamics for cycled data assimilation methods A J F Moodey 1, A S Lawless 1,2, P J van Leeuwen 2, R W E Potthast 1,3 1 Department of Mathematics and Statistics, University of Reading, UK.

More information

Lecture 12: Detailed balance and Eigenfunction methods

Lecture 12: Detailed balance and Eigenfunction methods Miranda Holmes-Cerfon Applied Stochastic Analysis, Spring 2015 Lecture 12: Detailed balance and Eigenfunction methods Readings Recommended: Pavliotis [2014] 4.5-4.7 (eigenfunction methods and reversibility),

More information

Piecewise Smooth Solutions to the Burgers-Hilbert Equation

Piecewise Smooth Solutions to the Burgers-Hilbert Equation Piecewise Smooth Solutions to the Burgers-Hilbert Equation Alberto Bressan and Tianyou Zhang Department of Mathematics, Penn State University, University Park, Pa 68, USA e-mails: bressan@mathpsuedu, zhang

More information

On compact operators

On compact operators On compact operators Alen Aleanderian * Abstract In this basic note, we consider some basic properties of compact operators. We also consider the spectrum of compact operators on Hilbert spaces. A basic

More information

16 1 Basic Facts from Functional Analysis and Banach Lattices

16 1 Basic Facts from Functional Analysis and Banach Lattices 16 1 Basic Facts from Functional Analysis and Banach Lattices 1.2.3 Banach Steinhaus Theorem Another fundamental theorem of functional analysis is the Banach Steinhaus theorem, or the Uniform Boundedness

More information

Sequential Monte Carlo Methods for Bayesian Computation

Sequential Monte Carlo Methods for Bayesian Computation Sequential Monte Carlo Methods for Bayesian Computation A. Doucet Kyoto Sept. 2012 A. Doucet (MLSS Sept. 2012) Sept. 2012 1 / 136 Motivating Example 1: Generic Bayesian Model Let X be a vector parameter

More information

ON THE POLICY IMPROVEMENT ALGORITHM IN CONTINUOUS TIME

ON THE POLICY IMPROVEMENT ALGORITHM IN CONTINUOUS TIME ON THE POLICY IMPROVEMENT ALGORITHM IN CONTINUOUS TIME SAUL D. JACKA AND ALEKSANDAR MIJATOVIĆ Abstract. We develop a general approach to the Policy Improvement Algorithm (PIA) for stochastic control problems

More information

Multi-dimensional Feedback Particle Filter for Coupled Oscillators

Multi-dimensional Feedback Particle Filter for Coupled Oscillators 3 American Control Conference (ACC) Washington, DC, USA, June 7-9, 3 Multi-dimensional Feedback Particle Filter for Coupled Oscillators Adam K. Tilton, Prashant G. Mehta and Sean P. Meyn Abstract This

More information

Strong uniqueness for stochastic evolution equations with possibly unbounded measurable drift term

Strong uniqueness for stochastic evolution equations with possibly unbounded measurable drift term 1 Strong uniqueness for stochastic evolution equations with possibly unbounded measurable drift term Enrico Priola Torino (Italy) Joint work with G. Da Prato, F. Flandoli and M. Röckner Stochastic Processes

More information

Math The Laplacian. 1 Green s Identities, Fundamental Solution

Math The Laplacian. 1 Green s Identities, Fundamental Solution Math. 209 The Laplacian Green s Identities, Fundamental Solution Let be a bounded open set in R n, n 2, with smooth boundary. The fact that the boundary is smooth means that at each point x the external

More information

Brownian Motion. 1 Definition Brownian Motion Wiener measure... 3

Brownian Motion. 1 Definition Brownian Motion Wiener measure... 3 Brownian Motion Contents 1 Definition 2 1.1 Brownian Motion................................. 2 1.2 Wiener measure.................................. 3 2 Construction 4 2.1 Gaussian process.................................

More information

c 2014 Shane T. Ghiotto

c 2014 Shane T. Ghiotto c 2014 Shane T. Ghiotto COMPARISON OF NONLINEAR FILTERING TECHNIQUES BY SHANE T. GHIOTTO THESIS Submitted in partial fulfillment of the requirements for the degree of Master of Science in Mechanical Engineering

More information

Tools from Lebesgue integration

Tools from Lebesgue integration Tools from Lebesgue integration E.P. van den Ban Fall 2005 Introduction In these notes we describe some of the basic tools from the theory of Lebesgue integration. Definitions and results will be given

More information

EVALUATING SYMMETRIC INFORMATION GAP BETWEEN DYNAMICAL SYSTEMS USING PARTICLE FILTER

EVALUATING SYMMETRIC INFORMATION GAP BETWEEN DYNAMICAL SYSTEMS USING PARTICLE FILTER EVALUATING SYMMETRIC INFORMATION GAP BETWEEN DYNAMICAL SYSTEMS USING PARTICLE FILTER Zhen Zhen 1, Jun Young Lee 2, and Abdus Saboor 3 1 Mingde College, Guizhou University, China zhenz2000@21cn.com 2 Department

More information

March 13, Paper: R.R. Coifman, S. Lafon, Diffusion maps ([Coifman06]) Seminar: Learning with Graphs, Prof. Hein, Saarland University

March 13, Paper: R.R. Coifman, S. Lafon, Diffusion maps ([Coifman06]) Seminar: Learning with Graphs, Prof. Hein, Saarland University Kernels March 13, 2008 Paper: R.R. Coifman, S. Lafon, maps ([Coifman06]) Seminar: Learning with Graphs, Prof. Hein, Saarland University Kernels Figure: Example Application from [LafonWWW] meaningful geometric

More information

Problem Set 6: Solutions Math 201A: Fall a n x n,

Problem Set 6: Solutions Math 201A: Fall a n x n, Problem Set 6: Solutions Math 201A: Fall 2016 Problem 1. Is (x n ) n=0 a Schauder basis of C([0, 1])? No. If f(x) = a n x n, n=0 where the series converges uniformly on [0, 1], then f has a power series

More information

Kernels A Machine Learning Overview

Kernels A Machine Learning Overview Kernels A Machine Learning Overview S.V.N. Vishy Vishwanathan vishy@axiom.anu.edu.au National ICT of Australia and Australian National University Thanks to Alex Smola, Stéphane Canu, Mike Jordan and Peter

More information

The goal of this chapter is to study linear systems of ordinary differential equations: dt,..., dx ) T

The goal of this chapter is to study linear systems of ordinary differential equations: dt,..., dx ) T 1 1 Linear Systems The goal of this chapter is to study linear systems of ordinary differential equations: ẋ = Ax, x(0) = x 0, (1) where x R n, A is an n n matrix and ẋ = dx ( dt = dx1 dt,..., dx ) T n.

More information

Cores for generators of some Markov semigroups

Cores for generators of some Markov semigroups Cores for generators of some Markov semigroups Giuseppe Da Prato, Scuola Normale Superiore di Pisa, Italy and Michael Röckner Faculty of Mathematics, University of Bielefeld, Germany and Department of

More information

Weakly interacting particle systems on graphs: from dense to sparse

Weakly interacting particle systems on graphs: from dense to sparse Weakly interacting particle systems on graphs: from dense to sparse Ruoyu Wu University of Michigan (Based on joint works with Daniel Lacker and Kavita Ramanan) USC Math Finance Colloquium October 29,

More information

Wiener Measure and Brownian Motion

Wiener Measure and Brownian Motion Chapter 16 Wiener Measure and Brownian Motion Diffusion of particles is a product of their apparently random motion. The density u(t, x) of diffusing particles satisfies the diffusion equation (16.1) u

More information

Uniformly Uniformly-ergodic Markov chains and BSDEs

Uniformly Uniformly-ergodic Markov chains and BSDEs Uniformly Uniformly-ergodic Markov chains and BSDEs Samuel N. Cohen Mathematical Institute, University of Oxford (Based on joint work with Ying Hu, Robert Elliott, Lukas Szpruch) Centre Henri Lebesgue,

More information

On the bang-bang property of time optimal controls for infinite dimensional linear systems

On the bang-bang property of time optimal controls for infinite dimensional linear systems On the bang-bang property of time optimal controls for infinite dimensional linear systems Marius Tucsnak Université de Lorraine Paris, 6 janvier 2012 Notation and problem statement (I) Notation: X (the

More information

These slides follow closely the (English) course textbook Pattern Recognition and Machine Learning by Christopher Bishop

These slides follow closely the (English) course textbook Pattern Recognition and Machine Learning by Christopher Bishop Music and Machine Learning (IFT68 Winter 8) Prof. Douglas Eck, Université de Montréal These slides follow closely the (English) course textbook Pattern Recognition and Machine Learning by Christopher Bishop

More information

1 Math 241A-B Homework Problem List for F2015 and W2016

1 Math 241A-B Homework Problem List for F2015 and W2016 1 Math 241A-B Homework Problem List for F2015 W2016 1.1 Homework 1. Due Wednesday, October 7, 2015 Notation 1.1 Let U be any set, g be a positive function on U, Y be a normed space. For any f : U Y let

More information

Some SDEs with distributional drift Part I : General calculus. Flandoli, Franco; Russo, Francesco; Wolf, Jochen

Some SDEs with distributional drift Part I : General calculus. Flandoli, Franco; Russo, Francesco; Wolf, Jochen Title Author(s) Some SDEs with distributional drift Part I : General calculus Flandoli, Franco; Russo, Francesco; Wolf, Jochen Citation Osaka Journal of Mathematics. 4() P.493-P.54 Issue Date 3-6 Text

More information

Convergence of the Ensemble Kalman Filter in Hilbert Space

Convergence of the Ensemble Kalman Filter in Hilbert Space Convergence of the Ensemble Kalman Filter in Hilbert Space Jan Mandel Center for Computational Mathematics Department of Mathematical and Statistical Sciences University of Colorado Denver Parts based

More information

Finite-dimensional spaces. C n is the space of n-tuples x = (x 1,..., x n ) of complex numbers. It is a Hilbert space with the inner product

Finite-dimensional spaces. C n is the space of n-tuples x = (x 1,..., x n ) of complex numbers. It is a Hilbert space with the inner product Chapter 4 Hilbert Spaces 4.1 Inner Product Spaces Inner Product Space. A complex vector space E is called an inner product space (or a pre-hilbert space, or a unitary space) if there is a mapping (, )

More information

Stability of Feedback Solutions for Infinite Horizon Noncooperative Differential Games

Stability of Feedback Solutions for Infinite Horizon Noncooperative Differential Games Stability of Feedback Solutions for Infinite Horizon Noncooperative Differential Games Alberto Bressan ) and Khai T. Nguyen ) *) Department of Mathematics, Penn State University **) Department of Mathematics,

More information

Statistical Geometry Processing Winter Semester 2011/2012

Statistical Geometry Processing Winter Semester 2011/2012 Statistical Geometry Processing Winter Semester 2011/2012 Linear Algebra, Function Spaces & Inverse Problems Vector and Function Spaces 3 Vectors vectors are arrows in space classically: 2 or 3 dim. Euclidian

More information

Kolmogorov equations in Hilbert spaces IV

Kolmogorov equations in Hilbert spaces IV March 26, 2010 Other types of equations Let us consider the Burgers equation in = L 2 (0, 1) dx(t) = (AX(t) + b(x(t))dt + dw (t) X(0) = x, (19) where A = ξ 2, D(A) = 2 (0, 1) 0 1 (0, 1), b(x) = ξ 2 (x

More information

Functional Analysis I

Functional Analysis I Functional Analysis I Course Notes by Stefan Richter Transcribed and Annotated by Gregory Zitelli Polar Decomposition Definition. An operator W B(H) is called a partial isometry if W x = X for all x (ker

More information

Backward Stochastic Differential Equations with Infinite Time Horizon

Backward Stochastic Differential Equations with Infinite Time Horizon Backward Stochastic Differential Equations with Infinite Time Horizon Holger Metzler PhD advisor: Prof. G. Tessitore Università di Milano-Bicocca Spring School Stochastic Control in Finance Roscoff, March

More information

Spectral theory for compact operators on Banach spaces

Spectral theory for compact operators on Banach spaces 68 Chapter 9 Spectral theory for compact operators on Banach spaces Recall that a subset S of a metric space X is precompact if its closure is compact, or equivalently every sequence contains a Cauchy

More information

EECS 598: Statistical Learning Theory, Winter 2014 Topic 11. Kernels

EECS 598: Statistical Learning Theory, Winter 2014 Topic 11. Kernels EECS 598: Statistical Learning Theory, Winter 2014 Topic 11 Kernels Lecturer: Clayton Scott Scribe: Jun Guo, Soumik Chatterjee Disclaimer: These notes have not been subjected to the usual scrutiny reserved

More information

Continuous Functions on Metric Spaces

Continuous Functions on Metric Spaces Continuous Functions on Metric Spaces Math 201A, Fall 2016 1 Continuous functions Definition 1. Let (X, d X ) and (Y, d Y ) be metric spaces. A function f : X Y is continuous at a X if for every ɛ > 0

More information

Lecture 4 Lebesgue spaces and inequalities

Lecture 4 Lebesgue spaces and inequalities Lecture 4: Lebesgue spaces and inequalities 1 of 10 Course: Theory of Probability I Term: Fall 2013 Instructor: Gordan Zitkovic Lecture 4 Lebesgue spaces and inequalities Lebesgue spaces We have seen how

More information

2) Let X be a compact space. Prove that the space C(X) of continuous real-valued functions is a complete metric space.

2) Let X be a compact space. Prove that the space C(X) of continuous real-valued functions is a complete metric space. University of Bergen General Functional Analysis Problems with solutions 6 ) Prove that is unique in any normed space. Solution of ) Let us suppose that there are 2 zeros and 2. Then = + 2 = 2 + = 2. 2)

More information

Geometry on Probability Spaces

Geometry on Probability Spaces Geometry on Probability Spaces Steve Smale Toyota Technological Institute at Chicago 427 East 60th Street, Chicago, IL 60637, USA E-mail: smale@math.berkeley.edu Ding-Xuan Zhou Department of Mathematics,

More information

1 Brownian Local Time

1 Brownian Local Time 1 Brownian Local Time We first begin by defining the space and variables for Brownian local time. Let W t be a standard 1-D Wiener process. We know that for the set, {t : W t = } P (µ{t : W t = } = ) =

More information

MET Workshop: Exercises

MET Workshop: Exercises MET Workshop: Exercises Alex Blumenthal and Anthony Quas May 7, 206 Notation. R d is endowed with the standard inner product (, ) and Euclidean norm. M d d (R) denotes the space of n n real matrices. When

More information

Statistics 612: L p spaces, metrics on spaces of probabilites, and connections to estimation

Statistics 612: L p spaces, metrics on spaces of probabilites, and connections to estimation Statistics 62: L p spaces, metrics on spaces of probabilites, and connections to estimation Moulinath Banerjee December 6, 2006 L p spaces and Hilbert spaces We first formally define L p spaces. Consider

More information

Exercises in stochastic analysis

Exercises in stochastic analysis Exercises in stochastic analysis Franco Flandoli, Mario Maurelli, Dario Trevisan The exercises with a P are those which have been done totally or partially) in the previous lectures; the exercises with

More information

Examples include: (a) the Lorenz system for climate and weather modeling (b) the Hodgkin-Huxley system for neuron modeling

Examples include: (a) the Lorenz system for climate and weather modeling (b) the Hodgkin-Huxley system for neuron modeling 1 Introduction Many natural processes can be viewed as dynamical systems, where the system is represented by a set of state variables and its evolution governed by a set of differential equations. Examples

More information

Locally convex spaces, the hyperplane separation theorem, and the Krein-Milman theorem

Locally convex spaces, the hyperplane separation theorem, and the Krein-Milman theorem 56 Chapter 7 Locally convex spaces, the hyperplane separation theorem, and the Krein-Milman theorem Recall that C(X) is not a normed linear space when X is not compact. On the other hand we could use semi

More information

C. R. Acad. Sci. Paris, Ser. I

C. R. Acad. Sci. Paris, Ser. I JID:CRASS AID:5803 /FLA Doctopic: Mathematical analysis [m3g; v.90; Prn:/0/06; 3:58] P. (-3) C.R.Acad.Sci.Paris,Ser.I ( ) Contents lists available at ScienceDirect C. R. Acad. Sci. Paris, Ser. I www.sciencedirect.com

More information

FIXED POINT METHODS IN NONLINEAR ANALYSIS

FIXED POINT METHODS IN NONLINEAR ANALYSIS FIXED POINT METHODS IN NONLINEAR ANALYSIS ZACHARY SMITH Abstract. In this paper we present a selection of fixed point theorems with applications in nonlinear analysis. We begin with the Banach fixed point

More information

A NEW NONLINEAR FILTER

A NEW NONLINEAR FILTER COMMUNICATIONS IN INFORMATION AND SYSTEMS c 006 International Press Vol 6, No 3, pp 03-0, 006 004 A NEW NONLINEAR FILTER ROBERT J ELLIOTT AND SIMON HAYKIN Abstract A discrete time filter is constructed

More information

AN EFFECTIVE METRIC ON C(H, K) WITH NORMAL STRUCTURE. Mona Nabiei (Received 23 June, 2015)

AN EFFECTIVE METRIC ON C(H, K) WITH NORMAL STRUCTURE. Mona Nabiei (Received 23 June, 2015) NEW ZEALAND JOURNAL OF MATHEMATICS Volume 46 (2016), 53-64 AN EFFECTIVE METRIC ON C(H, K) WITH NORMAL STRUCTURE Mona Nabiei (Received 23 June, 2015) Abstract. This study first defines a new metric with

More information

The Dirichlet-to-Neumann operator

The Dirichlet-to-Neumann operator Lecture 8 The Dirichlet-to-Neumann operator The Dirichlet-to-Neumann operator plays an important role in the theory of inverse problems. In fact, from measurements of electrical currents at the surface

More information

Properties of the Scattering Transform on the Real Line

Properties of the Scattering Transform on the Real Line Journal of Mathematical Analysis and Applications 58, 3 43 (001 doi:10.1006/jmaa.000.7375, available online at http://www.idealibrary.com on Properties of the Scattering Transform on the Real Line Michael

More information

HOPF S DECOMPOSITION AND RECURRENT SEMIGROUPS. Josef Teichmann

HOPF S DECOMPOSITION AND RECURRENT SEMIGROUPS. Josef Teichmann HOPF S DECOMPOSITION AND RECURRENT SEMIGROUPS Josef Teichmann Abstract. Some results of ergodic theory are generalized in the setting of Banach lattices, namely Hopf s maximal ergodic inequality and the

More information

Applied Math Qualifying Exam 11 October Instructions: Work 2 out of 3 problems in each of the 3 parts for a total of 6 problems.

Applied Math Qualifying Exam 11 October Instructions: Work 2 out of 3 problems in each of the 3 parts for a total of 6 problems. Printed Name: Signature: Applied Math Qualifying Exam 11 October 2014 Instructions: Work 2 out of 3 problems in each of the 3 parts for a total of 6 problems. 2 Part 1 (1) Let Ω be an open subset of R

More information

Learning gradients: prescriptive models

Learning gradients: prescriptive models Department of Statistical Science Institute for Genome Sciences & Policy Department of Computer Science Duke University May 11, 2007 Relevant papers Learning Coordinate Covariances via Gradients. Sayan

More information

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS PROBABILITY: LIMIT THEOREMS II, SPRING 218. HOMEWORK PROBLEMS PROF. YURI BAKHTIN Instructions. You are allowed to work on solutions in groups, but you are required to write up solutions on your own. Please

More information

SPECTRAL PROPERTIES OF THE LAPLACIAN ON BOUNDED DOMAINS

SPECTRAL PROPERTIES OF THE LAPLACIAN ON BOUNDED DOMAINS SPECTRAL PROPERTIES OF THE LAPLACIAN ON BOUNDED DOMAINS TSOGTGEREL GANTUMUR Abstract. After establishing discrete spectra for a large class of elliptic operators, we present some fundamental spectral properties

More information

Gaussian Processes. 1. Basic Notions

Gaussian Processes. 1. Basic Notions Gaussian Processes 1. Basic Notions Let T be a set, and X : {X } T a stochastic process, defined on a suitable probability space (Ω P), that is indexed by T. Definition 1.1. We say that X is a Gaussian

More information

The following definition is fundamental.

The following definition is fundamental. 1. Some Basics from Linear Algebra With these notes, I will try and clarify certain topics that I only quickly mention in class. First and foremost, I will assume that you are familiar with many basic

More information

NUMERICAL COMPUTATION OF THE CAPACITY OF CONTINUOUS MEMORYLESS CHANNELS

NUMERICAL COMPUTATION OF THE CAPACITY OF CONTINUOUS MEMORYLESS CHANNELS NUMERICAL COMPUTATION OF THE CAPACITY OF CONTINUOUS MEMORYLESS CHANNELS Justin Dauwels Dept. of Information Technology and Electrical Engineering ETH, CH-8092 Zürich, Switzerland dauwels@isi.ee.ethz.ch

More information

Measure Theory on Topological Spaces. Course: Prof. Tony Dorlas 2010 Typset: Cathal Ormond

Measure Theory on Topological Spaces. Course: Prof. Tony Dorlas 2010 Typset: Cathal Ormond Measure Theory on Topological Spaces Course: Prof. Tony Dorlas 2010 Typset: Cathal Ormond May 22, 2011 Contents 1 Introduction 2 1.1 The Riemann Integral........................................ 2 1.2 Measurable..............................................

More information

Optimal Sojourn Time Control within an Interval 1

Optimal Sojourn Time Control within an Interval 1 Optimal Sojourn Time Control within an Interval Jianghai Hu and Shankar Sastry Department of Electrical Engineering and Computer Sciences University of California at Berkeley Berkeley, CA 97-77 {jianghai,sastry}@eecs.berkeley.edu

More information

Markov Chains, Random Walks on Graphs, and the Laplacian

Markov Chains, Random Walks on Graphs, and the Laplacian Markov Chains, Random Walks on Graphs, and the Laplacian CMPSCI 791BB: Advanced ML Sridhar Mahadevan Random Walks! There is significant interest in the problem of random walks! Markov chain analysis! Computer

More information

Lecture 12: Detailed balance and Eigenfunction methods

Lecture 12: Detailed balance and Eigenfunction methods Lecture 12: Detailed balance and Eigenfunction methods Readings Recommended: Pavliotis [2014] 4.5-4.7 (eigenfunction methods and reversibility), 4.2-4.4 (explicit examples of eigenfunction methods) Gardiner

More information

MULTIPLE SOLUTIONS FOR AN INDEFINITE KIRCHHOFF-TYPE EQUATION WITH SIGN-CHANGING POTENTIAL

MULTIPLE SOLUTIONS FOR AN INDEFINITE KIRCHHOFF-TYPE EQUATION WITH SIGN-CHANGING POTENTIAL Electronic Journal of Differential Equations, Vol. 2015 (2015), o. 274, pp. 1 9. ISS: 1072-6691. URL: http://ejde.math.txstate.edu or http://ejde.math.unt.edu ftp ejde.math.txstate.edu MULTIPLE SOLUTIOS

More information

A regeneration proof of the central limit theorem for uniformly ergodic Markov chains

A regeneration proof of the central limit theorem for uniformly ergodic Markov chains A regeneration proof of the central limit theorem for uniformly ergodic Markov chains By AJAY JASRA Department of Mathematics, Imperial College London, SW7 2AZ, London, UK and CHAO YANG Department of Mathematics,

More information

SPECTRAL THEOREM FOR SYMMETRIC OPERATORS WITH COMPACT RESOLVENT

SPECTRAL THEOREM FOR SYMMETRIC OPERATORS WITH COMPACT RESOLVENT SPECTRAL THEOREM FOR SYMMETRIC OPERATORS WITH COMPACT RESOLVENT Abstract. These are the letcure notes prepared for the workshop on Functional Analysis and Operator Algebras to be held at NIT-Karnataka,

More information

LECTURE 1: SOURCES OF ERRORS MATHEMATICAL TOOLS A PRIORI ERROR ESTIMATES. Sergey Korotov,

LECTURE 1: SOURCES OF ERRORS MATHEMATICAL TOOLS A PRIORI ERROR ESTIMATES. Sergey Korotov, LECTURE 1: SOURCES OF ERRORS MATHEMATICAL TOOLS A PRIORI ERROR ESTIMATES Sergey Korotov, Institute of Mathematics Helsinki University of Technology, Finland Academy of Finland 1 Main Problem in Mathematical

More information

An introduction to Birkhoff normal form

An introduction to Birkhoff normal form An introduction to Birkhoff normal form Dario Bambusi Dipartimento di Matematica, Universitá di Milano via Saldini 50, 0133 Milano (Italy) 19.11.14 1 Introduction The aim of this note is to present an

More information

Particle Filters: Convergence Results and High Dimensions

Particle Filters: Convergence Results and High Dimensions Particle Filters: Convergence Results and High Dimensions Mark Coates mark.coates@mcgill.ca McGill University Department of Electrical and Computer Engineering Montreal, Quebec, Canada Bellairs 2012 Outline

More information

ACM/CMS 107 Linear Analysis & Applications Fall 2017 Assignment 2: PDEs and Finite Element Methods Due: 7th November 2017

ACM/CMS 107 Linear Analysis & Applications Fall 2017 Assignment 2: PDEs and Finite Element Methods Due: 7th November 2017 ACM/CMS 17 Linear Analysis & Applications Fall 217 Assignment 2: PDEs and Finite Element Methods Due: 7th November 217 For this assignment the following MATLAB code will be required: Introduction http://wwwmdunloporg/cms17/assignment2zip

More information

arxiv: v2 [math.pr] 27 Oct 2015

arxiv: v2 [math.pr] 27 Oct 2015 A brief note on the Karhunen-Loève expansion Alen Alexanderian arxiv:1509.07526v2 [math.pr] 27 Oct 2015 October 28, 2015 Abstract We provide a detailed derivation of the Karhunen Loève expansion of a stochastic

More information

Overview of normed linear spaces

Overview of normed linear spaces 20 Chapter 2 Overview of normed linear spaces Starting from this chapter, we begin examining linear spaces with at least one extra structure (topology or geometry). We assume linearity; this is a natural

More information

Brownian Motion and Conditional Probability

Brownian Motion and Conditional Probability Math 561: Theory of Probability (Spring 2018) Week 10 Brownian Motion and Conditional Probability 10.1 Standard Brownian Motion (SBM) Brownian motion is a stochastic process with both practical and theoretical

More information

THE L 2 -HODGE THEORY AND REPRESENTATION ON R n

THE L 2 -HODGE THEORY AND REPRESENTATION ON R n THE L 2 -HODGE THEORY AND REPRESENTATION ON R n BAISHENG YAN Abstract. We present an elementary L 2 -Hodge theory on whole R n based on the minimization principle of the calculus of variations and some

More information

Harnack Inequalities and Applications for Stochastic Equations

Harnack Inequalities and Applications for Stochastic Equations p. 1/32 Harnack Inequalities and Applications for Stochastic Equations PhD Thesis Defense Shun-Xiang Ouyang Under the Supervision of Prof. Michael Röckner & Prof. Feng-Yu Wang March 6, 29 p. 2/32 Outline

More information

Dynamical systems with Gaussian and Levy noise: analytical and stochastic approaches

Dynamical systems with Gaussian and Levy noise: analytical and stochastic approaches Dynamical systems with Gaussian and Levy noise: analytical and stochastic approaches Noise is often considered as some disturbing component of the system. In particular physical situations, noise becomes

More information

Mean-field dual of cooperative reproduction

Mean-field dual of cooperative reproduction The mean-field dual of systems with cooperative reproduction joint with Tibor Mach (Prague) A. Sturm (Göttingen) Friday, July 6th, 2018 Poisson construction of Markov processes Let (X t ) t 0 be a continuous-time

More information

ON THE REGULARITY OF SAMPLE PATHS OF SUB-ELLIPTIC DIFFUSIONS ON MANIFOLDS

ON THE REGULARITY OF SAMPLE PATHS OF SUB-ELLIPTIC DIFFUSIONS ON MANIFOLDS Bendikov, A. and Saloff-Coste, L. Osaka J. Math. 4 (5), 677 7 ON THE REGULARITY OF SAMPLE PATHS OF SUB-ELLIPTIC DIFFUSIONS ON MANIFOLDS ALEXANDER BENDIKOV and LAURENT SALOFF-COSTE (Received March 4, 4)

More information

Convergence Rate of Nonlinear Switched Systems

Convergence Rate of Nonlinear Switched Systems Convergence Rate of Nonlinear Switched Systems Philippe JOUAN and Saïd NACIRI arxiv:1511.01737v1 [math.oc] 5 Nov 2015 January 23, 2018 Abstract This paper is concerned with the convergence rate of the

More information

Linear Algebra Review

Linear Algebra Review January 29, 2013 Table of contents Metrics Metric Given a space X, then d : X X R + 0 and z in X if: d(x, y) = 0 is equivalent to x = y d(x, y) = d(y, x) d(x, y) d(x, z) + d(z, y) is a metric is for all

More information

Course 212: Academic Year Section 1: Metric Spaces

Course 212: Academic Year Section 1: Metric Spaces Course 212: Academic Year 1991-2 Section 1: Metric Spaces D. R. Wilkins Contents 1 Metric Spaces 3 1.1 Distance Functions and Metric Spaces............. 3 1.2 Convergence and Continuity in Metric Spaces.........

More information

Proof: The coding of T (x) is the left shift of the coding of x. φ(t x) n = L if T n+1 (x) L

Proof: The coding of T (x) is the left shift of the coding of x. φ(t x) n = L if T n+1 (x) L Lecture 24: Defn: Topological conjugacy: Given Z + d (resp, Zd ), actions T, S a topological conjugacy from T to S is a homeomorphism φ : M N s.t. φ T = S φ i.e., φ T n = S n φ for all n Z + d (resp, Zd

More information

BOUNDARY VALUE PROBLEMS ON A HALF SIERPINSKI GASKET

BOUNDARY VALUE PROBLEMS ON A HALF SIERPINSKI GASKET BOUNDARY VALUE PROBLEMS ON A HALF SIERPINSKI GASKET WEILIN LI AND ROBERT S. STRICHARTZ Abstract. We study boundary value problems for the Laplacian on a domain Ω consisting of the left half of the Sierpinski

More information

Lecture 6: Bayesian Inference in SDE Models

Lecture 6: Bayesian Inference in SDE Models Lecture 6: Bayesian Inference in SDE Models Bayesian Filtering and Smoothing Point of View Simo Särkkä Aalto University Simo Särkkä (Aalto) Lecture 6: Bayesian Inference in SDEs 1 / 45 Contents 1 SDEs

More information

Functional Analysis HW #3

Functional Analysis HW #3 Functional Analysis HW #3 Sangchul Lee October 26, 2015 1 Solutions Exercise 2.1. Let D = { f C([0, 1]) : f C([0, 1])} and define f d = f + f. Show that D is a Banach algebra and that the Gelfand transform

More information

BLOCH FUNCTIONS ON THE UNIT BALL OF AN INFINITE DIMENSIONAL HILBERT SPACE

BLOCH FUNCTIONS ON THE UNIT BALL OF AN INFINITE DIMENSIONAL HILBERT SPACE BLOCH FUNCTIONS ON THE UNIT BALL OF AN INFINITE DIMENSIONAL HILBERT SPACE OSCAR BLASCO, PABLO GALINDO, AND ALEJANDRO MIRALLES Abstract. The Bloch space has been studied on the open unit disk of C and some

More information

Existence, Uniqueness and Stability of Invariant Distributions in Continuous-Time Stochastic Models

Existence, Uniqueness and Stability of Invariant Distributions in Continuous-Time Stochastic Models Existence, Uniqueness and Stability of Invariant Distributions in Continuous-Time Stochastic Models Christian Bayer and Klaus Wälde Weierstrass Institute for Applied Analysis and Stochastics and University

More information

The Inverse Function Theorem 1

The Inverse Function Theorem 1 John Nachbar Washington University April 11, 2014 1 Overview. The Inverse Function Theorem 1 If a function f : R R is C 1 and if its derivative is strictly positive at some x R, then, by continuity of

More information

Rates of Convergence to Self-Similar Solutions of Burgers Equation

Rates of Convergence to Self-Similar Solutions of Burgers Equation Rates of Convergence to Self-Similar Solutions of Burgers Equation by Joel Miller Andrew Bernoff, Advisor Advisor: Committee Member: May 2 Department of Mathematics Abstract Rates of Convergence to Self-Similar

More information

BV functions in a Gelfand triple and the stochastic reflection problem on a convex set

BV functions in a Gelfand triple and the stochastic reflection problem on a convex set BV functions in a Gelfand triple and the stochastic reflection problem on a convex set Xiangchan Zhu Joint work with Prof. Michael Röckner and Rongchan Zhu Xiangchan Zhu ( Joint work with Prof. Michael

More information