Technical Report: Detailed Proofs of Bounds for Map-Aware Localization


Francesco Montorsi and Giorgio M. Vitetta
Department of Information Engineering, University of Modena and Reggio Emilia
francesco.montorsi@unimore.it, giorgio.vitetta@unimore.it

6th June 2012

Contents

1 Introduction
2 Derivation of the BFIM for Generic-Shape Uniform Maps
2.1 Modelling of the Pdf
2.2 Derivation of the BFIM for 1-D Maps
2.3 Derivation of the BFIM for 2-D Maps
3 Derivation of the EZZB for Generic-Shape Uniform Maps
3.1 Derivation of the EZZB
3.2 Derivation of the EZZB for 1-D Maps
3.3 Derivation of the EZZB for 2-D Maps
4 Derivation of the WWB for Generic-Shape Uniform Maps
4.1 Derivation of the WWB
4.2 Derivation of the WWB for N-D Maps
4.3 Derivation of the WWB for 2-D Maps

1 Introduction

In this Technical Report the proofs of the Bayesian Cramér-Rao bound (BCRB), the extended Ziv-Zakai bound (EZZB) and the Weiss-Weinstein bound (WWB) are provided for localization systems endowed with map knowledge. For a description of the notation used in the derivations and for a discussion of the bounds, please refer to [1, Sec. II] and [1, Sec. III], respectively.

2 Derivation of the BFIM for Generic-Shape Uniform Maps

We start the derivation of the BFIM associated with uniform maps by introducing some properties of the smoothing function mentioned in [1, Sec. II], which is used to model the map pdf. Then the 1-D BFIM is obtained and later extended to the 2-D case, thus proving [1, Eq. (8)].

2.1 Modelling of the Pdf

Let s(t) be a continuous and differentiable function s : R -> R. Assume that s(t) has the following additional properties:

1. s(t) >= 0 for any t in R;
2. ∫ s(t) dt = 1;
3. ∫ [∂s(t)/∂t] dt = 0;
4. s(t) has support [-1/2; +1/2];
5. s(0) = 1.

The function s(·) is then a pdf (assumptions 1 and 2) and has an associated a-priori FI J_s (assumption 3 is the regularity condition that grants the existence of the FI [2]). Assumptions 4 and 5 finally ensure that s(·) can be used to model bounded statistical distributions, i.e. maps as defined in [1, Sec. II], possibly after some scaling and translation. A function s(·) satisfying the conditions above is dubbed in [1] a smoothing function. Examples of functions satisfying these hypotheses are:

1. s(t) = g(t + 1/2; δ) - g(t - 1/2; δ), where g(t; δ) ≜ f(t + δ) / [f(t + δ) + f(-t + δ)] and f(t) ≜ e^{-1/t} u(t);

2. s(t) = g(t + 1/2; δ) - g(t - 1/2; δ), where

   g(t; δ) ≜ 0 for t < -δ;  (-t^3 + 3δ^2 t + 2δ^3)/(4δ^3) for -δ <= t <= +δ;  1 for t > +δ.

Note that in the examples above δ ∈ (0; 1/2] is a parameter which defines the steepness of the pdf s(t), so that lim_{δ -> 0} g(t; δ) = u(t), where u(t) is the unit step function (which, however, does not have an associated FI). In the latter example the associated FI is easy to compute in closed form: J_s(δ) = 9 ln 3 / (4δ).
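The steepness/FI relationship above can be checked numerically. The sketch below (Python; the cubic coefficients of g(t; δ) follow the reconstruction given in example 2 and all function names are illustrative only) builds s(t), verifies that it integrates to one, and confirms that its Fisher information scales as 1/δ:

```python
import math

def g(t, d):
    # Smooth cubic step: 0 below -d, 1 above +d, C1 cubic ramp in between.
    if t < -d:
        return 0.0
    if t > d:
        return 1.0
    return (-t**3 + 3.0 * d**2 * t + 2.0 * d**3) / (4.0 * d**3)

def s(t, d):
    # Smoothed indicator of [-1/2, +1/2], built as g(t + 1/2; d) - g(t - 1/2; d).
    return g(t + 0.5, d) - g(t - 0.5, d)

def pdf_area(d, n=100000):
    # Midpoint-rule check that s(.) integrates to one (property 2).
    lo, hi = -0.5 - d, 0.5 + d
    step = (hi - lo) / n
    return sum(s(lo + (i + 0.5) * step, d) * step for i in range(n))

def fisher_info(d, n=100000):
    # J_s = ∫ (s'(t))^2 / s(t) dt, via central differences + midpoint rule.
    lo, hi = -0.5 - d, 0.5 + d
    step = (hi - lo) / n
    total = 0.0
    for i in range(n):
        t = lo + (i + 0.5) * step
        st = s(t, d)
        if st <= 0.0:
            continue
        ds = (s(t + step, d) - s(t - step, d)) / (2.0 * step)
        total += ds * ds / st * step
    return total

if __name__ == "__main__":
    print(pdf_area(0.1))            # close to 1.0
    print(fisher_info(0.1) * 0.1,   # J_s(d) * d is (approximately) constant,
          fisher_info(0.05) * 0.05) # i.e. J_s scales as 1/d, as in the text
```

The test only checks the 1/δ scaling, not the closed-form constant, since the latter depends on the exact reconstruction of g(t; δ).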

Finally, note that s(t; a, b) ≜ (1/b) s((t - a)/b) is still a pdf and its associated FI is

J'_s ≜ E_t{ [∂ ln((1/b) s((t - a)/b)) / ∂t]^2 } = (1/b^2) E_t{ [∂ ln s(u)/∂u]^2 |_{u = (t - a)/b} } = J_s / b^2.

2.2 Derivation of the BFIM for 1-D Maps

Consider a unidimensional (1-D) scenario, where the agent position to estimate is the scalar x. The support R ⊂ R of a 1-D uniform map can always be represented as the union of N_r disjoint segments, spaced by N_r - 1 segments over which the map pdf f(x) is equal to 0. Let the N_r segments of the support R be indexed by the odd numbers of the set N_o ≜ {1, 3, ..., 2N_r - 1} and let c_n (w_n) denote the centre (width) of the n-th segment, with n ∈ N_o. Then the map pdf f(x) associated with this scenario can be expressed as

f(x) = (1/W_R) Σ_{n ∈ N_o} s((x - c_n)/w_n) = (1/W_R) Σ_{n ∈ N_o} w_n s(x; c_n, w_n)   (1)

where W_R ≜ Σ_{n ∈ N_o} w_n and s(·) is the smoothing function defined in Sec. 2.1.

The a-priori FI associated with f(x) is J_x ≜ E{ [∂ ln f(x)/∂x]^2 } [2]. To simplify this expression we note that (a) for each value of x there is at most one rectangle for which ∂ ln f(x)/∂x ≠ 0, and (b) as x varies over R, all the rectangles contribute to the FI integral. Thus the FI can be written as the sum of the FI contributions of the single rectangles:

J_x = (1/W_R) Σ_{n ∈ N_o} w_n E{ [∂ ln s(x; c_n, w_n)/∂x]^2 } = (1/W_R) Σ_{n ∈ N_o} w_n (J_s / w_n^2) = (J_s / W_R) Σ_{n ∈ N_o} 1/w_n   (2)

2.3 Derivation of the BFIM for 2-D Maps

Consider a 2-D smoothed uniform map f(p) with support R ⊂ R^2; exploiting some definitions given in [1, Sec. II] and assuming that the smoothings along x and y are independent, the pdf of an arbitrary smoothed uniform 2-D map can be put in the form

f(p) = (1/A_R) Σ_{n ∈ N_o^h(y)} s((x - c_{x,n}(y))/w_n(y)) · Σ_{m ∈ N_o^v(x)} s((y - c_{y,m}(x))/h_m(x))   (3)

(A_R being the constant which normalises f(p) to unit integral)
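As a quick numerical illustration of (2), the snippet below (with hypothetical segment widths and a hypothetical smoothing FI J_s) evaluates the a-priori FI of a multi-segment 1-D map:

```python
def bfim_1d(widths, J_s):
    # Eq. (2): J_x = (J_s / W_R) * sum_n 1/w_n for a multi-segment 1-D uniform map.
    W_R = sum(widths)
    return (J_s / W_R) * sum(1.0 / w for w in widths)

# Hypothetical example: two segments of widths 4 and 2, smoothing FI J_s = 10:
print(bfim_1d([4.0, 2.0], 10.0))  # (10/6) * (1/4 + 1/2) = 1.25
```

For a single segment of width w this reduces to J_s/w^2, consistent with the scaling rule J'_s = J_s/b^2 derived above.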

where c_{x,n}(y) (c_{y,m}(x)) and w_n(y) (h_m(x)) denote the centre and the length, respectively, of the n-th (m-th) segment, and where s(·) is a smoothing function as defined in Sec. 2.1. Also note that the regularity condition E{ ∂ ln f(p)/∂p } = 0 is easily verified thanks to the linear operators involved and to s(·), which is assumed to respect that condition.

The a-priori BFIM, using the iterated expectation and focusing on the FI for the coordinate x, can be written as [2]:

[J_p]_{1,1} = E{ (∂ ln f(p)/∂x)^2 } = E_y{ E_x{ (∂ ln f(p)/∂x)^2 | y } }   (4)

Then, if we ignore the smoothing for the y coordinate, that is, if we introduce the approximation

f(p) ≈ (1/W_R(y)) Σ_{n ∈ N_o^h(y)} s((x - c_{x,n}(y))/w_n(y)) = (1/W_R(y)) Σ_{n ∈ N_o^h(y)} w_n(y) s(x; c_{x,n}(y), w_n(y))

where W_R(y) ≜ Σ_{n ∈ N_o^h(y)} w_n(y), we reduce the evaluation of the inner expectation to the evaluation of the FI of a 1-D map composed of N_r = N_h(y) segments having widths {w_n(y)} and centred around the points {c_{x,n}(y)}. Thus, using the result obtained in Sec. 2.2, it is easily inferred that

E{ (∂ ln f(p)/∂x)^2 | y } ≈ (J_s / W_R(y)) Σ_{n ∈ N_o^h(y)} 1/w_n(y)   (5)

Substituting the last result in the RHS of (4) produces

[J_p]_{1,1} ≈ J_s ∫_Y (f(y)/W_R(y)) Σ_{n ∈ N_o^h(y)} (1/w_n(y)) dy ≈ J_s ∫_Y Σ_{n ∈ N_o^h(y)} dy/w_n(y)   (6)

where f(y) ≜ ∫ f(p) dx is the pdf of y alone; it has been neglected in the integral since the smoothing is assumed to affect a small portion of the integration domain. Symmetrically, ignoring the smoothing for the x coordinate, an approximated expression for the FI relative to the coordinate y is obtained. Note, however, that the two approximations mentioned above, taken together, are exact only for rectangular maps. Also note that the cross-terms [J_p]_{2,1} and [J_p]_{1,2} of the BFIM are zero if the parameters x and y are independent (as in a 2-D rectangle); in general, because of the smoothing, this is not exactly true, but such dependence is typically weak, so that a good approximation for the BFIM is given by:

J_p ≈ J_s diag{ ∫_Y Σ_{n ∈ N_o^h(y)} dy/w_n(y), ∫_X Σ_{m ∈ N_o^v(x)} dx/h_m(x) }   (7)
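The two bracketed integrals in (7) are easy to evaluate numerically for a non-rectangular support. The sketch below uses a hypothetical L-shaped map, the union of [0,4]x[0,2] and [0,2]x[2,4], and computes the two terms that multiply J_s on the diagonal of (7):

```python
def horizontal_segments(y):
    # Widths of the row segments of a hypothetical L-shaped support
    # [0,4]x[0,2] ∪ [0,2]x[2,4] at height y.
    if 0.0 <= y < 2.0:
        return [4.0]
    if 2.0 <= y < 4.0:
        return [2.0]
    return []

def vertical_segments(x):
    # Heights of the column segments of the same support at abscissa x.
    if 0.0 <= x < 2.0:
        return [4.0]
    if 2.0 <= x < 4.0:
        return [2.0]
    return []

def integral(segments_of, lo, hi, n=4000):
    # Midpoint rule for ∫ sum_n 1/w_n(y) dy, the bracketed term in Eq. (7).
    step = (hi - lo) / n
    return sum(sum(1.0 / w for w in segments_of(lo + (i + 0.5) * step)) * step
               for i in range(n))

Ix = integral(horizontal_segments, 0.0, 4.0)  # multiplies J_s in [J_p]_{1,1}
Iy = integral(vertical_segments, 0.0, 4.0)    # multiplies J_s in [J_p]_{2,2}
print(Ix, Iy)  # both 1.5 for this (diagonally symmetric) L-shape
```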

which coincides with [1, Eq. (8)]. Note that the BCRB for a generic estimator p̂(z) of p based on the observation vector z is given by

Λ(p̂(z)) ≜ E_{p̂(z),p}{ (p̂(z) - p)(p̂(z) - p)^T } ⪰ J^{-1}

and is obtained from (7) by assuming a specific observation model which connects z with p and by employing the relation

J = J_{z|p} + J_p

where J_{z|p} ≜ E_{z,p}{ [∂ ln f(z|p)/∂p] [∂ ln f(z|p)/∂p]^T } and J_p is given, in this context, by (7).

3 Derivation of the EZZB for Generic-Shape Uniform Maps

In this Section we prove the EZZB expression presented in [1, Eqs. (12)-(13)], using the derivation of [3, Sec. II.B] as a guideline. Then the derived EZZB is evaluated for 1-D maps and extended to the 2-D case, resulting in [1, Eq. (14)].

3.1 Derivation of the EZZB

Consider a random vector p ∈ R ⊂ R^2; the Bayesian mean square error (BMSE) matrix associated with a generic estimator p̂(z) of p based on the observation vector z is given by

Λ(p̂(z)) ≜ E_{p̂(z),p}{ (p̂(z) - p)(p̂(z) - p)^T }.

If the identity [4, p. 4] is considered, then:

E_ν ≜ [Λ(p̂(z))]_{ν,ν} = (1/2) ∫_0^∞ Pr{ |ξ_ν| ≥ h/2 } h dh   (8)

where E_ν is the mean square estimation error for the coordinate ν, ξ_ν ≜ [p̂(z) - p]_ν, p̂(z) is some generic estimator of the r.v. p based on the observation vector z, and ν ∈ {x, y}. Then, trivially, Pr{ |ξ_ν| ≥ h/2 } = Pr{ ξ_ν ≥ h/2 } + Pr{ ξ_ν ≤ -h/2 }; if we consider two events with non-zero probability, p = ρ_0 and p = ρ_1, we can express the inner term of (8) as:

Pr{ |ξ_ν| ≥ h/2 } = ∫_R Pr{ ξ_ν ≥ h/2 | p = ρ_0 } f(ρ_0) dρ_0 + ∫_R Pr{ ξ_ν ≤ -h/2 | p = ρ_1 } f(ρ_1) dρ_1   (9)

where f(·) is the a-priori pdf of the parameter p to estimate. If we let ρ_0 = ρ and ρ_1 = ρ + δ(h), where δ(h) is some function of the integration variable of (8), and we constrain the integrals of (9)
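In the scalar case the combination J = J_{z|p} + J_p is a one-liner: for the linear Gaussian model z = x + n, n ~ N(0, σ^2), J_{z|p} = 1/σ^2. A minimal sketch with hypothetical numbers:

```python
def bcrb_1d(sigma, J_x):
    # Scalar BCRB = 1 / (J_{z|p} + J_p), with J_{z|p} = 1/sigma^2 for z = x + n.
    return 1.0 / (1.0 / sigma**2 + J_x)

print(bcrb_1d(1.0, 3.0))  # 1/(1 + 3) = 0.25
print(bcrb_1d(1.0, 0.0))  # 1.0: no map prior, plain CRB
```

Since J_p >= 0, the map prior can only tighten (lower) the bound with respect to the map-unaware CRB.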

on a subset R(h) of the map support R such that f(ρ) > 0 and f(ρ + δ(h)) > 0 (so that the conditional probabilities of (9) are meaningful), then we can divide and multiply by the quantity f(ρ_0) + f(ρ_1) = f(ρ) + f(ρ + δ(h)) > 0 and obtain the analogue of the result reported in [3, Eq. (30)]:

Pr{ |ξ_ν| ≥ h/2 } ≥ ∫_{R(h)} [f(ρ) + f(ρ + δ(h))] P^z_min(ρ, ρ + δ(h)) dρ   (10)

which holds for any δ(h) satisfying the equality a^T δ(h) = h, where a ∈ R^2. Note that (10) generalizes [3, Eq. (30)] to the case where the pdf f(·) of the parameter to estimate has a bounded support. Here a = e_ν is selected, so that δ(h) = h e_ν; this choice allows us to obtain a bound on the estimation error variances. Then substituting (10) in (8) produces [1, Eq. (13)], which is reported here for the reader's convenience:

E_ν ≥ Z_ν ≜ (1/2) ∫∫_{P_ν} [f(ρ) + f(ρ + h e_ν)] P^z_min(ρ, ρ + h e_ν) h dρ dh   (11)

where ν ∈ {x, y}; note that the integration domains of h and ρ have been merged in the set P_ν:

P_ν ≜ { (h, ρ) : h ≥ 0, f(ρ) > 0, f(ρ + h e_ν) > 0 } ⊂ R^3

and that the complete bound is written as

Λ(p̂(z)) ⪰ Z ≜ diag{ Z_x, Z_y }   or, equivalently,   E ≜ diag{ E_x, E_y } ⪰ Z.   (12)

3.2 Derivation of the EZZB for 1-D Maps

Let us now evaluate (11) for a 1-D uniform map whose support R ⊂ R consists of the union of N_r disjoint segments and where the agent position to estimate is the scalar x. In the following such segments are indexed by the odd numbers of the set N_o = {1, 3, ..., 2N_r - 1}, and the lower (upper) limit of the segment n ∈ N_o is denoted l_n ≜ c_n - w_n/2 (u_n ≜ c_n + w_n/2), where c_n (w_n) represents the centre (length) of the segment itself. Moreover, it is assumed, without any loss of generality, that c_1 < c_3 < ... < c_{2N_r - 1}. Note that in this context the smoothing function required for the BFIM analysis (see Sec. 2.1) is not used, and the truly uniform map model is adopted. Eq. (11) can be adapted to the 1-D scenario as:

Z ≜ (1/2) ∫∫_P [f(τ) + f(τ + h)] P^z_min(τ, τ + h) h dτ dh   (13)

where ρ = τ, P ≜ { (τ, h) : h ≥ 0, f(τ) > 0, f(τ + h) > 0 }, and P^z_min(τ, τ + h) represents the minimum error probability of a binary detector which, on the basis of a noisy datum z and a likelihood ratio test, has to select one of the two hypotheses H_0 : x = τ and H_1 : x = τ + h. We note that: (a) because of our uniform map assumption, f(τ) + f(τ + h) = 2/W_R (with W_R ≜ Σ_{n ∈ N_o} w_n) for any (τ, h) ∈ P; (b) the prior probabilities of the hypotheses H_0 and H_1 are the same, so that the maximum-likelihood rule is the optimal detection rule: Ĥ_opt(z) = argmax_{H ∈ {H_0, H_1}} f(z|H). Assuming a Gaussian observation model z = x + n, where n ~ N(0, σ^2), we obtain f(z|H = H_0) = N(z; τ, σ^2) and f(z|H = H_1) = N(z; τ + h, σ^2). The optimal detection rule thus becomes

Ĥ_opt(z) = H_0 if z ≤ τ + h/2;  H_1 if z > τ + h/2

and its probability of error is P^z_min(τ, τ + h) = (1/2) erfc(h/(2√2 σ)), so that P^z_min(τ, τ + h) is not a function of τ and (13) further simplifies to:

Z = (1/(2W_R)) ∫∫_P erfc(h/(2√2 σ)) h dτ dh = (σ^3/(2W_R)) ∫∫_Q u erfc(u/(2√2)) dt du   (14)

where u ≜ h/σ, t ≜ τ/σ and Q ≜ { (t, u) : u ≥ 0, f(σt) > 0, f(σ(t + u)) > 0 } can be shown to be a 2-D domain consisting of the union of N_r triangles and N_r(N_r - 1)/2 parallelograms in the plane (t, u), as exemplified by Fig. 1, which refers to the case N_r = 3. In particular, the contribution to (14) from the i-th triangle Q_i ≜ { (t, u) : l_i/σ ≤ t ≤ u_i/σ, 0 ≤ u ≤ u_i/σ - t } is

(σ^3/(2W_R)) ∫∫_{Q_i} u erfc(u/(2√2)) dt du = (σ^3/(2W_R)) ∫_0^{ρ_i} (ρ_i - u) u erfc(u/(2√2)) du

which can be written in the compact form (σ^2/(2ρ_W)) ζ(ρ_i), where ρ_i ≜ w_i/σ, ρ_W ≜ W_R/σ, and the function ζ(ρ) is defined as

ζ(ρ) ≜ ∫_0^ρ (ρ - u) u erfc(u/(2√2)) du   (15)
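The function ζ(ρ) of (15) has no elementary closed form, but it is cheap to evaluate by quadrature. A minimal sketch (the erfc argument u/(2√2) follows the reconstruction above; for small ρ, ζ(ρ) ≈ ρ^3/6 since erfc(0) = 1):

```python
import math

def zeta(rho, n=20000):
    # Eq. (15): zeta(rho) = ∫_0^rho (rho - u) u erfc(u / (2*sqrt(2))) du,
    # evaluated with the midpoint rule.
    step = rho / n
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * step
        total += (rho - u) * u * math.erfc(u / (2.0 * math.sqrt(2.0))) * step
    return total

print(zeta(0.01), 0.01**3 / 6.0)  # small-rho limit: zeta(rho) -> rho^3/6
```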

with i ∈ N_o. As far as the parallelograms are concerned, a given map support segment, for some values of h, will overlap with all the previous segments¹, generating the parallelograms shown in Fig. 1; let the overlap of the (i+1)-th segment with the (i-1)-th segment define the subset

Q_ov,i ≜ { (t, u) : l_{i-1}/σ ≤ t ≤ u_{i-1}/σ, l_{i+1}/σ - t ≤ u ≤ u_{i+1}/σ - t }

with i ∈ N_e ≜ {2, 4, ..., 2(N_r - 1)}; then the integral of (14) over Q_ov,i contributes the term (σ^2/(2ρ_W)) ζ_ov(Δρ_i, ρ_{i-1}, ρ_{i+1}), where Δρ_i ≜ Δx_i/σ, Δx_i ≜ l_{i+1} - u_{i-1}, and ζ_ov(Δρ, ρ_1, ρ_2) is defined as

ζ_ov(Δρ, ρ_1, ρ_2) ≜ ∫_{Δρ}^{Δρ+ρ_2} (u - Δρ) u erfc(u/(2√2)) du
  + ∫_{Δρ+ρ_2}^{Δρ+ρ_1} ρ_2 u erfc(u/(2√2)) du
  + ∫_{Δρ+ρ_1}^{Δρ+ρ_1+ρ_2} (Δρ + ρ_1 + ρ_2 - u) u erfc(u/(2√2)) du   (16)

for ρ_1 ≥ ρ_2 (it can be shown that ζ_ov(Δρ, ρ_1, ρ_2) = ζ_ov(Δρ, ρ_2, ρ_1)). We then note that the contributions to (14) of the overlaps of a given segment with the non-adjacent preceding segments (i.e., with all but the closest one) are always positive, since the integrand u erfc(u/(2√2)) ≥ 0 for u ≥ 0. Thus, neglecting such regions of the plane (t, u), i.e. restricting Q to (∪_{n ∈ N_o} Q_n) ∪ (∪_{n ∈ N_e} Q_ov,n), a lower bound on the EZZB Z is obtained. This choice allows us to derive a bound with a tractable analytical expression: summing together the N_r contributions of (15) and the N_r - 1 contributions of (16), the lower bound becomes:

E ≜ λ(x̂(z)) ≥ Z ≥ (σ^2/(2ρ_W)) [ Σ_{n ∈ N_o} ζ(ρ_n) + Σ_{n ∈ N_e} ζ_ov(Δρ_n, ρ_{n-1}, ρ_{n+1}) ]   (17)

where λ(x̂(z)) ≜ E_{x̂(z),x}{ (x̂(z) - x)^2 } is the BMSE of the 1-D estimator x̂(z).

¹ It is more formally accurate to write that, decomposing f(τ) as f_1(τ) + f_2(τ), where f_1(τ) is the pdf associated with the first segment, parametrized by (c_1, w_1), and f_2(τ) is the pdf similarly associated with the second segment (c_2, w_2), the support of f_2(τ + h) will overlap with the support of f_1(τ) for some values of h. In the following, however, for the sake of brevity, we will refer to the overlap of the pdf supports simply as the overlap of the segments.
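Eqs. (15)-(17) can be assembled directly in code. The sketch below (a hypothetical two-segment map; the erfc scaling and the σ^2/(2ρ_W) prefactor follow the reconstruction above) evaluates the 1-D EZZB lower bound:

```python
import math

def _phi(u):
    # Common integrand weight u * erfc(u / (2*sqrt(2))).
    return u * math.erfc(u / (2.0 * math.sqrt(2.0)))

def _quad(f, a, b, n=20000):
    # Midpoint-rule quadrature of f over [a, b].
    step = (b - a) / n
    return sum(f(a + (i + 0.5) * step) * step for i in range(n))

def zeta(rho):
    # Eq. (15).
    return _quad(lambda u: (rho - u) * _phi(u), 0.0, rho)

def zeta_ov(drho, r1, r2):
    # Eq. (16), stated for r1 >= r2; symmetric in (r1, r2) otherwise.
    if r1 < r2:
        r1, r2 = r2, r1
    return (_quad(lambda u: (u - drho) * _phi(u), drho, drho + r2)
            + _quad(lambda u: r2 * _phi(u), drho + r2, drho + r1)
            + _quad(lambda u: (drho + r1 + r2 - u) * _phi(u),
                    drho + r1, drho + r1 + r2))

def ezzb_1d(widths, gaps, sigma):
    # Eq. (17): segments of given widths separated by the given gaps.
    rho = [w / sigma for w in widths]
    rho_W = sum(widths) / sigma
    bound = sum(zeta(r) for r in rho)
    for i, gap in enumerate(gaps):  # overlap of segment i with segment i+1
        bound += zeta_ov(gap / sigma, rho[i], rho[i + 1])
    return sigma**2 / (2.0 * rho_W) * bound

print(ezzb_1d([4.0, 2.0], [1.0], sigma=0.5))
```

As expected for a Ziv-Zakai-type bound, the value shrinks with the observation noise: smaller σ gives a smaller lower bound on the BMSE.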

3.3 Derivation of the EZZB for 2-D Maps

Consider a 2-D uniform map f(p) = f(x, y) with support R ⊂ R^2; writing (11) for ν = x, the EZZB is

E_x ≥ Z_x = (1/2) ∫∫∫_{P_x} [f(τ, υ) + f(τ + h, υ)] P^z_min(ρ, ρ + h e_x) h dτ dυ dh   (18)

where P_x ≜ { (h, τ, υ) : h ≥ 0, f(τ, υ) > 0, f(τ + h, υ) > 0 } ⊂ R^3 and ρ ≜ [τ, υ]^T. Note that P_x is the straightforward extension of the integration domain P appearing in Sec. 3.2. The minimum error probability P^z_min(ρ, ρ + h e_x), adopting the Gaussian observation model f(z|p) = N(z; p, Σ) with Σ = diag{σ_x^2, σ_y^2}, is computed using the optimal detection rule Ĥ_opt,x(z), whose expression is similar to the one obtained for 1-D maps:

Ĥ_opt,x(z) = H_0 if z_x ≤ x + h/2;  H_1 if z_x > x + h/2

where z ≜ [z_x, z_y]^T. Its probability of error is thus, unsurprisingly, the same as in the 1-D case: P^z_min(ρ, ρ + h e_x) = (1/2) erfc(h/(2√2 σ_x)) = P^z_min(τ, τ + h). Furthermore, under a few regularity assumptions on the map support, the integral over P_x can be decomposed into the integrals over (h, τ) and υ:

Z_x = ∫_Y { (1/2) ∫∫_{P_x(υ)} [f(τ, υ) + f(τ + h, υ)] P^z_min(τ, τ + h) h dτ dh } dυ   (19)

where P_x(υ) ≜ { (h, τ) : h ≥ 0, f(τ, υ) > 0, f(τ + h, υ) > 0 } is the slice of P_x ⊂ R^3 at coordinate υ, and the expression inside the curly brackets has the same structure as the EZZB of a 1-D map (see (13)), as a function of the integration variable υ, whose number of segments is N_r(υ) = N_h(υ) and whose segment lengths are {w_n(υ)}; the distance between the (n+1)-th and the n-th segment at ordinate υ is then Δw_n(υ). Thus, plugging (17) into (19) and denoting by A_R ≜ ∫∫_R dp the area of the map support (so that f(p) = 1/A_R over R), it is easy to obtain:

Z_x ≥ (σ_x^3/(2A_R)) ∫_Y [ Σ_{n ∈ N_o^h(υ)} ζ(w_n(υ)/σ_x) + Σ_{n ∈ N_e^h(υ)} ζ_ov(Δw_n(υ)/σ_x, w_{n-1}(υ)/σ_x, w_{n+1}(υ)/σ_x) ] dυ

Repeating the proof for the dual coordinate ν = y, the EZZB expression

E ⪰ Z = diag{ (σ_x^3/(2A_R)) ∫_Y [ Σ_{n ∈ N_o^h(y)} ζ(w_n(y)/σ_x) + Σ_{n ∈ N_e^h(y)} ζ_ov(Δw_n(y)/σ_x, w_{n-1}(y)/σ_x, w_{n+1}(y)/σ_x) ] dy,

(σ_y^3/(2A_R)) ∫_X [ Σ_{m ∈ N_o^v(x)} ζ(h_m(x)/σ_y) + Σ_{m ∈ N_e^v(x)} ζ_ov(Δh_m(x)/σ_y, h_{m-1}(x)/σ_y, h_{m+1}(x)/σ_y) ] dx }   (20)

is found, where E ≜ diag{E_x, E_y}. Note that (20) coincides with [1, Eq. (14)].

4 Derivation of the WWB for Generic-Shape Uniform Maps

In this Section we prove the WWB expression presented in [1, Eq. (20)], using the derivation of [5] as a guideline. Then the WWB is derived for N-D maps and finally specialized to the 2-D case, resulting in [1, Eqs. (22)-(26)].

4.1 Derivation of the WWB

Consider a random vector p ∈ R ⊂ R^N; the Bayesian mean square error (BMSE) matrix Λ(p̂(z)) associated with a generic estimator p̂(z) of p based on the observation vector z is given by

Λ(p̂(z)) ≜ E_{p̂(z),p}{ (p̂(z) - p)(p̂(z) - p)^T }.

The WWB on such an estimation error covariance is given by [5, Eq. (7)]:

Λ(p̂(z)) ⪰ H G^{-1} H^T   (21)

where H = [h_1, ..., h_N] is a matrix of test vectors h_i ∈ R^N and the matrix G is given by [5, Eq. (8)]:

[G]_{i,j} = E_{z,p}{ r(z, p; h_i, s_i) r(z, p; h_j, s_j) } / ( E_{z,p}{ L^{s_i}(z; p + h_i, p) } E_{z,p}{ L^{s_j}(z; p + h_j, p) } )   (22)

where

r(z, p; h_i, s_i) ≜ L^{s_i}(z; p + h_i, p) - L^{1 - s_i}(z; p - h_i, p)   (23)

and s_i, with i = 1, ..., N, is a scalar optimization parameter. The inequality (21) holds for any couple {h_i, s_i}, i = 1, ..., N, such that G is well defined and invertible. The tightest bound is obtained by optimizing over the {h_i, s_i} parameters:

Λ(p̂(z)) ⪰ sup_{{h_i, s_i}} H G^{-1} H^T   (24)

In order to simplify the analytical evaluation of the WWB, we set s_i = 1/2 for all i (as suggested by Weiss and Weinstein, this choice typically provides a tight bound) and H = diag{h_1, ..., h_N}, to obtain a

bound only on the error variances (and not on the covariances). It is easy to show that, with these choices, the off-diagonal elements of H G^{-1} H^T are zero, while the i-th diagonal term is (see (22)):

[H G^{-1} H^T]_{i,i} = h_i^2 ( E_{z,p}{ L^{1/2}(z; p + h_i, p) } )^2 / E_{z,p}{ r^2(z, p; h_i, s_i) }   (25)

The WWB (24) may thus be re-written as

Λ(p̂(z)) ⪰ W   (26)

where W = diag{W_1, ..., W_N} and the generic diagonal term W_ν is (see (23) and (25)):

W_ν ≜ sup_{h_ν ∈ R} h_ν^2 ( E_{z,p}{ L^{1/2}(z; p + h_ν e_ν, p) } )^2 / E_{z,p}{ [ L^{1/2}(z; p + h_ν e_ν, p) - L^{1/2}(z; p - h_ν e_ν, p) ]^2 }   (27)

which coincides with [1, Eq. (20)]. If E ≜ diag{E_1, ..., E_N} is defined, with E_ν ≜ [Λ(p̂(z))]_{ν,ν}, then (26) is obviously equivalent to E ⪰ W.

4.2 Derivation of the WWB for N-D Maps

Consider an N-D uniform map f(p) with support R ⊂ R^N; the WWB term associated with the coordinate ν is (27), where ν ∈ {x, y, z, ...}. Adopting the Gaussian observation model

f(z|p) = N(z; p, Σ)   (28)

with Σ = diag{σ_1^2, ..., σ_N^2}, the likelihood ratios appearing in (27) become L(z; p_1, p_2) = N(z; p_1, Σ)/N(z; p_2, Σ) for p_1, p_2 ∈ R. Thus, recalling that f(p) = 1/W_R over R (W_R ≜ ∫_R dp being the volume of the map support), the numerator of (27) may be written as

E_{z,p}{ L^{1/2}(z; p + h_ν e_ν, p) } = (1/W_R) ∫_{R^N} ∫_{P_ν(h_ν)} N^{1/2}(z; ρ + h_ν e_ν, Σ) N^{1/2}(z; ρ, Σ) dρ dz
  = (1/W_R) exp(-h_ν^2/(8σ_ν^2)) ∫_{P_ν(h_ν)} dρ
  = (1/W_R) exp(-h_ν^2/(8σ_ν^2)) λ_ν(h_ν, R)   (29)

where P_ν(h_ν) ≜ { ρ : f(ρ) > 0, f(ρ + h_ν e_ν) > 0 } is a slice, at coordinate h_ν, of the integration domain P_ν ⊂ R^{N+1} in the (h, ρ) space (which, for N = 2, is the same domain involved in the evaluation of the EZZB), and the function λ_ν(h_ν, R) is defined as

λ_ν(h_ν, R) ≜ ∫ I_R(p) I_R(p + h_ν e_ν) dp = ∫_{P_ν(h_ν)} dρ   (30)

The expectation appearing in the denominator of (27) can be expressed as

E_{z,p}{ [ L^{1/2}(z; p + h_ν e_ν, p) - L^{1/2}(z; p - h_ν e_ν, p) ]^2 }
  = E_{z,p}{ L(z; p + h_ν e_ν, p) } + E_{z,p}{ L(z; p - h_ν e_ν, p) } - 2 E_{z,p}{ L^{1/2}(z; p + h_ν e_ν, p) L^{1/2}(z; p - h_ν e_ν, p) }

i.e., as the combination of three terms, denoted A, B and C, respectively, in the following. It is important to note that

A = E_{z,p}{ L(z; p + h_ν e_ν, p) } = (1/W_R) ∫_{R^N} ∫_{P_ν(h_ν)} N(z; ρ + h_ν e_ν, Σ) dρ dz
  = (1/W_R) ∫_{P_ν(h_ν)} dρ = (1/W_R) λ_ν(h_ν, R)
  = E_{z,p}{ L(z; p - h_ν e_ν, p) } = B   (31)

and

C = E_{z,p}{ L^{1/2}(z; p + h_ν e_ν, p) L^{1/2}(z; p - h_ν e_ν, p) }
  = (1/W_R) ∫_{R^N} ∫_{P̃_ν(h_ν)} N^{1/2}(z; ρ + h_ν e_ν, Σ) N^{1/2}(z; ρ - h_ν e_ν, Σ) dρ dz
  = (1/W_R) exp(-h_ν^2/(2σ_ν^2)) ∫_{P̃_ν(h_ν)} dρ
  = (1/W_R) exp(-h_ν^2/(2σ_ν^2)) γ_ν(h_ν, R)   (32)

where P̃_ν(h_ν) ≜ { ρ : f(ρ) > 0, f(ρ + h_ν e_ν) > 0, f(ρ - h_ν e_ν) > 0 } and the function γ_ν(h_ν, R) is defined as

γ_ν(h_ν, R) ≜ ∫ I_R(p) I_R(p + h_ν e_ν) I_R(p - h_ν e_ν) dp = ∫_{P̃_ν(h_ν)} dρ   (33)
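Before specializing to 2-D, it is instructive to combine (29), (31) and (32) in the simplest case: a single 1-D segment of width w, for which λ(h) = w - |h| and γ(h) = max(w - 2|h|, 0). The sketch below (a hypothetical grid search over the test offset h; the exponents and the 1/W_R factors follow the constants reconstructed above) evaluates the resulting WWB:

```python
import math

def wwb_single_segment(w, sigma, n=4000):
    # WWB for a 1-D uniform prior on a segment of width w (so W_R = w), with a
    # Gaussian observation of standard deviation sigma; grid search over h.
    best = 0.0
    for i in range(1, n):
        h = w * i / n
        lam = w - h                  # lambda(h): pair-overlap length
        gam = max(w - 2.0 * h, 0.0)  # gamma(h): triple-overlap length
        num = h**2 * (math.exp(-h**2 / (8.0 * sigma**2)) * lam / w) ** 2
        den = (2.0 / w) * (lam - math.exp(-h**2 / (2.0 * sigma**2)) * gam)
        if den > 0.0:
            best = max(best, num / den)
    return best

# Nearly uninformative observations (sigma >> w): the exponentials tend to 1
# and the supremum of h*(w-h)^2/(2w) gives 2*w^2/27, which stays below the
# prior variance w^2/12, as a valid Bayesian bound must.
print(wwb_single_segment(4.0, 100.0))
```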

Finally, substituting (29), (31) and (32) in (27), we obtain that the WWB for the Gaussian observation model (28) and the considered N-D map has the equivalent form

W_ν = sup_{h_ν ∈ R} { h_ν^2 [ (1/W_R) exp(-h_ν^2/(8σ_ν^2)) λ_ν(h_ν, R) ]^2 } / { (2/W_R) [ λ_ν(h_ν, R) - exp(-h_ν^2/(2σ_ν^2)) γ_ν(h_ν, R) ] }   (34)

Note that (34) coincides with [1, Eq. (22)].

4.3 Derivation of the WWB for 2-D Maps

The functions λ_ν(h_ν, R) and γ_ν(h_ν, R) appearing in (34) can be simplified when N = 2 and can be related explicitly to the functions {w_n(·)}, {Δw_n(·)}, {h_m(·)}, {Δh_m(·)} introduced in [1, Sec. II] to model a 2-D map f(p). Let us focus on the x coordinate (ν = x); the integrals of λ_x(h_x, R) and γ_x(h_x, R) can be decomposed in x and y as

λ_x(h_x, R) = ∫∫_{P_x(h_x)} dx dy = ∫_Y [ ∫_{P_x(h_x, y)} dx ] dy   (35)

and

γ_x(h_x, R) = ∫∫_{P̃_x(h_x)} dx dy = ∫_Y [ ∫_{P̃_x(h_x, y)} dx ] dy   (36)

where the sets

P_x(h_x, y) ≜ { x : f(x, y) > 0, f(x + h_x, y) > 0 } ⊂ R
P̃_x(h_x, y) ≜ { x : f(x, y) > 0, f(x + h_x, y) > 0, f(x - h_x, y) > 0 } ⊂ R

represent slices of P_x(h_x) and P̃_x(h_x), respectively. These sliced sets are composed of 1-D segments and effectively represent 1-D maps like those described in Sec. 2.2 and Sec. 3.2. More precisely, P_x(h_x, y) is composed of N_r = N_h(y) segments having widths {w_n(y)} and centred around the points {c_{x,n}(y)}, with n ∈ N_o^h(y). For these reasons P_x(h_x, y) has the same form as a slice of the set Q (depicted in Fig. 1 for a specific example 1-D map) used in deriving the EZZB (see Sec. 3.2), obtained by setting t = x, u = h_x. We note that:

1. each n-th segment of P_x(h_x, y), with n ∈ N_o^h(y), has width w_n(y) and describes a triangle on the (h_x, x) plane (similar to Q_n), which contributes to the inner integral of (35) a term ω(w_n(y), h_x) and to the inner integral of (36) a term 2ω(w_n(y)/2, h_x), where the function ω(w, h_x) is defined as

ω(w, h_x) ≜ (w - h_x) I_{[0; w]}(h_x)   (37)
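The 1-D building blocks (37)-(38) translate directly into code. The sketch below implements ω and ω_ov and, for a hypothetical W x H rectangular map (a single segment per row, so the y-integrals in (41)-(42) are trivial), evaluates λ_x and γ_x:

```python
def omega(w, h):
    # Eq. (37): length of {x : x and x+h both inside a segment of width w}.
    return (w - h) if 0.0 <= h <= w else 0.0

def omega_ov(dw, w1, w2, h):
    # Eq. (38): overlap of two segments of widths w1, w2 separated by a gap dw,
    # as a function of the test offset h (stated for w1 >= w2; symmetric otherwise).
    if w1 < w2:
        w1, w2 = w2, w1
    if dw <= h <= dw + w2:
        return h - dw
    if dw + w2 <= h <= dw + w1:
        return w2
    if dw + w1 <= h <= dw + w1 + w2:
        return w1 + w2 + dw - h
    return 0.0

# For a W x H rectangle the outer y-integral just multiplies by H:
W, H = 4.0, 2.0
lam = lambda h: H * omega(W, h)               # Eq. (41), single segment per row
gam = lambda h: H * 2.0 * omega(W / 2.0, h)   # Eq. (42)
print(lam(1.0), gam(1.0))  # 6.0 and 4.0
```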

2. the n-th segment of P_x(h_x, y), with n ∈ N_o^h(y), does overlap, for some values of h_x, with all the previous segments, generating parallelograms on the (h_x, x) plane (similar to Q_ov,n); for simplicity we consider only the overlap of each segment with the adjacent one, as done in the EZZB derivation. Each such overlap contributes to the inner integral of (35) a term ω_ov(Δw_n(y), w_{n-1}(y), w_{n+1}(y), h_x), where ω_ov(Δw, w_1, w_2, h) is defined as

ω_ov(Δw, w_1, w_2, h) ≜ h - Δw for h ∈ [Δw; Δw + w_2];  w_2 for h ∈ [Δw + w_2; Δw + w_1];  w_1 + w_2 + Δw - h for h ∈ [Δw + w_1; Δw + w_1 + w_2]   (38)

in the case w_1 ≥ w_2; in the case w_1 < w_2, the function ω_ov(Δw, w_1, w_2, h) has the same form with w_1 and w_2 swapped.

For these reasons, for each value of the coordinate y, the inner integral of (35) can be written as

Σ_{n ∈ N_o^h(y)} ω(w_n(y), h_x) + Σ_{n ∈ N_e^h(y)} ω_ov(Δw_n(y), w_{n-1}(y), w_{n+1}(y), h_x)   (39)

whereas the inner integral of (36) can be written as

Σ_{n ∈ N_o^h(y)} 2 ω(w_n(y)/2, h_x)   (40)

To obtain the expressions of λ_x(h_x, R) and γ_x(h_x, R), the outer integral over the variable y has to be included:

λ_x(h_x, R) = ∫_Y Σ_{n ∈ N_o^h(y)} ω(w_n(y), h_x) dy + ∫_Y Σ_{n ∈ N_e^h(y)} ω_ov(Δw_n(y), w_{n-1}(y), w_{n+1}(y), h_x) dy   (41)

γ_x(h_x, R) = ∫_Y Σ_{n ∈ N_o^h(y)} 2 ω(w_n(y)/2, h_x) dy   (42)

Note that (41) and (42) coincide with [1, Eq. (23)] and [1, Eq. (24)], respectively.

References

[1] F. Montorsi and G. M. Vitetta, "On the Performance Limits of Map-Aware Localization," submitted to IEEE Trans. on Inf. Theory.

Figure 1: Representation of the set Q and of its subsets {Q_i}_{i ∈ N_o} and {Q_ov,i}_{i ∈ N_e} in the (t, u) plane (N_r = 3 is assumed). Note that the contribution from the discarded parallelogram is neglected in the evaluation of both the EZZB and the WWB.

[2] S. Kay, Fundamentals of Statistical Signal Processing: Estimation Theory. Prentice Hall, 1993, vol. I.

[3] K. Bell, Y. Steinberg, Y. Ephraim, and H. L. Van Trees, "Extended Ziv-Zakai Lower Bound for Vector Parameter Estimation," IEEE Trans. on Inf. Theory, vol. 43, no. 2, pp. 624-637, Mar. 1997.

[4] E. Cinlar, Introduction to Stochastic Processes. Prentice-Hall, 1975.

[5] Z. Ben-Haim and Y. C. Eldar, "A Comment on the Weiss-Weinstein Bound for Constrained Parameter Sets," IEEE Trans. on Inf. Theory, vol. 54, no. 10, Oct. 2008.


More information

State Space Representation of Gaussian Processes

State Space Representation of Gaussian Processes State Space Representation of Gaussian Processes Simo Särkkä Department of Biomedical Engineering and Computational Science (BECS) Aalto University, Espoo, Finland June 12th, 2013 Simo Särkkä (Aalto University)

More information

Lecture Notes 1 Probability and Random Variables. Conditional Probability and Independence. Functions of a Random Variable

Lecture Notes 1 Probability and Random Variables. Conditional Probability and Independence. Functions of a Random Variable Lecture Notes 1 Probability and Random Variables Probability Spaces Conditional Probability and Independence Random Variables Functions of a Random Variable Generation of a Random Variable Jointly Distributed

More information

A Probability Review

A Probability Review A Probability Review Outline: A probability review Shorthand notation: RV stands for random variable EE 527, Detection and Estimation Theory, # 0b 1 A Probability Review Reading: Go over handouts 2 5 in

More information

Applications of Ito s Formula

Applications of Ito s Formula CHAPTER 4 Applications of Ito s Formula In this chapter, we discuss several basic theorems in stochastic analysis. Their proofs are good examples of applications of Itô s formula. 1. Lévy s martingale

More information

A GENERAL CLASS OF LOWER BOUNDS ON THE PROBABILITY OF ERROR IN MULTIPLE HYPOTHESIS TESTING. Tirza Routtenberg and Joseph Tabrikian

A GENERAL CLASS OF LOWER BOUNDS ON THE PROBABILITY OF ERROR IN MULTIPLE HYPOTHESIS TESTING. Tirza Routtenberg and Joseph Tabrikian A GENERAL CLASS OF LOWER BOUNDS ON THE PROBABILITY OF ERROR IN MULTIPLE HYPOTHESIS TESTING Tirza Routtenberg and Joseph Tabrikian Department of Electrical and Computer Engineering Ben-Gurion University

More information

4th Preparation Sheet - Solutions

4th Preparation Sheet - Solutions Prof. Dr. Rainer Dahlhaus Probability Theory Summer term 017 4th Preparation Sheet - Solutions Remark: Throughout the exercise sheet we use the two equivalent definitions of separability of a metric space

More information

Cramer-Rao Lower Bound Computation Via the Characteristic Function

Cramer-Rao Lower Bound Computation Via the Characteristic Function Cramer-Rao ower Bound Computation Via the Characteristic Function Steven Kay, Fellow, IEEE, and Cuichun Xu Abstract The Cramer-Rao ower Bound is widely used in statistical signal processing as a benchmark

More information

Stat260: Bayesian Modeling and Inference Lecture Date: February 10th, Jeffreys priors. exp 1 ) p 2

Stat260: Bayesian Modeling and Inference Lecture Date: February 10th, Jeffreys priors. exp 1 ) p 2 Stat260: Bayesian Modeling and Inference Lecture Date: February 10th, 2010 Jeffreys priors Lecturer: Michael I. Jordan Scribe: Timothy Hunter 1 Priors for the multivariate Gaussian Consider a multivariate

More information

Let X and Y denote two random variables. The joint distribution of these random

Let X and Y denote two random variables. The joint distribution of these random EE385 Class Notes 9/7/0 John Stensby Chapter 3: Multiple Random Variables Let X and Y denote two random variables. The joint distribution of these random variables is defined as F XY(x,y) = [X x,y y] P.

More information

Measurable Choice Functions

Measurable Choice Functions (January 19, 2013) Measurable Choice Functions Paul Garrett garrett@math.umn.edu http://www.math.umn.edu/ garrett/ [This document is http://www.math.umn.edu/ garrett/m/fun/choice functions.pdf] This note

More information

Week 7: Integration: Special Coordinates

Week 7: Integration: Special Coordinates Week 7: Integration: Special Coordinates Introduction Many problems naturally involve symmetry. One should exploit it where possible and this often means using coordinate systems other than Cartesian coordinates.

More information

d(x n, x) d(x n, x nk ) + d(x nk, x) where we chose any fixed k > N

d(x n, x) d(x n, x nk ) + d(x nk, x) where we chose any fixed k > N Problem 1. Let f : A R R have the property that for every x A, there exists ɛ > 0 such that f(t) > ɛ if t (x ɛ, x + ɛ) A. If the set A is compact, prove there exists c > 0 such that f(x) > c for all x

More information

THIRD SEMESTER M. Sc. DEGREE (MATHEMATICS) EXAMINATION (CUSS PG 2010) MODEL QUESTION PAPER MT3C11: COMPLEX ANALYSIS

THIRD SEMESTER M. Sc. DEGREE (MATHEMATICS) EXAMINATION (CUSS PG 2010) MODEL QUESTION PAPER MT3C11: COMPLEX ANALYSIS THIRD SEMESTER M. Sc. DEGREE (MATHEMATICS) EXAMINATION (CUSS PG 2010) MODEL QUESTION PAPER MT3C11: COMPLEX ANALYSIS TIME:3 HOURS Maximum weightage:36 PART A (Short Answer Type Question 1-14) Answer All

More information

Lecture 7 Introduction to Statistical Decision Theory

Lecture 7 Introduction to Statistical Decision Theory Lecture 7 Introduction to Statistical Decision Theory I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw December 20, 2016 1 / 55 I-Hsiang Wang IT Lecture 7

More information

ECE 636: Systems identification

ECE 636: Systems identification ECE 636: Systems identification Lectures 3 4 Random variables/signals (continued) Random/stochastic vectors Random signals and linear systems Random signals in the frequency domain υ ε x S z + y Experimental

More information

MULTIVARIATE PROBABILITY DISTRIBUTIONS

MULTIVARIATE PROBABILITY DISTRIBUTIONS MULTIVARIATE PROBABILITY DISTRIBUTIONS. PRELIMINARIES.. Example. Consider an experiment that consists of tossing a die and a coin at the same time. We can consider a number of random variables defined

More information

Metric Spaces and Topology

Metric Spaces and Topology Chapter 2 Metric Spaces and Topology From an engineering perspective, the most important way to construct a topology on a set is to define the topology in terms of a metric on the set. This approach underlies

More information

PDEs in Image Processing, Tutorials

PDEs in Image Processing, Tutorials PDEs in Image Processing, Tutorials Markus Grasmair Vienna, Winter Term 2010 2011 Direct Methods Let X be a topological space and R: X R {+ } some functional. following definitions: The mapping R is lower

More information

Chapter 3 Convolution Representation

Chapter 3 Convolution Representation Chapter 3 Convolution Representation DT Unit-Impulse Response Consider the DT SISO system: xn [ ] System yn [ ] xn [ ] = δ[ n] If the input signal is and the system has no energy at n = 0, the output yn

More information

Applied Analysis (APPM 5440): Final exam 1:30pm 4:00pm, Dec. 14, Closed books.

Applied Analysis (APPM 5440): Final exam 1:30pm 4:00pm, Dec. 14, Closed books. Applied Analysis APPM 44: Final exam 1:3pm 4:pm, Dec. 14, 29. Closed books. Problem 1: 2p Set I = [, 1]. Prove that there is a continuous function u on I such that 1 ux 1 x sin ut 2 dt = cosx, x I. Define

More information

Regression. Oscar García

Regression. Oscar García Regression Oscar García Regression methods are fundamental in Forest Mensuration For a more concise and general presentation, we shall first review some matrix concepts 1 Matrices An order n m matrix is

More information

Information geometry for bivariate distribution control

Information geometry for bivariate distribution control Information geometry for bivariate distribution control C.T.J.Dodson + Hong Wang Mathematics + Control Systems Centre, University of Manchester Institute of Science and Technology Optimal control of stochastic

More information

On the Non-Existence of Unbiased Estimators in Constrained Estimation Problems

On the Non-Existence of Unbiased Estimators in Constrained Estimation Problems On the Non-Existence of Unbiased Estimators in Constrained Estimation Problems Anelia Somekh-Baruch, Amir Leshem and Venkatesh Saligrama 1 arxiv:1609.07415v1 [math.st] 23 Sep 2016 Abstract We address the

More information

An efficient approach to stochastic optimal control. Bert Kappen SNN Radboud University Nijmegen the Netherlands

An efficient approach to stochastic optimal control. Bert Kappen SNN Radboud University Nijmegen the Netherlands An efficient approach to stochastic optimal control Bert Kappen SNN Radboud University Nijmegen the Netherlands Bert Kappen Examples of control tasks Motor control Bert Kappen Pascal workshop, 27-29 May

More information

SOLUTIONS TO THE FINAL EXAM. December 14, 2010, 9:00am-12:00 (3 hours)

SOLUTIONS TO THE FINAL EXAM. December 14, 2010, 9:00am-12:00 (3 hours) SOLUTIONS TO THE 18.02 FINAL EXAM BJORN POONEN December 14, 2010, 9:00am-12:00 (3 hours) 1) For each of (a)-(e) below: If the statement is true, write TRUE. If the statement is false, write FALSE. (Please

More information

Elementary Analysis Math 140D Fall 2007

Elementary Analysis Math 140D Fall 2007 Elementary Analysis Math 140D Fall 2007 Bernard Russo Contents 1 Friday September 28, 2007 1 1.1 Course information............................ 1 1.2 Outline of the course........................... 1

More information

Advanced Signal Processing Introduction to Estimation Theory

Advanced Signal Processing Introduction to Estimation Theory Advanced Signal Processing Introduction to Estimation Theory Danilo Mandic, room 813, ext: 46271 Department of Electrical and Electronic Engineering Imperial College London, UK d.mandic@imperial.ac.uk,

More information

Controlled Diffusions and Hamilton-Jacobi Bellman Equations

Controlled Diffusions and Hamilton-Jacobi Bellman Equations Controlled Diffusions and Hamilton-Jacobi Bellman Equations Emo Todorov Applied Mathematics and Computer Science & Engineering University of Washington Winter 2014 Emo Todorov (UW) AMATH/CSE 579, Winter

More information

EE4601 Communication Systems

EE4601 Communication Systems EE4601 Communication Systems Week 2 Review of Probability, Important Distributions 0 c 2011, Georgia Institute of Technology (lect2 1) Conditional Probability Consider a sample space that consists of two

More information

2) Let X be a compact space. Prove that the space C(X) of continuous real-valued functions is a complete metric space.

2) Let X be a compact space. Prove that the space C(X) of continuous real-valued functions is a complete metric space. University of Bergen General Functional Analysis Problems with solutions 6 ) Prove that is unique in any normed space. Solution of ) Let us suppose that there are 2 zeros and 2. Then = + 2 = 2 + = 2. 2)

More information

Theorem 2.1 (Caratheodory). A (countably additive) probability measure on a field has an extension. n=1

Theorem 2.1 (Caratheodory). A (countably additive) probability measure on a field has an extension. n=1 Chapter 2 Probability measures 1. Existence Theorem 2.1 (Caratheodory). A (countably additive) probability measure on a field has an extension to the generated σ-field Proof of Theorem 2.1. Let F 0 be

More information

Newtonian Mechanics. Chapter Classical space-time

Newtonian Mechanics. Chapter Classical space-time Chapter 1 Newtonian Mechanics In these notes classical mechanics will be viewed as a mathematical model for the description of physical systems consisting of a certain (generally finite) number of particles

More information

Distributionally robust optimization techniques in batch bayesian optimisation

Distributionally robust optimization techniques in batch bayesian optimisation Distributionally robust optimization techniques in batch bayesian optimisation Nikitas Rontsis June 13, 2016 1 Introduction This report is concerned with performing batch bayesian optimization of an unknown

More information

Vector spaces. DS-GA 1013 / MATH-GA 2824 Optimization-based Data Analysis.

Vector spaces. DS-GA 1013 / MATH-GA 2824 Optimization-based Data Analysis. Vector spaces DS-GA 1013 / MATH-GA 2824 Optimization-based Data Analysis http://www.cims.nyu.edu/~cfgranda/pages/obda_fall17/index.html Carlos Fernandez-Granda Vector space Consists of: A set V A scalar

More information

Expectation. DS GA 1002 Statistical and Mathematical Models. Carlos Fernandez-Granda

Expectation. DS GA 1002 Statistical and Mathematical Models.   Carlos Fernandez-Granda Expectation DS GA 1002 Statistical and Mathematical Models http://www.cims.nyu.edu/~cfgranda/pages/dsga1002_fall16 Carlos Fernandez-Granda Aim Describe random variables with a few numbers: mean, variance,

More information

Review Sheet for the Final

Review Sheet for the Final Review Sheet for the Final Math 6-4 4 These problems are provided to help you study. The presence of a problem on this handout does not imply that there will be a similar problem on the test. And the absence

More information

Some Background Math Notes on Limsups, Sets, and Convexity

Some Background Math Notes on Limsups, Sets, and Convexity EE599 STOCHASTIC NETWORK OPTIMIZATION, MICHAEL J. NEELY, FALL 2008 1 Some Background Math Notes on Limsups, Sets, and Convexity I. LIMITS Let f(t) be a real valued function of time. Suppose f(t) converges

More information

Integrating Correlated Bayesian Networks Using Maximum Entropy

Integrating Correlated Bayesian Networks Using Maximum Entropy Applied Mathematical Sciences, Vol. 5, 2011, no. 48, 2361-2371 Integrating Correlated Bayesian Networks Using Maximum Entropy Kenneth D. Jarman Pacific Northwest National Laboratory PO Box 999, MSIN K7-90

More information

The method of lines (MOL) for the diffusion equation

The method of lines (MOL) for the diffusion equation Chapter 1 The method of lines (MOL) for the diffusion equation The method of lines refers to an approximation of one or more partial differential equations with ordinary differential equations in just

More information

MATH 51H Section 4. October 16, Recall what it means for a function between metric spaces to be continuous:

MATH 51H Section 4. October 16, Recall what it means for a function between metric spaces to be continuous: MATH 51H Section 4 October 16, 2015 1 Continuity Recall what it means for a function between metric spaces to be continuous: Definition. Let (X, d X ), (Y, d Y ) be metric spaces. A function f : X Y is

More information

SYMPLECTIC GEOMETRY: LECTURE 5

SYMPLECTIC GEOMETRY: LECTURE 5 SYMPLECTIC GEOMETRY: LECTURE 5 LIAT KESSLER Let (M, ω) be a connected compact symplectic manifold, T a torus, T M M a Hamiltonian action of T on M, and Φ: M t the assoaciated moment map. Theorem 0.1 (The

More information

Lecture 7 - Separable Equations

Lecture 7 - Separable Equations Lecture 7 - Separable Equations Separable equations is a very special type of differential equations where you can separate the terms involving only y on one side of the equation and terms involving only

More information

Functional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability...

Functional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability... Functional Analysis Franck Sueur 2018-2019 Contents 1 Metric spaces 1 1.1 Definitions........................................ 1 1.2 Completeness...................................... 3 1.3 Compactness......................................

More information

04. Random Variables: Concepts

04. Random Variables: Concepts University of Rhode Island DigitalCommons@URI Nonequilibrium Statistical Physics Physics Course Materials 215 4. Random Variables: Concepts Gerhard Müller University of Rhode Island, gmuller@uri.edu Creative

More information

56 4 Integration against rough paths

56 4 Integration against rough paths 56 4 Integration against rough paths comes to the definition of a rough integral we typically take W = LV, W ; although other choices can be useful see e.g. remark 4.11. In the context of rough differential

More information

Lecture 6 Basic Probability

Lecture 6 Basic Probability Lecture 6: Basic Probability 1 of 17 Course: Theory of Probability I Term: Fall 2013 Instructor: Gordan Zitkovic Lecture 6 Basic Probability Probability spaces A mathematical setup behind a probabilistic

More information

Lecture Notes 1: Vector spaces

Lecture Notes 1: Vector spaces Optimization-based data analysis Fall 2017 Lecture Notes 1: Vector spaces In this chapter we review certain basic concepts of linear algebra, highlighting their application to signal processing. 1 Vector

More information

Continuity. Matt Rosenzweig

Continuity. Matt Rosenzweig Continuity Matt Rosenzweig Contents 1 Continuity 1 1.1 Rudin Chapter 4 Exercises........................................ 1 1.1.1 Exercise 1............................................. 1 1.1.2 Exercise

More information

Gaussian Estimation under Attack Uncertainty

Gaussian Estimation under Attack Uncertainty Gaussian Estimation under Attack Uncertainty Tara Javidi Yonatan Kaspi Himanshu Tyagi Abstract We consider the estimation of a standard Gaussian random variable under an observation attack where an adversary

More information

Filtrations, Markov Processes and Martingales. Lectures on Lévy Processes and Stochastic Calculus, Braunschweig, Lecture 3: The Lévy-Itô Decomposition

Filtrations, Markov Processes and Martingales. Lectures on Lévy Processes and Stochastic Calculus, Braunschweig, Lecture 3: The Lévy-Itô Decomposition Filtrations, Markov Processes and Martingales Lectures on Lévy Processes and Stochastic Calculus, Braunschweig, Lecture 3: The Lévy-Itô Decomposition David pplebaum Probability and Statistics Department,

More information

Introduction: The Perceptron

Introduction: The Perceptron Introduction: The Perceptron Haim Sompolinsy, MIT October 4, 203 Perceptron Architecture The simplest type of perceptron has a single layer of weights connecting the inputs and output. Formally, the perceptron

More information

Variations. ECE 6540, Lecture 10 Maximum Likelihood Estimation

Variations. ECE 6540, Lecture 10 Maximum Likelihood Estimation Variations ECE 6540, Lecture 10 Last Time BLUE (Best Linear Unbiased Estimator) Formulation Advantages Disadvantages 2 The BLUE A simplification Assume the estimator is a linear system For a single parameter

More information

SOLUTIONS OF SELECTED PROBLEMS

SOLUTIONS OF SELECTED PROBLEMS SOLUTIONS OF SELECTED PROBLEMS Problem 36, p. 63 If µ(e n < and χ En f in L, then f is a.e. equal to a characteristic function of a measurable set. Solution: By Corollary.3, there esists a subsequence

More information

much more on minimax (order bounds) cf. lecture by Iain Johnstone

much more on minimax (order bounds) cf. lecture by Iain Johnstone much more on minimax (order bounds) cf. lecture by Iain Johnstone http://www-stat.stanford.edu/~imj/wald/wald1web.pdf today s lecture parametric estimation, Fisher information, Cramer-Rao lower bound:

More information

Homogenization with stochastic differential equations

Homogenization with stochastic differential equations Homogenization with stochastic differential equations Scott Hottovy shottovy@math.arizona.edu University of Arizona Program in Applied Mathematics October 12, 2011 Modeling with SDE Use SDE to model system

More information

Math 127C, Spring 2006 Final Exam Solutions. x 2 ), g(y 1, y 2 ) = ( y 1 y 2, y1 2 + y2) 2. (g f) (0) = g (f(0))f (0).

Math 127C, Spring 2006 Final Exam Solutions. x 2 ), g(y 1, y 2 ) = ( y 1 y 2, y1 2 + y2) 2. (g f) (0) = g (f(0))f (0). Math 27C, Spring 26 Final Exam Solutions. Define f : R 2 R 2 and g : R 2 R 2 by f(x, x 2 (sin x 2 x, e x x 2, g(y, y 2 ( y y 2, y 2 + y2 2. Use the chain rule to compute the matrix of (g f (,. By the chain

More information

Expectation. DS GA 1002 Probability and Statistics for Data Science. Carlos Fernandez-Granda

Expectation. DS GA 1002 Probability and Statistics for Data Science.   Carlos Fernandez-Granda Expectation DS GA 1002 Probability and Statistics for Data Science http://www.cims.nyu.edu/~cfgranda/pages/dsga1002_fall17 Carlos Fernandez-Granda Aim Describe random variables with a few numbers: mean,

More information

= F (b) F (a) F (x i ) F (x i+1 ). a x 0 x 1 x n b i

= F (b) F (a) F (x i ) F (x i+1 ). a x 0 x 1 x n b i Real Analysis Problem 1. If F : R R is a monotone function, show that F T V ([a,b]) = F (b) F (a) for any interval [a, b], and that F has bounded variation on R if and only if it is bounded. Here F T V

More information

Design of Optimal Quantizers for Distributed Source Coding

Design of Optimal Quantizers for Distributed Source Coding Design of Optimal Quantizers for Distributed Source Coding David Rebollo-Monedero, Rui Zhang and Bernd Girod Information Systems Laboratory, Electrical Eng. Dept. Stanford University, Stanford, CA 94305

More information

n [ F (b j ) F (a j ) ], n j=1(a j, b j ] E (4.1)

n [ F (b j ) F (a j ) ], n j=1(a j, b j ] E (4.1) 1.4. CONSTRUCTION OF LEBESGUE-STIELTJES MEASURES In this section we shall put to use the Carathéodory-Hahn theory, in order to construct measures with certain desirable properties first on the real line

More information

Vector Spaces. Vector space, ν, over the field of complex numbers, C, is a set of elements a, b,..., satisfying the following axioms.

Vector Spaces. Vector space, ν, over the field of complex numbers, C, is a set of elements a, b,..., satisfying the following axioms. Vector Spaces Vector space, ν, over the field of complex numbers, C, is a set of elements a, b,..., satisfying the following axioms. For each two vectors a, b ν there exists a summation procedure: a +

More information

CROSS-VALIDATION OF CONTROLLED DYNAMIC MODELS: BAYESIAN APPROACH

CROSS-VALIDATION OF CONTROLLED DYNAMIC MODELS: BAYESIAN APPROACH CROSS-VALIDATION OF CONTROLLED DYNAMIC MODELS: BAYESIAN APPROACH Miroslav Kárný, Petr Nedoma,Václav Šmídl Institute of Information Theory and Automation AV ČR, P.O.Box 18, 180 00 Praha 8, Czech Republic

More information

The incompressible Navier-Stokes equations in vacuum

The incompressible Navier-Stokes equations in vacuum The incompressible, Université Paris-Est Créteil Piotr Bogus law Mucha, Warsaw University Journées Jeunes EDPistes 218, Institut Elie Cartan, Université de Lorraine March 23th, 218 Incompressible Navier-Stokes

More information

Exercises Measure Theoretic Probability

Exercises Measure Theoretic Probability Exercises Measure Theoretic Probability 2002-2003 Week 1 1. Prove the folloing statements. (a) The intersection of an arbitrary family of d-systems is again a d- system. (b) The intersection of an arbitrary

More information

Higher-order ordinary differential equations

Higher-order ordinary differential equations Higher-order ordinary differential equations 1 A linear ODE of general order n has the form a n (x) dn y dx n +a n 1(x) dn 1 y dx n 1 + +a 1(x) dy dx +a 0(x)y = f(x). If f(x) = 0 then the equation is called

More information

Data Fitting - Lecture 6

Data Fitting - Lecture 6 1 Central limit theorem Data Fitting - Lecture 6 Recall that for a sequence of independent, random variables, X i, one can define a mean, µ i, and a variance σi. then the sum of the means also forms a

More information

TD 1: Hilbert Spaces and Applications

TD 1: Hilbert Spaces and Applications Université Paris-Dauphine Functional Analysis and PDEs Master MMD-MA 2017/2018 Generalities TD 1: Hilbert Spaces and Applications Exercise 1 (Generalized Parallelogram law). Let (H,, ) be a Hilbert space.

More information

Real Analysis Math 131AH Rudin, Chapter #1. Dominique Abdi

Real Analysis Math 131AH Rudin, Chapter #1. Dominique Abdi Real Analysis Math 3AH Rudin, Chapter # Dominique Abdi.. If r is rational (r 0) and x is irrational, prove that r + x and rx are irrational. Solution. Assume the contrary, that r+x and rx are rational.

More information

Detecting Parametric Signals in Noise Having Exactly Known Pdf/Pmf

Detecting Parametric Signals in Noise Having Exactly Known Pdf/Pmf Detecting Parametric Signals in Noise Having Exactly Known Pdf/Pmf Reading: Ch. 5 in Kay-II. (Part of) Ch. III.B in Poor. EE 527, Detection and Estimation Theory, # 5c Detecting Parametric Signals in Noise

More information

The main results about probability measures are the following two facts:

The main results about probability measures are the following two facts: Chapter 2 Probability measures The main results about probability measures are the following two facts: Theorem 2.1 (extension). If P is a (continuous) probability measure on a field F 0 then it has a

More information