Explicit Lp-norm estimates of infinitely divisible random vectors in Hilbert spaces with applications

Size: px
Start display at page:

Download "Explicit Lp-norm estimates of infinitely divisible random vectors in Hilbert spaces with applications"

Transcription

1 University of Tennessee, Knoxville Trace: Tennessee Research and Creative Exchange Doctoral Dissertations Graduate School Exlicit L-norm estimates of infinitely divisible random vectors in Hilbert saces with alications Matthew D Turner mturne2@utk.edu Recommended Citation Turner, Matthew D, "Exlicit L-norm estimates of infinitely divisible random vectors in Hilbert saces with alications. " PhD diss., University of Tennessee, 211. htt://trace.tennessee.edu/utk_graddiss/135 This Dissertation is brought to you for free and oen access by the Graduate School at Trace: Tennessee Research and Creative Exchange. It has been acceted for inclusion in Doctoral Dissertations by an authorized administrator of Trace: Tennessee Research and Creative Exchange. For more information, lease contact trace@utk.edu.

2 To the Graduate Council: I am submitting herewith a dissertation written by Matthew D Turner entitled "Exlicit L-norm estimates of infinitely divisible random vectors in Hilbert saces with alications." I have examined the final electronic coy of this dissertation for form and content and recommend that it be acceted in artial fulfillment of the requirements for the degree of Doctor of Philosohy, with a major in Mathematics. We have read this dissertation and recommend its accetance: Xia Chen, Jie Xiong, Mary Leitnaker Original signatures are on file with official student records. Jan Rosinski, Major Professor Acceted for the Council: Dixie L. Thomson Vice Provost and Dean of the Graduate School

3 To the Graduate Council: I am submitting herewith a dissertation written by Matthew D. Turner entitled Exlicit L-norm estimates of infinitely divisible random vectors in Hilbert saces with alications. I have examined the final aer coy of this dissertation for form and content and recommend that it be acceted in artial fulfillment of the requirements for the degree of Doctor of Philosohy, with a major in Mathematics. Jan Rosinski, Major Professor We have read this dissertation and recommend its accetance: Xia Chen Jie Xiong Mary Leitnaker Acceted for the Council: Carolyn R. Hodges Vice Provost and Dean of the Graduate School Original signatures are on file with official student records.

4 Exlicit L-norm estimates of infinitely divisible random vectors in Hilbert saces with alications A Dissertation Presented for the Doctor of Philosohy Degree The University of Tennessee, Knoxville Matthew D. Turner May 211

5 Coyright c 211 by Matthew D. Turner. All rights reserved. ii

6 Dedication I dedicate this dissertation to my wife Heather, sons Jackson and Bryson, arents Tony and Kathy, and brother Mark. Their suort and encouragement have been my motivation. iii

7 Acknowledgments I would like to exress my sincere gratitude to those individuals at the University of Tennessee who have made this dissertation ossible. I am most grateful to my advisor, Dr. Jan Rosiński. It has been my rivilege and leasure to work under his guidance. I would also like to thank the members of my committee, Dr. Xia Chen, Dr. Jie Xiong, and Dr. Mary Leitnaker for their willingness to serve and their critical review of my work. Last, but certainly not least, I would like to thank Dr. William Wade and Mrs. Pam Armentrout for their unending advice and mentoring. iv

8 Abstract I give exlicit estimates of the L-norm of a mean zero infinitely divisible random vector taking values in a Hilbert sace in terms of a certain mixture of the L2- and L-norms of the Levy measure. Using decouling inequalities, the stochastic integral driven by an infinitely divisible random measure is defined. As a first alication utilizing the L-norm estimates, comutation of Ito Isomorhisms for different tyes of stochastic integrals are given. As a second alication, I consider the discrete time signal-observation model in the resence of an alha-stable noise environment. Formulation is given to comute the otimal linear estimate of the system state. v

9 Contents 1 Infinitely Divisible Distributions Introduction L -norm of Hilbert sace valued infinitely divisible random vectors Kalman Filter Kalman filter theory Finite L 2 -norm noise environment α-stable noise environment Exact 1-dimensional filtering Vehicle tracking Aircraft tracking Infinitely Divisible Random Measures Introduction Stochastic integration Sace of integrands The stochastic integral driven by random measures Examles Itô isomorhisms Examles Summary and Future Directions 86 Bibliograhy 88 Aendices 91 A Moments of Indeendent Random Variables and Vectors 92 vi

10 B Modular Saces 111 C Selected Prerequisite Analysis Results 115 C.1 Convergence results C.2 Algebras Vita 12 vii

11 List of Figures 1.1 Exlicit constant in L -norm estimate α-stable Kalman filter for constant velocity 1 dimensional motion D constant velocity model CV D coordinated turn model CT A.1 Grah and aroximations of c and d viii

12 List of Algorithms 1 Kalman filter for Gaussian noise Kalman filter Kalman filter for finite L 2 -norm noise Iteratively reweighted least squares Kalman filter for α-stable noise Kalman filter for 1 dimensional α-stable noise ix

13 Chater 1 Infinitely Divisible Distributions 1.1 Introduction When roducing models of an evolving dynamical system, one is often faced with the challenge of which effects to include in the model and which effects may reasonably be ignored to accurately determine the state of the system. An alternate aroach is to cature these unmodeled effects as random variables or stochastic rocesses, which are often assumed to be Gaussian in the classical literature. Many researchers have sought extensions to such models by relacing the Gaussian assumtion, as there is a need for models caturing observed heavy tailed data exhibiting high variability and/or long range deendency. Infinitely divisible distributions have often been utilized for such modeling. The advantage of infinitely divisible models is their comutability in terms of the Lévy-Khintchine trilet arameterization. Difficulties arise, however, when such distributions have infinite variance, since L 2 -theory and orthogonality are not alicable. Instead, we seek comutation of the L -norm in terms of the Lévy measure. Infinitely divisible distributions are a broad family of distributions containing many named distributions. For examle, the geometric, negative binomial, and Poisson distribution are all discrete distributions in this family. So too are the continuous normal, Cauchy, gamma, F, lognormal, Pareto, Student s t, Weibull, α- stable, and temered α-stable distributions. The following theorem characterizes infinitely divisible random vectors and will be the rimary tool used for investigation 1

14 throughout. For x H, a real Hilbert sace, define x def = x max{ x, 1}. Whenever H = R, we have x if x 1 x = signx if x > 1. Theorem Lévy-Khintchine reresentation. The characteristic function of an infinitely divisible random vector X taking values in a Hilbert sace H can be written as Ee i u, X = ex {i u, b 12 u, Σu + e i u,x 1 i u, x } Qdx, 1.1 H where u, b H, Σ is a nonnegative symmetric oerator on H, and Q is a measure on H such that Q{} = and H x 2 Qdx <. Moreover, the trilet b, Σ, Q comletely determines the distribution of X and this trilet is unique. We call b, Σ, Q the Lévy-Khintchine trilet of X. When Q, X is Gaussian with mean b and covariance matrix Σ and results are well-known. Gaussian case Σ that is of interest to us in the following work. It is the non- When studying infinitely divisible distributions and their associated random vectors, the characteristic function will be our rimary tool. If we define the exonent of 1.1 by Cu def = i u, b 1 2 u, Σu + H e i u,x 1 i u, x Qdx, then C is called the cumulant of X and we have Ee i u,x = e Cu. Moreover, if X is infinitely divisible with Lévy-Khintchine trilet b X, Σ X, Q X and cumulant C X u, Y is infinitely divisible with Lévy-Khintchine trilet b Y, Σ Y, Q Y and cumulant C Y u, and X and Y are indeendent, then X + Y is also infinitely divisible with cumulant C X u + C Y u, and hence, has Lévy-Khintchine trilet b X + b Y, Σ X + Σ Y, Q X + Q Y. As an immediate corollary of the Lévy-Khintchine reresentation, we have that the family of infinitely divisible random vectors are closed under continuous linear transformations and, in articular, rojections of infinitely divisible random vectors are infinitely divisible. More recisely: 2

15 Corollary Let X H be an infinitely divisible random vector with Lévy- Khintchine trilet b, Σ, Q. If F : H H 1 is a continuous linear oerator from the Hilbert sace H into the Hilbert sace H 1, then F X H 1 is also an infinitely divisible random vector with Lévy-Khintchine trilet b F, Σ F, Q F, where and for every B BH 1, def b F = F b + F x F x Qdx, Σ F = F ΣF, H Q F B def = Q {x H : F x B \ {}}. Before roving the corollary, we make a few remarks. First, if Q is a symmetric Lévy measure on H, then Q F is a symmetric Lévy measure on H 1. Second, the integrand in the definition b F is an odd function. Therefore, if b = and Q is symmetric, then b F = also. We oint out these facts since the majority of the examles we consider will make one or both of these assumtions. Proof of Corollary Let F : H H 1 be a continuous linear oerator and let u H 1. Then Ee i u,f X = Ee i F u,x { = ex i F u, b 1 2 F u, ΣF u + e i F u,x 1 i F u, x } Qdx H { = ex i u, F b 1 2 u, F ΣF u + e i u,f x 1 i u, F x } Qdx H { = ex i u, F b + i u, F x i u, F x Qdx 1 H 2 u, F ΣF u + e i u,f x 1 i u, F x } Qdx H { = ex i u, F b + F x F x Qdx 1 H 2 u, F ΣF u + e i u,x 1 i u, x } Q F dx H { 1 = ex i u, b F 1 2 u, Σ F u + e i u,x 1 i u, x } Q F dx. H 1 3

16 In ractice, the normal distribution is justified in its use by the central limit theorem and a oular distribution in modeling because of the ease of comutations when L 2 -orthogonality is alicable. Under the assumtion of non-gaussian distributions, it is often not known how the error should be measured. The next section addresses this question for infinitely divisible distributions. In Chater 2, we will aly this result to obtain the Kalman filter for a discrete time signal-observation model with infinite covariance noise. In Chater 3, we will define the stochastic integral of a stochastic field driven by an infinitely divisible random measure. Itô Isomorhisms will be derived for the stochastic integral. 1.2 L -norm of Hilbert sace valued infinitely divisible random vectors Let X be a mean random vector taking values in a searable Hilbert sace H with characteristic function given by 1.1. When X is urely Gaussian Q, the L -norm of X is controlled by the covariance matrix Σ. In the non-gaussian case, Marcus and Rosiński 21 showed that for X L 1, the L 1 -norm of X is controlled by the Lévy measure Q as.25lq E X 2.125lQ, where the functional l of Q satisfies H { } x 2 min, x Qdx = 1. l 2 l The following theorem generalizes this result to obtain bounds on the L -norm of X. Assume that X is in L for given 1, EX =, and that X does not have a Gaussian comonent. The characteristic function of X can be written as E ex i u, X = ex e i u,x 1 i u, x Qdx. H We assume throughout that Q is symmetric and later remark on removing this restriction by standard symmetrization techniques. Since Q is assumed symmetric, 4

17 the characteristic function of X is E ex i u, X = ex H cos u, x 1 Qdx. It is well known that an infinitely divisible random vector X with Lévy measure Q has finite L -norm if and only if x 1 x Qdx is finite see e.g. Sato 22, Corollary Therefore the Lévy measure Q satisfies H x 2 1 { x <1} + x 1 { x 1} Qdx <. Let the functional l of Q be given by the solution of ξl def = H x 2 l 2 1 { x l <1} + x 1 l { x l 1} Qdx = We remark that x 2 1 { x <1} + x x 2 x if 1 2, 1 { x l} = x 2 x if > 2. We can view l as a secial mixture of the L 2 -norm and L -norm of Q. In the case of non-gaussian infinitely divisible random vectors, the following theorem gives exlicit estimates of the L -norm in terms of the Lévy measure Q. Theorem Let 1. Assume that X L is a mean infinitely divisible random vector without Gaussian comonent, taking values in the Hilbert sace H, and that X has symmetric Lévy measure Q. Then.25l X Kl 1.3 where K def = , if , if 2 < / 4 + K 3, K 4, + 1 1/, if 3 < < , if = 4 K 1, K 2, K 1/ 3, K 4, + 1 1/, if > 4, 5 1.4

18 def where K 1, = /, K 2, = 4 1+1/ 2, K 3, = 4, K 2 2/ , = /2 x + 5 /2, and x solves x = e logx + 1. We remark on imortant cases for the constant K. First, it is the 1 < 2 case that is of most interest to us, as L -theory must be used when working with models containing infinite covariance noise or random driving terms. It is often challenging, if not imossible, to comute such norms directly. Second,we have very nice constants for estimation of the mean, variance, skewness, and kurtosis. Constant K is grahed in Figure 1.1. In rearation of the roof of Theorem 1.2.1, we follow the lead of Marcus and Rosiński 21 and decomose X as X = Y + Z, where Y and Z are indeendent mean zero random vectors with characteristic functions E ex i u, Y = ex cos u, x 1 Qdx x <l and E ex i u, Z = ex cos u, x 1 Qdx, x l resectively. The following four lemmas rovide uer and lower bounds for norms of Y and Z and will be used in the roof of Theorem Lemma We have the following uer bounds on norms of Y : i. If 1 2, then ii. If 2 < 4, then Y Y 4 = Y Y 2 = x <l x <l x 2 Qdx 1/ /4 x 4 Qdx + 3 x Qdx x <l iii. If > 4, then Y K 1, K 2, Y 2 + l, 1.7 6

19 K K Figure 1.1: Exlicit constant in L -norm estimate. where K 1, and K 2, are given in Theorem Proof. 1.5 and 1.6 were roved by Marcus and Rosiński 21, Lemma 1.1. Now let > 4. Let {Y t } t be a Lévy rocess such that Y d 1 = Y. Since the Lévy measure of Y, and hence Y t, is suorted on { x < l}, the samle ath t Y t ω a.s. has no jums of magnitude larger than l on t [, 1]. So there exists Ω Ω with PΩ = 1 such that Y t ω Y t ω l for every ω Ω and for every t [, 1]. For each n N, we may write Y as the sum of n i.i.d. random vectors by Y = d Y 1 Y n 1 n + Y n 1 n Y n 2 n + + def Y 1 Y = n n k=1 k Y, n where Fix ε > and ω Ω. Since k Y def = Y k n n Y k 1. n {t [, 1] : X t ω X t ω l + ε} =, 7

20 standard analysis results give that there exists N = Nω so large that for each n Nω, k Y ω < l + ε n for every 1 k n. For each n N, define a new i.i.d. sequence of bounded random vectors {Y k,n } n k=1 by For each ω Ω, Y k,n for every n Nω. We now have that def = k Y 1 { n k n Y k,n ω = k Y ω n }. Y <l+ε S n def = n Y k,n Y a.s., k=1 since PΩ = 1. Observe that for fixed n, {Y k,n } n k=1 is sequence of symmetric since Q is assumed symmetric i.i.d. random vectors bounded by l + ε. By de la Peña and Giné 1999, Theorem 1.2.5, a Hoffman-Jorgensen tye inequality, for every n N, S n K 1, K 2, S n 2 + max. Y k,n 1 k n But and S n 2 2 = E n k=1 Y 2 k,n n E k=1 Y k,n < l + ε k n 2 Y = E Y 2 for every 1 k n. Hence, for every n N, S n < K 1, K 2, Y 2 + l + ε. By Fatou s lemma and the arbitrariness of ε, Y K 1, K 2, Y 2 + l. 8

21 Lemma We have the following lower bounds on norms of Y : i. If 1 2, then ii. If > 2, then E Y Y Y 2 = E Y l2 + 3E Y 2 2 x <l x 2 Qdx 1/ Proof. Let 1 2. To show 1.8, Holder s inequality gives E Y 2 = E Y Y 4 E Y E Y = E Y 2 4 E Y and hence, E Y E Y E Y Alying 1.6 to the denominator gives E Y E Y l 2 E Y E Y = E Y 2 2, l2 + 3E Y 2 2 roving 1.8. This technique is known as Littlewood s aroach. 1.9 is immediate by 1.5. Lemma We have the following uer bounds on norms of Z: i. If 1 2, then E Z c x Qdx, 1.1 x l where c = If H = R, the constant may be taken as c given by A.12 or A.2 instead. 9

22 ii. If 2 < 3, then E Z x l x Qdx iii. Let λ > /x. If 3 < < 4 or if > 4, then E Z K 3, K 4, x l x l /2 x Qdx 2 + where K 3, and K 4, are given in Theorem iv. Let λ /x. If 3 < < 4 or if > 4, then E Z max log x 1 6, log x x 2 Qdx x 2 Qdx. x l 1.11 x l x l x Qdx, 1.12 x Qdx v. If = 4, then E Z = x l 2 x 4 Qdx + 3 x Qdx x l Proof. First, 1.14 follows exactly as in 1.6 by standard comutation from the characteristic function. Next let λ def = Q x l and {W i } i N a collection of i.i.d. random vectors in H such that PW i A = λ 1 QA { x l}. Let N be a Poisson random variable with mean λ indeendent of {W i } i N. Now Z is a comound Poisson random vector and we have Then Z d = E Z N = E W i = N W i k E W i PN = k k=1 1

23 First let 1 2. By Corollary A.6 if H = R or Theorem A.2 in general, for each k k k N, E W i is bounded above by c E W i = c ke W 1. Utilizing this in 1.16 gives E Z c E W 1 k=1 kpn = k = c E W 1 EN = c E W 1 λ, since N is a Poisson random variable with mean λ. But E W 1 = x λ 1 Qdx and hence, roving 1.1. E Z c x l Next, let 2 < 3. By Theorem A.1, k E W i ke X 1 + = ke X 1 + = ke X 1 + x l x Qdx, 1 E X E X k E S i 1 2 k i 1 E X j 2 j=1 k 2 k E X 1 2 E X Again recalling that N is Poisson, substituting into 1.16 gives E Z k=1 ke X k 2 k E X 1 2 E X 1 2 PN = k 2 = E N E X E N 2 N E X 1 2 E X = λ x λ 1 Qdx + = x l 1 λ 2 x 2 λ 1 Qdx x 2 λ 1 Qdx 4 x l x l x 1 Qdx + x 2 Qdx x 2 Qdx. 4 x l x l x l 11

24 Finally, let > 3. If λ > /x, we have by de la Peña and Giné 1999, Theorem 1.2.5, a Hoffman-Jorgensen tye inequality k E W i 41/+1 2 /+1 4 / /+1 k /+1 W 2 2/+1 i 1 2 k 1/ /+1 E W 2 2/+1 i. 1 By convexity, a + b +1 2 a +1 + b +1 for every a, b. Therefore k E W i Substituting into 1.16 gives E Z = K 3, = K 3, / k W i + 2 k E W i = K 3, k /2 E W 1 2 /2 + ke W1. K 3, k /2 E W 1 2 /2 + ke W1 PN = k k= EN / x l x l /2 x 2 λ Qdx 1 + EN /2 x Qdx 2 λ /2 EN /2 + x l x l x λ 1 Qdx x Qdx To bound λ /2 EN /2, Kwaień and Woyczyński 29, Proosition showed that in the case λ > /x, N /2 N 4 + 5λ.. Hence, λ /2 EN /2 4 /2 λ + 5 /2 4 /2 x + 5 /2 12

25 and we have /2 E Z K 3, x Qdx 2 4 /2 x + 5 /2 + x Qdx x l /2 = K 3, K 4, x Qdx 2 + x l x l x l x Qdx. Now suose that λ /x. For each ω Ω, Holder s inequality gives k W i ω k k 1/ W i ω k 1 1/ W i ω and hence, k E W i k 1 Substituting into 1.16 gives E Z E W 1 k=1 k E W i = k E W 1. k PN = k = x l x λ 1 QdxEN To bound λ 1 EN, Kwaień and Woyczyński 29, Proosition also showed that in the case λ /x, Combining with 1.17 gives λ 1 EN max 1 + E Z max log x 1 8 log x 1 6, log x 6, log x x l x Qdx. 13

26 Lemma If 1, we have the following lower bound on norms of Z: E Z 1 e λ λ x l x Qdx Proof. Let 1. Since we have assumed that Q is symmetric, Lemma A.7 gives Substituting into 1.16 gives k E W i E W 1. E Z k=1 E W 1 PN = k = 1 e λ λ x l x Qdx. We are now ready to rove the uer bound of Theorem using Lemma and Lemma Proof of uer bound of Theorem First assume that 1 2. From 1.5 and 1.1, we have X Y 2 + Z 1/2 x 2 Qdx + c = l x <l x <l x l x 2 1/2 Qdx + l l c x l 1/ x Qdx x 1/ Qdx. l 1.19 By definition 1.2 of l, x l x l Substituting into 1.19 gives X { x <l x l Qdx = 1 x <l x l 2 1/2 Qdx + c 1 x <l 2 Qdx. x l } 2 1/ Qdx l. 14

27 Clearly, by definition 1.2 of l we have and hence, x <l x 2 Qdx 1 l X max a + c 1 a l 1 + c l. a 1 Next, let 2 < 3. Combining 1.6 and 1.11 gives X Y + Z l 2 x 2 Qdx + 3 x <l + x l x <l x <l 2 1/4 x 2 Qdx 1/ x 1 Qdx + x 2 Qdx x 2 Qdx 4 x l x l x 2 x 2 2 1/4 Qdx + 3 Qdx l l 2 l 2 x <l x 1 + Qdx + x l l l. 4 x l Now let 3 < < 4. If λ > /x, 1.12 gives E Z K 3, K 4, x l x 2 l 2 /2 x Qdx 2 + Qdx x l x l x = K 3, K 2 /2 4, Qdx l + x l l 2 K 3, K 4, + 1 l x l x 2 1/ Qdx l l 2 x Qdx x Qdxl l 15

28 and if λ /x, 1.13 gives E Z max 1 + = max 1 + max log 8 log 8 log x 1 x 1 x 1 6, log x 6, log x 6, log x l. x l x l x Qdx x Qdxl l In either case, we have E Z K 3, K 4, + 1 l. This, along with 1.6 gives X Y 4 + Z 4 1/ 4 + K 3, K 4, + 1 1/ l. Now let = 4. Combining 1.6 and 1.14 gives X Y + Z l 2 x 2 Qdx x <l = x l x <l l. x l x 4 Qdx + 3 x 2 l 2 Qdx + 3 x 4 l 4 Qdx + 3 x <l x l x <l x l 2 1/4 x 2 Qdx 2 1/4 x 2 Qdx x 2 2 Qdx l 2 1/4 x 2 2 Qdx l 2 l 1/4 l 16

29 Finally, let > 4. Combining 1.7 and the bounds on Z from the 3 < < 4 case, we have X Y + Z K 1, K 2, Y 2 + l + K 1/ 3, K 4, + 1 1/ l x = K 1, K 2 1/2 2, Qdx l + l + K 1/ x <l l 2 3, K 4, + 1 1/ l K 1, K 2, K 1/ 3, K 4, + 1 1/ l. We are now ready to rove the lower bound of Theorem using Lemma and Lemma Proof of lower bound of Theorem By 1.2, either x 2 x <l l 2 Qdx or x x l l Qdx must be true. Assume 1.2 holds. If 1 2, Lemma A.7 and 1.7 combine to give E X = E Y + Z E Y Since the function t t l 2 + 3t 2 2 is increasing in t, E Y 2 2. l2 + 3E Y 2 2 E X.5l 2 l l = 2.5/2 l l 5 4 and hence, X.25l.25l. 17

30 If > 2, then by Lemma A.7 and 1.9, X Y Y 2 = Now assume 1.21 holds. Then x l x <l x + x l Qdx Now the left hand side simlifies to 1 x 2 2 Qdx.5l >.25l. x l 2 x Qdx l λ, x l x Qdx.5l. where λ = Q x l, and hence, x Qdx l λ = l 4 x l 1 + 2λ. We may combine this with the lower bound inequality in 1.1 and utilize Lemma A.7 as in the above case to get E X E Z 1 e λ λ l λ l 4 and hence, X.25l.25l. In either case, the left hand inequality in 1.3 holds. Recall that we have been working under the assumtion that Q is symmetric. To remove this restriction, assume that X is a mean infinitely divisible random vector in L with Lévy measure Q and let X s be the standard symmetrization of X. The Lévy measure of X s is given by Q s A = QA + Q A and if c solves 1.2 for Q s, we have that c also solves x 2 1 c 2 { x <c} + x c 1 { x c} Qdx = H 18

31 By Corollary A.8 and Theorem 1.2.1, 1 8 c 1 2 Xs X X s Kc. Now let l solve and H x 2 1 l 2 { x <l} + x 1 l { x l} Qdx = k def = 2, if 1 2 2, if > 2. Then k > 1 and if 1 2, we have H min { } x 2 { kl, x 1 Qdx max 2 kl k, 1 } { min 2 k H or if > 2, we have H max { } x 2 { kl, x 1 Qdx max 2 kl k, 1 } { max 2 k H x 2 l 2 x 2 l 2 }, x Qdx = 1 l k = 1 2 }, x Qdx = 1 l k = In either case, c kl since c solves Clearly, l c since l solves We have roven the following corollary to Theorem 1.2.1: Corollary Let 1. Assume that X L is a mean infinitely divisible random vector without Gaussian comonent, taking values in the Hilbert sace H, and that X has Lévy measure Q. Let l be the solution of ξl def = H x 2 1 l 2 { x <l} + x 1 l { x l} Qdx = Then.125l X max{ 2, 2}Kl 1.25 where K is given by 1.4. The last corollary to Theorem that we resent gives quick estimation of the L -norm of X in terms of the functional ξl. 19

32 Corollary Under the assumtions of Theorem 1.2.1, if ξl is given by 1.2, then {.25 min } { ξ1, ξ1 X K max } ξ1, ξ1. Similarly, under the assumtions of Corollary 1.2.6, {.125 min } ξ1, ξ1 X max{ 2, { 2}K max } ξ1, ξ1. Proof. First, suose 1 2. If l < 1, l 2 = x <l x <l = ξ1 x 2 Qdx + x 2 Qdx + x l l x <1 l 2 x Qdx x 2 x Qdx + x 1 x Qdx and l = l 2 x 2 Qdx + x Qdx x <l x l x 2 Qdx + x 2 Qdx + x Qdx x <l l x <1 x 1 = ξ1. If l 1, similar arguments give l 2 ξ1 and l ξ1. In either case we have { min } { ξ1, ξ1 l max } ξ1, ξ1. Similar arguments give the > 2 case. 2

33 Chater 2 Kalman Filter 2.1 Kalman filter theory In his landmark aer, Kalman 196 considered the discrete time signal-observation model x k = F k x k 1 + B k u k + w k y k = H k x k + v k, where x k is the state of an evolving dynamical system at time k, u k is a deterministic control inut to the system, and y k is a noisy linear observation of x k. The noise terms {w k } and {v k } are assumed to be mean Gaussian random vectors with covariance matrices W k and V k, resectively. In filter theory, the objective is to roduce an efficient estimate x k of the unobservable rocess x k using the observed values y 1, y 2,..., y k, which are known at time k. An efficient estimate is one that minimizes some exected loss of the error x k x k. In his aer, Kalman 196 showed that x def k = E x k y 1, y 2,..., y k minimizes the L 2 -norm of the error and gave a recursive formulation for comuting the estimate x k. Under the assumtion of normally distributed noise terms, the orthogonal rojection x k is an affine transformation of the observations y 1, y 2,..., y k. Let ˆx k k 1 be the redicted state of the system at time k, given that the observations y 1, y 2,..., y k 1 are known at time k 1. Then, at time k, the observation y k becomes available and we may udate our state estimate. Let ˆx k k be the udated estimate of the system state at time k once the observation y k has become available. 21

34 We denote by P k k the covariance matrix of the error x k ˆx k k and by P k k 1 the covariance matrix of the error x k ˆx k k 1. The recursively formulated solution given by Kalman 196 to comute x k = ˆx k k is given in Algorithm 1. The filter ˆx k k is a linear combination of the redicted state ˆx k k 1 and the observation y k. The otimal Kalman gain K k in Algorithm 1 is chosen to minimize the L 2 -norm of the error x k ˆx k k and is given by K k = P k k 1 H T k Hk P k k 1 H T k + V k Over the years since this ublication, some research has focused on relacing the noise terms by random vectors with heavy-tailed distributions. Gordon et al. 23, Introduction argued for the need of models allowing heavy tailed error estimates as outlying system state realizations and/or observation measurements have long been known to adversely affect the estimation rocedure. In Gordon et al. 23, the authors assume that the noise terms are ower law distributed and give the Kalman filter in terms of the tail covariance matrices of the noise terms. Stuck 1978 first addressed this model under the assumtion that both x k and y k are R- valued and each noise sequence {w k } and {v k } are α-stable random variables for fixed α. These examles fall under a more general framework for which the noise sequences are assumed to be symmetric infinitely divisible random vectors. In what follows, we establish a general framework to exlore the Kalman filter under this assumtion on the distributions of the noise sequences and demonstrate in two different examles that a solution can often be obtained or aroximated. The first examle assumes that each noise term has finite L 2 -norm, but makes no other assumtions on the distributions. The second examle considers the roblem for α-stable distributed noise sequences, which was first addressed in dimension 1 by Stuck 1978 and then in Gordon et al. 23. In each examle, a tractable aroximate solution is given. Each solution is exact in dimension 1 and agrees with the classic Kalman gain 2.1 when α = 2 in the second examle. Before we begin, we should oint out that these solutions are only otimal in the linear sense. Kalman 196 noted that, under the assumtion that the noise terms are normally distributed, the orthogonal rojection E x k y 1, y 2,..., y k is a linear function of the observations y 1, y 2,..., y k. However, by removing this assumtion, this is no longer the case. In general, the L 2 -orthogonal rojection E x k y 1, y 2,..., y k is non-linear and non-linear filtering theory may give better results. If we are seeking the 22

35 Algorithm 1 Kalman filter for Gaussian noise. 1: Initialize: def ˆx = Ex = P = W 2: Predict: def ˆx k k 1 = F kˆx k 1 k 1 + B k u k unbiased estimate P k k 1 = F k P k 1 k 1 Fk T + W k 3: Udate: K k = P k k 1 Hk T Hk P k k 1 Hk T + V 1 k def ˆx k k = ˆx k k 1 + K k yk H kˆx k k 1 P k k = I K k H k P k k 1 otimal solution x k minimizing, say, the L -norm of the error x k x k, the conventional conditional exected value is no longer even the otimal solution. Instead, it will be the conditional L -exected value E x k y 1, y 2,..., y k that minimizes the L -norm of the error. However, the linear formulation has the desirable roerty of being easily imlemented and are the only estimates we consider. To this end, consider the discrete time signal-observation model x k = F k x k 1 + B k u k + w k y k = H k x k + v k, 2.2 where x k R d, F k R d d, u k R n, B k R d n, y k R m, and H k R m d. Assume that the system noise {w k } k N are indeendent symmetric R d -valued random vectors with the Lévy-Khintchine trilets w k,, Q w,k, k = 1, 2,..., where, for each k, Q w,k is a symmetric Lévy measure on R d, that the observation noise {v k } k N are indeendent symmetric R m -valued random vectors with the Lévy- Khintchine trilets v k,, Q v,k, k = 1, 2,..., where, for each k, Q v,k is a symmetric Lévy measure on R m, and that x R d is a symmetric infinitely divisible random vector with Lévy-Khintchine trilet x,, Q w,, 23

36 where Q w, is a symmetric Lévy measure on R d. Moreover, assume that the sequence of random vectors {x, w 1, v 1, w 2, v 2,... } are mutually indeendent. Finally, assume that for some fixed 1, we have that both for each k =, 1, 2,..., and that R d x 1 { x 1} Q w,k dx < R m x 1 { x 1} Q v,k dx < for each k = 1, 2, 3,.... Restricting ourselves to linear estimates, the Kalman filter algorithm is given by Algorithm 2. Let e k k be the udated estimate error, e k k 1 the redicted estimate error, and observe that e = x, e k k 1 def = x k ˆx k k 1 = F k x k 1 + B k u k + w k F kˆx k 1 k 1 + B k u k = F k e k 1 k 1 + w k, and e k k def = x k ˆx k k = x k ˆx k k 1 + K k yk H kˆx k k 1 = x k ˆx k k 1 K k H k x k + v k + K k H kˆx k k 1 = e k k 1 K k H k xk ˆx k k 1 Kk v k = I d K k H k e k k 1 K k v k = I d K k H k F k e k 1 k 1 + w k Kk v k. First, we remark that e k 1 k 1, w k, and v k are indeendent. Second, K k v k is a symmetric random vector. These two facts, along with Corollary 1.1.2, imly that the udated error e k k is an infinitely divisible random vector on R d and, since each 24

37 Algorithm 2 Kalman filter 1: Initialize: def ˆx = Ex = 2: Predict: def ˆx k k 1 = F kˆx k 1 k 1 + B k u k unbiased estimate 3: Udate: def ˆx k k = ˆx k k 1 + K k yk H kˆx k k 1 Lévy measure Q, is symmetric, the Lévy-Khintchine trilet is given by e,, Q w, def =,, Q, e k k,, Q k 1 I d K k H k F k + Q w,k I d K k H k + Q v,k def K k =,, Q k, k = 1, 2, We recall from Corollary that the subscrit notation Q v,k K k reresents a new Lévy measure on R d given by for every B BR d. Q v,k K k B def = Q v,k {x R m : K k x B \ {}}, magnitude of the error by l k, where l k solves R d x 2 In light of Section 1.2, for every k, we may measure the l 2 k 1 { x l k <1 } + x l k 1 { } x >1 l k Q k dx = The otimal Kalman gain K k R d m is chosen to minimize l k. While no closed form solution exists for such arbitrary Lévy measures, we demonstrate aroximate solutions in the following two examles. The first will deal with the case that = 2 and the Lévy measures are arbitrary. The second examle will deal with the symmetric α-stable case. Often, we will need to comute Q k iteratively, as oosed to recursively as in 2.3. To do so, observe that if Q is a measure on R n, G R q n, and H R r q, then Q G H is a measure on R r and we have, for B B R r, Q G H B = Q G {x R q : Hx B \ {}} = Q {x R n : Gx {x R q : Hx B \ {}} \ {}} = Q {x R n : HGx B \ {}} = Q HG B. 25

38 Using this rule that Q G H = Q HG, we may derive the following formulation of 2.3: Theorem The recursively defined Lévy measure Q k in 2.3 is Q k = Q w, k 1 i= I d K k i H k i F k i k + Q w,j k j 1 j=1 + i= I d K k i H k i F k ii d K j H j Qv,j k j 1 i= I d K k i H k i F k ik j, 2.5 where the roduct notation is understood to be right multilication and equal to the identity matrix when the roduct is emty. 2.2 Finite L 2 -norm noise environment Suose now that = 2, so that each noise w k and v k has finite L 2 -norm. The integrand of 2.4 is no longer iecewise, simlifying comutations. Since each L 2 - norm is finite, the second moments of w k and v k are finite and given by W k def = R d x 2 Q w,k dx and V k def = R m x 2 Q v,k dx, resectively. Then the initial and udated errors are given by l 2 = x 2 Q dx = R d x 2 Q w, dx = W R d and lk 2 = x 2 Q k dx R d = x 2 Q k 1 I d K k H k F k + Q w,k I d K k H k + Q v,k K k dx R d 26

39 = I d K k H k F k x 2 Q k 1 dx R d + I d K k H k x 2 Q w,k dx + R d R m K k x 2 Q v,k. Instead of minimizing l k, will minimize an uer bound on l k. Using the subordinate matrix 2-norm induced by the Euclidean vector norm, we can bound the magnitude of the udated error by l 2 k I d K k H k F k 2 2 Let us define + I d K k H k 2 2 ˆl2 def = l 2 and x 2 Q k 1 dx R d R d x 2 Q w,k dx + K k 2 2 R m x 2 Q v,k dx = I d K k H k F k 2 2 l2 k 1 + I d K k H k 2 2 W k + K k 2 2 V k. ˆl2 k def = I d K k H k F k 2 2 ˆl 2 k 1 + I d K k H k 2 2 W k + K k 2 2 V k. 2.6 The above definitions allow us to iteratively udate our error estimates using only the revious error udate. Now we must determine an aroximating rocedure that minimizes ˆl k k. While the subordinate matrix 2-norm has the desirable roerty that I 2 = 1, it resents a challenge in minimizing ˆl k k. For a matrix A, the Frobenius norm A F def = trace A T A, 2.7 while larger than the subordinate matrix 2-norm A 2 2, is easier to comute. To this end, we may bound 2.6 by ˆl2 k I d K k H k F k 2 F ˆl 2 k 1 + I d K k H k 2 F W k + K k 2 F V k. 2.8 The right hand side is now easy to minimize by recognizing it as a multivariate multile regression minimizing the residual sum of squares of the model [ˆlk 1 I d Wk I d d m ] = K k [ˆlk 1 H k F k Wk H k Vk I m ]. 27

40 It is well known that for a multile multivariate linear regression model Y = BX, the least squares estimate of the matrix B is Y X T XX T 1. Hence K k = 1 ˆl2 k 1Fk T Hk T + W k Hk ˆl2 T k 1H k F k Fk T Hk T + W k H k Hk T + V k I m. The above solution is exact in 1 dimension, since the matrix norms 2 and F are relaced by, and coincides with the classic Kalman filter. The algorithm is summarized in Algorithm α-stable noise environment For the next examle, fix 1 < α < 2 and assume that x is known, so that Q w, = δ. Assume that the signal noise sequence has the form w k = G w k, where G R d q and w k are R q -valued rotationally invariant α-stable random vectors with Lévy measures Q w,k dx def = ck w x α q dx. By Corollary 1.1.2, w k are infinitely divisible R d -valued random vectors with Lévy-Khintchine trilets,, Q w,k def =,, Q w,k G. Assume v k are R m -valued rotationally invariant α-stable random vectors with Lévy measures Q v,k dx def = c v k x α m dx. Before determining the Kalman gain, we will need the following comutations in the analysis of this roblem: Fix 1 < α and let A R d d. I denote by σ the uniform measure on the unit shere. Then x R l d { x <l} Q w,k A dx = 1 Ax 2 1 l 2 { Ax <l} Q w,k G R q dx = 1 AGx l R { AGx <l} ck w x α q dx q = c k w AGru 2 1 l 2 { AGru <l} ru α q σdur q 1 dr S q 1 = c k w AGu 2 1 l 2 { AGu <l/r} σdur 1 α dr S q 1 = c k w AGu 2 1 l 2 {r<l/ AGu, AGu =} r 1 α drσdu S q 1 = c k w 1 AGu α σdu, 2.9 l α 2 α S q 1 28

41 Algorithm 3 Kalman filter for finite L 2 -norm noise. 1: Initialize: ˆx = Ex = ˆl2 = W 2: Predict: ˆx k k 1 = F kˆx k 1 k 1 + B k u k 3: Udate: K k = ˆl2 k 1 F k T HT k + W khk T ˆx k k = ˆx k k 1 + K k yk H kˆx k k 1 ˆl2 k 1 H kf k F T k HT k + W kh k H T k + V ki m 1 ˆl2 k = I d K k H k F k 2 2 ˆl 2 k 1 + I d K k H k 2 2 W k + K k 2 2 V k. and, similarly, x R l d Also, if A R d m, then x R l d 2 2 and, similarly, 1 { x l} Q w,k A dx = c k w 1 AGu α σdu. 2.1 l α α S q 1 1 { x <l} Q v,k A dx = 1 l 2 R m Ax 2 1 { Ax <l} Q v,k dx x R l d 2 2 = 1 Ax l R { Ax <l} c v k x α m dx m = cv k Aru 2 1 l 2 { Aru <l} ru α m σdur m 1 dr S m 1 = cv k Au 2 1 l 2 { Au <l/r} σdur 1 α dr S m 1 = cv k Au 2 1 l 2 {r<l/ Au, Au =} r 1 α drσdu S m 1 = cv k 1 Au α σdu, 2.11 l α 2 α S m 1 1 { x <l} Q v,k A dx = cv k 1 Au α σdu l α 2 α S m 1 We are now ready to comute the estimated error l k. To comute the first integral in the functional equation 2.4 for l k, we use the iterative formulation and the integral 29

42 formulas 2.9 and 2.11 to get 2 x 1 R l 2 { x <lk }Q k dx d k 2 x = 1 R l 2 { x <lk }Q w, k 1 dx d i= k I d K k i H k i F k i 2 k x + 1 R l 2 { x <lk } Q w,j k j 1 dx d i= I d K k i H k i F k ii d K j H j k j=1 2 k x + 1 R l 2 { x <lk } Q v,j k j 1 dx d i= I d K k i H k i F k ik j k j=1 1 k k j 1 = c w lk α j I d K k i H k i F k i I d K j H j Gu 2 α j=1 S q 1 i= 1 k k j 1 α + c v j I d K k i H k i F k i K j u σdu 2 α l α k j=1 S m 1 i= α σdu and similarly, using the integral formulas 2.1 and 2.12, we have that the second integral in the functional equation 2.4 for l k is l k R d x = + l α k l α k 1 { x lk k 1 }Q k dx 1 k α 1 k α j=1 j=1 c w j c v j S q 1 S m 1 I d K k i H k i F k i I d K j H j Gu α I d K k i H k i F k i K j u σdu. k j 1 i= k j 1 i= Since l k satisfies 2.4, the two comutations above combine to give j=1 S q 1 i= α σdu 1 lk α = 2 α + 1 k k j 1 α c v j I d K k i H k i F k i K j u σdu α j=1 S m 1 i= k k j 1 α + cj w I d K k i H k i F k i I d K j H j Gu σdu While no closed form solution exists for K k minimizing l k excet in the 1-dimensional case, we can get a tractable roblem, as we did in the = 2 examle, by minimizing 3

43 an uer bound of l k. Define def 1 W k = ck w 2 α + 1 σ S q 1 α and def 1 V k = c v k 2 α + 1 σ S m 1. α Observe that l α = and that 1 lk α = 2 α + 1 c v k K k u α σdu α S m 1 k 1 k j 1 + c v j j=1 S I d K k H k F k I d K k i H k i F k i K j u m 1 + ck w I d K k H k Gu α σdu S q 1 k 1 k j 1 + cj w I d K k H k F k I d K k i H k i F k i j=1 S q 1 I d K j H j Gu α α σdu σdu I d K k H k F k α 2 lα k 1 + I d K k H k G α 2 W k + K k α 2 V k, 2.14 def where, for a matrix A, A 2 = max x =1 Ax is the subordinate matrix 2-norm induced by the Euclidean vector norm. As we did in the = 2 case, we consider ˆlα k def = I d K k H k F k α 2 ˆl α k 1 + I d K k H k G α 2 W k + K k α 2 V k 2.15 instead of l k. The above iterative definition will allow us to minimize the convenient uer bound ˆl k of l k. As before, using these uer bounds, our error estimates may be udated using only the revious estimated error. Now we must determine an aroximating rocedure that minimizes ˆl k. As we did in the = 2 case, we will minimize the Frobenius norm F see 2.7 for definition 31

44 instead of the subordinate matrix 2-norm 2. To this end, we may bound 2.15 by ˆlα k = I d K k H k F k α 2 ˆl α k 1 + I d K k H k G α 2 W k + K k α 2 V k I d K k H k F k α F ˆl α k 1 + I d K k H k G α F W k + K k α F V k = ˆl k 1 α I d K k H k F k α 2 F I d K k H k F k 2 F + W k I d K k H k G α 2 F I d K k H k G 2 F + V k K k α 2 F K k 2 F, 2.16 the right hand side now being easier to minimize as follows: suose that we have an estimate K t k for K k. Then we may iteratively imrove our estimate of K k by finding K t+1 k w t 1 minimizing I d K t+1 2 k H k F k + w t 2 I d K k H k G 2 F + wt 3 F K t+1 k 2 F, 2.17 where w t 1 w t 2 def = ˆl k 1 α I d K t k def = W k I d K t k H k H k α 2 F k, F G, α 2 F and w t 3 K t = V k def k α 2 F We may recognize 2.17 as a multivariate multile regression minimizing the residual sum of squares of the model [ w t 1 F k w t 2 G d m ]. [ = K t+1 k w t 1 H k F k w t 2 H k G w t 3 I m ]. It is well known that for a multile multivariate linear regression model Y = BX, the least squares estimate of the matrix B is Y X T XX T 1. Hence K t+1 k = w t 1 F k Fk T Hk T + w t 2 GG T Hk T 1 w t 1 H k F k Fk T Hk T + w t 2 H k GG T Hk T + w t 3 I m. 32

45 This aroximating technique is known as iteratively reweighted least squares. See, for examle, Gentle 27, L norms and Iteratively Reweighted Least Squares, g. 232 for an overview. Iteratively reweighted least squares aroximates K k minimizing 2.16 by K k = lim t K t k. The above rocedure is easily imlemented on a comuter and allows us to aroximate the otimal Kalman gain K k using the iteratively reweighted least squares algorithm. We may initialize the algorithm by the least squares solution, where w 1 1, w 1 2, and w 1 3 are taken to be 1, and comute the error to be any matrix norm of the difference K t+1 k K t k. The iteratively reweighted least squares algorithm is imlemented in Algorithm 4 and the Kalman filter is imlemented in Algorithm 5. Algorithm 5 can become unstable over time due to the fact that we are not actually keeing track of the actual errors, but instead, an uer bound on the errors using the matrix norm inequality AB 2 A 2 B 2. At each ste, we used this inequality, and hence our estimated error ˆl k tends to be much larger than the actual error l k. If we are only tracking the target short term, Algorithm 5 works very well. However, for long term tracking we may imrove estimation of x k at the exense of comutational inefficiency by keeing track of more of the matrix multilications in 2.13 instead of aroximating the error by If we are filtering off-line and comutational seed is not a riority, we may use 2.13 for l to imrove erformance. Alternatively, we may erform a statistical analysis to determine how large an overestimate 2.14 tends to be and adjust accordingly Exact 1-dimensional filtering As mentioned above, we can get an exact closed form solution in dimension 1 and demonstrate this here. If d = m = q = 1, then the inequality 2.14 is in fact an equality, since the matrix norms are relaced by, giving l α k = 1 K k H k α F k α l α k K k H k α W k + K k α V k = 1 K k H k α F k α l α k 1 + W k + Kk α V k, 33

46 Algorithm 4 Iteratively reweighted least squares. 1: Initialize K 1 k to the least squares solution with weights of 1: K 1 k = F k Fk T HT k + GGT Hk T Hk F k Fk T HT k + H kgg T Hk T + I 1 m 2: While error > ε and t maxiterations Comute w t 1 = ˆl k 1 α I d K t k H α 2 k F k F w t 2 = W k I d K t k H k G α 2 F w t K t 3 = V k k α 2 F Comute K t+1 k = w t 1 F k Fk T HT k + wt 2 GG T Hk T Comute error = Increment t. 3: K k = K t k. K t+1 k w t 1 1 H k F k Fk T HT k + wt 2 H k GG T Hk T + wt 3 I m K t k F where we have assumed without loss of generality that G 1 it may be absorbed into c w k in dimension 1. Here, W k and V k reduce to W k = 2c w k 1 2 α + 1 α and Let us define l α k k 1 V k = 2c v k 1 2 α + 1. α def = F k α l α k 1 + W k. One can show by arguments similar to those used to derive 2.13 that l k k 1 measures the magnitude of the redicted error e k k 1 just as l k measures the magnitude of the udated error e k k. We then have l α k = 1 K k H k α l α k k 1 + K k α V k and may minimize l k by standard calculus. The derivative of l α k is comuted as d l α k d K k = α 1 K k H k α 1 sign 1 K k H k H k l α k k 1 + α K k α 1 sign K k V k. Equating to and solving, we see that 34

47 Algorithm 5 Kalman filter for α-stable noise. 1: Initialize: ˆx = x ˆlα = 2: Predict: ˆx k k 1 = F kˆx k 1 k 1 + B k u k 3: Udate: Aroximate K k by iteratively reweighted least squares Algorithm 4. ˆx k k = ˆx k k 1 + K k yk H kˆx k k 1 ˆlα k = I d K k H k F k α 2 ˆl α k 1 + I d K k H k G α 2 W k + K k α 2 V k. sign K k = sign 1 K k H k sign H k and K k V 1 α 1 k = 1 K k H k H k 1 α 1 l α α 1 k k 1. Hence, K k V 1 α 1 k = 1 K k H k sign H k H k 1 α 1 l α α 1 k k 1, which is easily solved for K k to get the otimal Kalman gain as K k = sign H k H k 1 α α 1 l α 1 k k 1 H k α α α 1 l α 1 α 1 k k 1 + V 1 k If we take α = 2 in the above equation, we have exactly the classic Kalman gain 2.1 ignoring the fact that the disersion V k, laying a similar role as variance in the normal distribution, is infinite. The Kalman filter algorithm is imlemented in Algorithm 6. As oosed to the higher dimensional solutions of the Kalman filter for finite L 2 -norm noise and α-stable noise I have given, the Kalman gain 2.18 is exact in the sense that it minimizes the error l k, not an uer bound on l k. We next resent simulations utilizing these results for the α-stable noise environment Vehicle tracking Suose we are tracking a vehicle moving in a straight line. The vehicle s osition is measured every T seconds, at which time we can change the velocity u = u k+1. Then 35

48 Algorithm 6 Kalman filter for 1 dimensional α-stable noise. 1: Initialize: ˆx = Ex = l α = W 2: Predict: ˆx k k 1 = F kˆx k 1 k 1 + B k u k lk k 1 α = F k α lk 1 α + W k 3: Udate: K k = sign H k H k 1 α 1 l α α 1 k k 1 ˆx k k = ˆx k k 1 + K k yk H kˆx k k 1 l α k = 1 K kh k α l α k k 1 + K k α V k H k α α α 1 l α 1 α 1 k k 1 + V 1 k 1 the osition of the vehicle is modeled by x k = x k 1 + T u k. In actuality, the osition of the vehicle at each time is erturbed by circumstances beyond our control otholes, gusts of wind, etc.. A more realistic model is x k = x k 1 + T u k + w k, where w k is a random noise. At each time increment, we observe the osition of the vehicle, which is also contaminated by a random noise. The observation y k is modeled by y k = x k + v k, where v k is a random noise. Our objective is to efficiently estimate the osition of the vehicle at time k. First, we could comletely ignore our observation y k and redict the osition of the vehicle to be ˆx k = ˆx k 1 + T u k. Or, we could comletely ignore the dynamics of the system and redict the osition of the vehicle to be the observation ˆx k = y k. In actuality, we would like to use each iece of information: the dynamics of the system and the observation. If we restrict to linear estimates and assume that {w k } and {v k } are indeendent symmetric α-stable random variables, then we may aly the Kalman filter Algorithm 6 to estimate the osition of the vehicle x k k at time k. Figure 2.1 is a simulation with arameters = 1, α = 1.4, T =.1, and constant velocity u k = u = 4 throughout every time increment. The 36

49 disersion arameter c w k of w k is taken to be small c w k =.1. This reresents that the otholes, gusts of wind, etc. have minimal effect on the osition of the vehicle. The disersion arameter c v k of v k is taken to be large in comarison to vk w cv k = 5. This arameter reresents the known accuracy of the gs technology. The classic Kalman filter Algorithm 1 weights the observation to heavily in this case, as it does not exect such extreme tail events that occur under an α-stable distribution. We can see in Figure 2.1 the tail events that occur in the observation noise. Such tail events have robability under the Gaussian distribution and are not exected in the classic Kalman filter Aircraft tracking As a last examle, we consider two models commonly emloyed in the tracking of an aircraft. Ignoring altitude, the system state being tracked is x = x 1, ẋ 1, x 2, ẋ 2. The system dynamics of a maneuvering aircraft are modeled by the constant velocity CV model and the coordinated turn CT model see e.g. Bar-Shalom et al. 21, Section 11.7 for an overview. The models are T 2 2 x k = F x k 1 + T T 2 2 w k, T where the system dynamics matrix for the CV model is and for the CT model is F def = F def = 1 T 1 1 T 1 sin ωt 1 cos ωt 1 ω ω cos ωt sin ωt 1 cos ωt sin ωt 1 ω ω. sin ωt cos ωt 37

50 In ractice, the turn rate ω is unknown. One would need to consider the augmented state matrix x k = x 1, ẋ 1, x 2, ẋ 2, ω, for which the system model is now non-linear. Standard ractice is to then aroximate by a first order exansion. We assume here that the turn rate ω is constant and known for simulation uroses. The signal noise w k is a 2-dimensional rotationally invariant α-stable random vector. At each time increment, we observe the osition of the aircraft, which is also contaminated by a 2-dimensional rotationally invariant α-stable random noise. Then the observation y k is [ ] 1 y k = x k + v k. 1 We aly Algorithm 5 to estimate the osition of the vehicle by ˆx k k. Figure 2.2 and Figure 2.3 are simulations of the CV and CT models resectively. The arameters were taken as = 1, α = 1.4, T =.1, c w k =.1, and ck v = 3. As in the vehicle tracking examle, the classic Kalman filter can erform oorly when tail events occur. If we mistakenly believe that the noise is normally distributed, then we do not anticiate such extreme tail events exerienced in the noisy observation. Therefore, the classic Kalman filter is again weighting the observation to heavily and undererforms the α-stable Kalman filter Algorithm 5. 38

51 8 1 dimensional constant velocity motion in an α Stable noise environment osition Signal x α Kalman Gaussian Kalman time 6 4 Filtering of observation noise α Kalman Observation osition time Figure 2.1: α-stable Kalman filter for constant velocity 1 dimensional motion. 39

52 x 1 coordinate of aircraft x 2 coordinate of aircraft Constant velocity model of 2 dimensional motion in an α Stable noise environment Signal x α Kalman Gaussian Kalman x coordinate of aircraft 1 2 Filtering of observation noise 2 α Kalman Observation time x 2 coordinate of aircraft 2 1 Filtering of observation noise 1 α Kalman Observation time Figure 2.2: 2-D constant velocity model CV. 4

53 x 2 coordinate of aircraft x 1 coordinate of aircraft Coordinated turn model of 2 dimensional motion in an α Stable noise environment Signal x α Kalman Gaussian Kalman x coordinate of aircraft 1 1 Filtering of observation noise 1 α Kalman Observation time x 2 coordinate of aircraft 2 Filtering of observation noise 2 α Kalman Observation time Figure 2.3: 2-D coordinated turn model CT. 41

Sums of independent random variables

Sums of independent random variables 3 Sums of indeendent random variables This lecture collects a number of estimates for sums of indeendent random variables with values in a Banach sace E. We concentrate on sums of the form N γ nx n, where

More information

Stochastic integration II: the Itô integral

Stochastic integration II: the Itô integral 13 Stochastic integration II: the Itô integral We have seen in Lecture 6 how to integrate functions Φ : (, ) L (H, E) with resect to an H-cylindrical Brownian motion W H. In this lecture we address the

More information

Estimation of the large covariance matrix with two-step monotone missing data

Estimation of the large covariance matrix with two-step monotone missing data Estimation of the large covariance matrix with two-ste monotone missing data Masashi Hyodo, Nobumichi Shutoh 2, Takashi Seo, and Tatjana Pavlenko 3 Deartment of Mathematical Information Science, Tokyo

More information

MATH 2710: NOTES FOR ANALYSIS

MATH 2710: NOTES FOR ANALYSIS MATH 270: NOTES FOR ANALYSIS The main ideas we will learn from analysis center around the idea of a limit. Limits occurs in several settings. We will start with finite limits of sequences, then cover infinite

More information

Elementary theory of L p spaces

Elementary theory of L p spaces CHAPTER 3 Elementary theory of L saces 3.1 Convexity. Jensen, Hölder, Minkowski inequality. We begin with two definitions. A set A R d is said to be convex if, for any x 0, x 1 2 A x = x 0 + (x 1 x 0 )

More information

Convex Optimization methods for Computing Channel Capacity

Convex Optimization methods for Computing Channel Capacity Convex Otimization methods for Comuting Channel Caacity Abhishek Sinha Laboratory for Information and Decision Systems (LIDS), MIT sinhaa@mit.edu May 15, 2014 We consider a classical comutational roblem

More information

4. Score normalization technical details We now discuss the technical details of the score normalization method.

4. Score normalization technical details We now discuss the technical details of the score normalization method. SMT SCORING SYSTEM This document describes the scoring system for the Stanford Math Tournament We begin by giving an overview of the changes to scoring and a non-technical descrition of the scoring rules

More information

State Estimation with ARMarkov Models

State Estimation with ARMarkov Models Deartment of Mechanical and Aerosace Engineering Technical Reort No. 3046, October 1998. Princeton University, Princeton, NJ. State Estimation with ARMarkov Models Ryoung K. Lim 1 Columbia University,

More information

Research Article An iterative Algorithm for Hemicontractive Mappings in Banach Spaces

Research Article An iterative Algorithm for Hemicontractive Mappings in Banach Spaces Abstract and Alied Analysis Volume 2012, Article ID 264103, 11 ages doi:10.1155/2012/264103 Research Article An iterative Algorithm for Hemicontractive Maings in Banach Saces Youli Yu, 1 Zhitao Wu, 2 and

More information

GOOD MODELS FOR CUBIC SURFACES. 1. Introduction

GOOD MODELS FOR CUBIC SURFACES. 1. Introduction GOOD MODELS FOR CUBIC SURFACES ANDREAS-STEPHAN ELSENHANS Abstract. This article describes an algorithm for finding a model of a hyersurface with small coefficients. It is shown that the aroach works in

More information

HENSEL S LEMMA KEITH CONRAD

HENSEL S LEMMA KEITH CONRAD HENSEL S LEMMA KEITH CONRAD 1. Introduction In the -adic integers, congruences are aroximations: for a and b in Z, a b mod n is the same as a b 1/ n. Turning information modulo one ower of into similar

More information

Distributed Rule-Based Inference in the Presence of Redundant Information

Distributed Rule-Based Inference in the Presence of Redundant Information istribution Statement : roved for ublic release; distribution is unlimited. istributed Rule-ased Inference in the Presence of Redundant Information June 8, 004 William J. Farrell III Lockheed Martin dvanced

More information

Numerical Linear Algebra

Numerical Linear Algebra Numerical Linear Algebra Numerous alications in statistics, articularly in the fitting of linear models. Notation and conventions: Elements of a matrix A are denoted by a ij, where i indexes the rows and

More information

General Linear Model Introduction, Classes of Linear models and Estimation

General Linear Model Introduction, Classes of Linear models and Estimation Stat 740 General Linear Model Introduction, Classes of Linear models and Estimation An aim of scientific enquiry: To describe or to discover relationshis among events (variables) in the controlled (laboratory)

More information

Approximating min-max k-clustering
