Weak convergence for a type of conditional expectation: application to the inference for a class of asset price models
Nonlinear Analysis 60 (2005) 231-239

Weak convergence for a type of conditional expectation: application to the inference for a class of asset price models

Michael A. Kouritzin (a), Yong Zeng (b,*)

(a) Department of Mathematical and Statistical Sciences, University of Alberta, Edmonton, AB, Canada T6G 2G1
(b) Department of Mathematics and Statistics, University of Missouri at Kansas City, Kansas City, MO 64110, USA

Received 12 March 2004; accepted 1 July 2004

Abstract

We prove weak convergence of a type of conditional expectation, which provides a straightforward proof of Goggin's theorem and further proves the consistency of the (integrated) likelihood, the posterior, and the Bayes factor for a recently developed class of transactional asset price models. © 2004 Elsevier Ltd. All rights reserved.

Keywords: Conditional expectation; Filtering; Counting process

We thank Thomas G. Kurtz for a productive conversation.
* Corresponding author. E-mail addresses: mkouritz@math.ualberta.ca (M.A. Kouritzin), zeng@mendota.umkc.edu (Y. Zeng).

1. Introduction

The weak convergence of $(X_N, Y_N)$ on a probability space $(\Omega_N, \mathcal{F}_N, P_N)$ to $(X, Y)$ on $(\Omega, \mathcal{F}, P)$ does not necessarily imply that $E^{P_N}[F(X_N)\mid Y_N] \Rightarrow E^{P}[F(X)\mid Y]$, even for bounded, continuous functions $F$. However, Goggin [2] showed that the limit of the conditional expectations can be determined if it is possible to make an absolutely continuous change of probability measure from $P_N$ to $Q_N$, with Radon-Nikodym derivative $L_N(X_N, Y_N) = dP_N/dQ_N$, so that $X_N$ and $Y_N$ are independent under $Q_N$. Goggin's main condition guaranteeing the weak convergence of the conditional expectations is
that $(X_N, Y_N, L_N(X_N, Y_N))$ under $Q_N$ converges in distribution to $(X, Y, L(X, Y))$ under $Q$. Under the same conditions as Goggin [2], we provide a short proof of the weak convergence of this type of conditional expectation: $E^{Q_N}[F(X_N)L_N(X_N, Y_N)\mid Y_N] \Rightarrow E^{Q}[F(X)L(X, Y)\mid Y]$ as $N \to \infty$, establishing Goggin's theorem. We note that Crimaldi and Pratelli [7], at about the same time, independently proved a similar result.

We note that $E^{Q}[F(X)L(X, Y)\mid Y]$, for suitably chosen $F$, solves the Duncan-Mortensen-Zakai (unnormalized filtering) equation in the classical filtering problem. Hence, weak convergence of this type of conditional expectation can be used to prove the consistency of numerical algorithms for solving the Duncan-Mortensen-Zakai equation. However, this paper focuses on another important application: statistical inference for a class of partially observed asset price models for transaction data recently proposed in [6]. The main appeal of this class of models is that they are framed as a filtering problem with counting process observations. Then $E^{Q}[F(X)L(X, Y)\mid Y]$ relates to the continuous-time likelihood (or integrated likelihood, in Bayesian statistics) of the model, which is specified by the unnormalized filtering equation. The Markov chain approximation method can be applied to develop recursive algorithms, based on the unnormalized filtering equation, to compute the continuous-time likelihood. Therefore, weak convergence of this type of conditional expectation is useful in proving the consistency of the recursive algorithms for likelihoods. Furthermore, together with the continuous mapping theorem, it can also be used to prove the consistency of the recursive algorithms for computing the posterior and the Bayes factor, which is the ratio of the integrated likelihoods of the two models considered in model selection.
Section 2 presents and proves the main theorem with two corollaries. The first corollary is Goggin's theorem and relates to the consistency of the posterior. The second relates to the consistency of the Bayes factor. The results in Section 2 are fairly general, because they hold for random variables taking values in complete, separable metric spaces. To present an application, Section 3 first reviews the class of models proposed in [6] and then proves the consistency of the likelihoods, the posterior, and the Bayes factor.

2. The main result

Theorem 1. Suppose that $S_1$ and $S_2$ are complete, separable metric spaces and that $(X_N, Y_N)$, $N = 1, 2, \ldots$, and $(X, Y)$ are $S_1 \times S_2$-valued random variables defined on the probability spaces $(\Omega_N, \mathcal{F}_N, P_N)$ and $(\Omega, \mathcal{F}, P)$, respectively. Suppose that $\{(X_N, Y_N)\}$ converges in distribution to $(X, Y)$, that $P_N \ll Q_N$ on $\sigma(X_N, Y_N)$ with $dP_N/dQ_N = L_N(X_N, Y_N)$, and that $Q_N$ ($Q$) is a probability measure on $\mathcal{F}_N$ ($\mathcal{F}$) such that $X_N$ and $Y_N$ ($X$ and $Y$) are independent under $Q_N$ ($Q$). Suppose that the $Q_N$-distribution of $(X_N, Y_N, L_N(X_N, Y_N))$ converges weakly to the $Q$-distribution of $(X, Y, L(X, Y))$, where $E^{Q}[L(X, Y)] = 1$. Then, the following hold:

(i) $P \ll Q$ on $\sigma(X, Y)$ and $dP/dQ = L(X, Y)$;
(ii) for every bounded continuous function $F: S_1 \to \mathbb{R}$, $E^{Q_N}[F(X_N)L_N(X_N, Y_N)\mid Y_N]$ converges weakly to $E^{Q}[F(X)L(X, Y)\mid Y]$ as $N \to \infty$.
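The conditional expectations of the form $E^{Q}[F(X)L(X, Y)\mid Y]$ in Theorem 1, and the ratio representation of $E^{P}[F(X)\mid Y]$ used in Corollary 1 below, can be checked numerically in a toy Gaussian model. The following sketch (Python; the model, names, and parameters are ours, not from the paper) takes a reference measure $Q$ under which $X$ and $Y$ are independent standard normals and a Radon-Nikodym derivative $L$ for which the $P$-conditional mean $E^{P}[X \mid Y = y] = y/2$ is known in closed form, so the Monte Carlo ratio estimate can be compared against it.

```python
import numpy as np

rng = np.random.default_rng(0)

def L(x, y):
    # Hypothetical Radon-Nikodym derivative dP/dQ with E_Q[L] = 1:
    # under Q, X ~ N(0,1) independent of Y; under P, X | Y = y ~ N(y/2, 1/2).
    return np.exp(x * y - 0.5 * x**2)

def cond_exp_P(F, y, n=400_000):
    """Estimate E_P[F(X) | Y = y] via the change-of-measure ratio
    E_Q[F(X) L(X, y)] / E_Q[L(X, y)], sampling X from Q (independent of Y)."""
    x = rng.standard_normal(n)
    w = L(x, y)
    return np.sum(F(x) * w) / np.sum(w)

# For F(x) = x, the exact conditional mean under P is y/2 in this toy model.
est = cond_exp_P(lambda x: x, y=1.0)
```

With 400,000 reference samples the self-normalized estimate lands close to the closed-form value 0.5; the point is only to illustrate the change-of-measure mechanism, not the weak-convergence statement itself.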
Proof. Part (i) follows from part (i) of Theorem 2.1 in [2]. We prove part (ii).

Pick $\varepsilon > 0$. Then, as in [2], there exists a bounded, continuous function $L_\varepsilon$, w.l.o.g. strictly positive, such that $E^{Q}|L(X, Y) - L_\varepsilon(X, Y)| < \varepsilon$. Let $Z_N = L_N(X_N, Y_N)$ and $Z = L(X, Y)$. Using Skorohod's representation, we can find a probability space $(\hat\Omega, \hat{\mathcal{F}}, \hat Q)$ on which $(\hat X_N, \hat Y_N, \hat Z_N)$ and $(\hat X, \hat Y, \hat Z)$ are defined with the following three properties: for each $N$, $(\hat X_N, \hat Y_N, \hat Z_N)$ has the same distribution as $(X_N, Y_N, L_N(X_N, Y_N))$ under $Q_N$; $(\hat X, \hat Y, \hat Z)$ has the same distribution as $(X, Y, L(X, Y))$ under $Q$; and $(\hat X_N, \hat Y_N, \hat Z_N) \to (\hat X, \hat Y, \hat Z)$ $\hat Q$-a.s.

Suppose $|F| \le C$. We observe that

$$E^{\hat Q}\Bigl|E^{\hat Q}[F(\hat X_N)\hat Z_N \mid \hat Y_N] - E^{\hat Q}[F(\hat X)\hat Z \mid \hat Y]\Bigr| \le C E^{\hat Q}\bigl|\hat Z - L_\varepsilon(\hat X, \hat Y)\bigr| + C E^{\hat Q}\bigl|\hat Z_N - L_\varepsilon(\hat X_N, \hat Y_N)\bigr| + E^{\hat Q}\Bigl|E^{\hat Q}[F(\hat X_N)L_\varepsilon(\hat X_N, \hat Y_N)\mid \hat Y_N] - E^{\hat Q}[F(\hat X)L_\varepsilon(\hat X, \hat Y)\mid \hat Y]\Bigr|. \quad (1)$$

Now, it suffices to prove that the right-hand side of this inequality can be made arbitrarily small for large enough $N$. The first term is less than $C\varepsilon$. For the second term, we observe

$$E^{\hat Q}\bigl|L_\varepsilon(\hat X_N, \hat Y_N) - \hat Z_N\bigr| \le E^{\hat Q}\bigl|L_\varepsilon(\hat X_N, \hat Y_N) - L_\varepsilon(\hat X, \hat Y)\bigr| + E^{\hat Q}\bigl|L_\varepsilon(\hat X, \hat Y) - \hat Z\bigr| + E^{\hat Q}\bigl|\hat Z - \hat Z_N\bigr|.$$

Since $(\hat X_N, \hat Y_N) \to (\hat X, \hat Y)$ $\hat Q$-a.s. and $L_\varepsilon$ is bounded and continuous, the first expectation is less than $\varepsilon$ for large $N$. The second expectation is less than $\varepsilon$ by the assumption on $L_\varepsilon$. Since $\hat Z_N \to \hat Z$ $\hat Q$-a.s. and $\int \hat Z_N \, d\hat Q \to \int \hat Z \, d\hat Q$ (all these integrals are identically 1), the last expectation is less than $\varepsilon$ for large $N$. Therefore, the second term of (1) is less than $3C\varepsilon$.

Noting that both $F$ and $L_\varepsilon$ are bounded and continuous, one finds that the mapping $(y, \mu) \mapsto \int F(x) L_\varepsilon(x, y)\, \mu(dx)$ is continuous. Therefore, noting that $(\hat Y_N, \mu_N) \to (\hat Y, \mu)$, where $\mu_N = \hat Q \circ \hat X_N^{-1}$ and $\mu = \hat Q \circ \hat X^{-1}$, we obtain that the third term of (1) is less than $\varepsilon$ for large $N$. $\square$

Theorem 1 relates to the consistency of the likelihood in the application of Section 3.
The following corollary is Goggin's theorem and relates to the consistency of the posterior in the application of Section 3. We use the notation $X_N \Rightarrow X$ ($X^\varepsilon \Rightarrow X$) to mean that $X_N$ ($X^\varepsilon$) converges weakly to $X$ in the Skorohod topology as $N \to \infty$ ($\varepsilon \to 0$).

Corollary 1. Under the conditions of Theorem 1 and the assumption that $Q\{E^{Q}[L(X, Y)\mid Y] = 0\} = 0$, for every bounded continuous function $F: S_1 \to \mathbb{R}$, $E^{P_N}[F(X_N)\mid Y_N]$ converges weakly to $E^{P}[F(X)\mid Y]$ as $N \to \infty$.

Proof. Bayes' theorem, Theorem 1, and the continuous mapping theorem imply that

$$E^{P_N}[F(X_N)\mid Y_N] = \frac{E^{Q_N}[F(X_N)L_N(X_N, Y_N)\mid Y_N]}{E^{Q_N}[L_N(X_N, Y_N)\mid Y_N]} \Rightarrow \frac{E^{Q}[F(X)L(X, Y)\mid Y]}{E^{Q}[L(X, Y)\mid Y]} = E^{P}[F(X)\mid Y]. \qquad \square$$
Corollary 2 relates to the consistency of the Bayes factor in the application of Section 3.

Corollary 2. Suppose that, for $k = 1, 2$, $S_1$ and $S_2$ are complete, separable metric spaces, and that $(X_k^N, Y_k^N)$ ($(X_k, Y_k)$) on $(\Omega_k^N, \mathcal{F}_k^N, P_k^N)$ ($(\Omega_k, \mathcal{F}_k, P_k)$) satisfies the conditions of Theorem 1. Suppose that $Q_k\{E^{Q_k}[L_k(X_k, Y_k)\mid Y_k] = 0\} = 0$ for $k = 1, 2$. Define

$$q_1^N(F_1) = \frac{E^{Q_1^N}[F_1(X_1^N)L_1^N(X_1^N, Y_1^N)\mid Y_1^N]}{E^{Q_2^N}[L_2^N(X_2^N, Y_2^N)\mid Y_2^N]} \quad\text{and}\quad q_2^N(F_2) = \frac{E^{Q_2^N}[F_2(X_2^N)L_2^N(X_2^N, Y_2^N)\mid Y_2^N]}{E^{Q_1^N}[L_1^N(X_1^N, Y_1^N)\mid Y_1^N]}.$$

Define $q_1(F_1)$ and $q_2(F_2)$ similarly. Then, for every bounded continuous function $F_k: S_1 \to \mathbb{R}$, $(q_1^N(F_1), q_2^N(F_2))$ converges weakly to $(q_1(F_1), q_2(F_2))$.

Proof. By the proof of Theorem 1,

$$\bigl(E^{Q_1^N}[F_1(X_1^N)L_1^N(X_1^N, Y_1^N)\mid Y_1^N],\; E^{Q_2^N}[F_2(X_2^N)L_2^N(X_2^N, Y_2^N)\mid Y_2^N]\bigr) \Rightarrow \bigl(E^{Q_1}[F_1(X_1)L_1(X_1, Y_1)\mid Y_1],\; E^{Q_2}[F_2(X_2)L_2(X_2, Y_2)\mid Y_2]\bigr),$$

and the continuous mapping theorem implies that

$$\begin{pmatrix} q_1^N(F_1) \\ q_2^N(F_2) \end{pmatrix} = \begin{pmatrix} E^{Q_1^N}[F_1(X_1^N)L_1^N(X_1^N, Y_1^N)\mid Y_1^N] \,\big/\, E^{Q_2^N}[L_2^N(X_2^N, Y_2^N)\mid Y_2^N] \\[4pt] E^{Q_2^N}[F_2(X_2^N)L_2^N(X_2^N, Y_2^N)\mid Y_2^N] \,\big/\, E^{Q_1^N}[L_1^N(X_1^N, Y_1^N)\mid Y_1^N] \end{pmatrix} \Rightarrow \begin{pmatrix} E^{Q_1}[F_1(X_1)L_1(X_1, Y_1)\mid Y_1] \,\big/\, E^{Q_2}[L_2(X_2, Y_2)\mid Y_2] \\[4pt] E^{Q_2}[F_2(X_2)L_2(X_2, Y_2)\mid Y_2] \,\big/\, E^{Q_1}[L_1(X_1, Y_1)\mid Y_1] \end{pmatrix} = \begin{pmatrix} q_1(F_1) \\ q_2(F_2) \end{pmatrix}. \qquad \square$$

3. Application to a class of asset price models

In this section, we first review the class of partially observed models for transactional asset prices in [6], together with their continuous-time likelihoods, posterior, and Bayes factors. Then, we apply the theorem and the two corollaries of Section 2 to prove the consistency of the likelihoods, the posterior, and the Bayes factors.

3.1. The class of models

At the trade-by-trade level, the asset price does not move continuously, as common asset price models such as geometric Brownian motion or jump-diffusion processes suggest, but moves level by level due to price discreteness.
When we view the prices according to price level, we model the prices of an asset as a collection of counting processes in the
following form:

$$\mathbf{Y}(t) = \begin{pmatrix} N_1\bigl(\int_0^t \lambda_1(\theta(s), X(s), s)\, ds\bigr) \\ N_2\bigl(\int_0^t \lambda_2(\theta(s), X(s), s)\, ds\bigr) \\ \vdots \\ N_n\bigl(\int_0^t \lambda_n(\theta(s), X(s), s)\, ds\bigr) \end{pmatrix}, \qquad (2)$$

where $Y_j(t) = N_j\bigl(\int_0^t \lambda_j(\theta(s), X(s), s)\, ds\bigr)$ is the counting process recording the cumulative number of trades that have occurred at the $j$th price level (denoted by $y_j$) up to time $t$. Moreover, $\theta(t)$ is a vector of parameters of the model, and $X(t)$ is the intrinsic value process, which cannot be observed directly but can be partially observed through $\mathbf{Y}$. Suppose that $P$ is the probability measure of $(\theta, X, \mathbf{Y})$. We invoke five mild assumptions on the model.

Assumption 1. $\{N_j\}_{j=1}^n$ are unit Poisson processes under the measure $P$.

Assumption 2. $(\theta, X), N_1, N_2, \ldots, N_n$ are independent under the measure $P$.

Assumption 3. The total intensity process $a(\theta, X, t)$ is uniformly bounded above; namely, there exists a positive constant $C$ such that $0 < a(\theta, X, t) \le C$ for all $t > 0$ and all $(\theta, X)$.

Remark 1. Assumption 3 is made for technical reasons. These three assumptions imply that there exists a reference measure $Q$ such that, after a suitable change of measure to $Q$, $(\theta, X), Y_1, \ldots, Y_n$ become independent and $Y_1, Y_2, \ldots, Y_n$ become unit Poisson processes. Also, the independence in Assumption 2 implies that the probability of any simultaneous jumps is zero.

Assumption 4. The intensity has the form $\lambda_j(\theta, x, t) = a(\theta, x, t)\, p(y_j \mid x)$, where $a(\theta, x, t)$ is the total trading intensity at time $t$ and $p(y_j \mid x)$ is the transition probability from $x$ to $y_j$, the $j$th price level.

Remark 2. This assumption imposes a desirable structure on the intensities of the model. It means that the total trading intensity $a(\theta(t), X(t), t)$ determines the overall rate of trade occurrence at time $t$, and $p(y_j \mid x)$ determines the proportional intensity of trades at the price level $y_j$ when the value is $x$. Note that $p(y_j \mid x)$ models how the trading noise enters the price process.

Assumption 5.
(θ,x)is the unique cadlag solution ofa martingale problem for a generator A such that M f (t) = f(θ(t), X(t)) Af(θ(s), X(s)) ds is a F θ,x t -martingale for f D(A), where F θ,x t X(s)) s t. is the σ-algebra generated by (θ(s),
Remark 3. The martingale problem and the generator approach (see [1]) provide a powerful tool for the characterization of Markov processes. Assumption 5 includes all relevant stochastic processes, such as diffusion and jump-diffusion processes.

Remark 4. Under this representation, $(\theta(t), X(t))$ becomes the signal process, which cannot be observed directly but can be partially observed through the counting processes $\mathbf{Y}(t)$, corrupted by trading noise, which is modeled by $p(y_j \mid x)$ (see [6] for more about trading noise). Hence, $(\theta, X, \mathbf{Y})$ is framed as a filtering problem with counting process observations.

3.2. Foundations of statistical inference

3.2.1. The continuous-time joint likelihood

The probability measure $P$ of $(\theta, X, \mathbf{Y})$ can be written as $P = P_{\theta,x} \times P_{y\mid\theta,x}$, where $P_{\theta,x}$ is the probability measure for $(\theta, X)$ such that $M_f(t)$ in Assumption 5 is an $\mathcal{F}_t^{\theta,X}$-martingale, and $P_{y\mid\theta,x}$ is the conditional probability measure for $\mathbf{Y}$ given $(\theta, X)$. Under $P$, $\mathbf{Y}$ depends on $(\theta, X)$. Recall from Remark 1 that there exists a reference measure $Q$ such that, under $Q$, $(\theta, X)$ and $\mathbf{Y}$ are independent and $Y_1, Y_2, \ldots, Y_n$ are unit Poisson processes. Therefore, $Q$ can be decomposed as $Q = P_{\theta,x} \times Q_y$, where $Q_y$ is the product probability measure of $n$ independent unit Poisson processes. Then the Radon-Nikodym derivative of the model, $L(t)$, which is the continuous-time joint likelihood of $(\theta, X, \mathbf{Y})$, is

$$L(t) = \frac{dP}{dQ}(t) = \frac{dP_{\theta,x}\, dP_{y\mid\theta,x}}{dP_{\theta,x}\, dQ_y}(t) = \frac{dP_{y\mid\theta,x}}{dQ_y}(t) = \exp\Biggl\{ \sum_{k=1}^n \biggl[ \int_0^t \log \lambda_k(\theta(s-), X(s-), s-)\, dY_k(s) - \int_0^t \bigl(\lambda_k(\theta(s), X(s), s) - 1\bigr)\, ds \biggr] \Biggr\}. \quad (3)$$

3.2.2. The continuous-time likelihoods of $\mathbf{Y}$

However, $X$ cannot be observed. To obtain the integrated likelihood of the model, which is the marginal likelihood of $\mathbf{Y}$, we may integrate $L(t)$ over $(\theta, X)$, or, equivalently, write the integrated likelihood as a conditional expectation. Let $\mathcal{F}_t^{\mathbf{Y}} = \sigma\{\mathbf{Y}(s) : s \le t\}$ be the available information up to time $t$.

Definition 1. Let $\rho(f, t) = E^{Q}[f(\theta(t), X(t)) L(t) \mid \mathcal{F}_t^{\mathbf{Y}}]$.
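As a concrete numerical illustration of the joint likelihood (3), the sketch below (Python; the toy setting, rates, and names are ours, not from the paper) takes the simplest case in which the hidden value is frozen so that the intensities $\lambda_j = a\, p(y_j \mid x)$ are constant in time. The stochastic integrals in (3) then reduce to $\sum_j \bigl[ N_j(t)\log \lambda_j - (\lambda_j - 1)t \bigr]$, and the counts at each price level are independent Poisson random variables.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical fixed setting: constant total intensity a and noise kernel
# p(y_j | x), so the level intensities lambda_j = a * p_j are constant
# in time (the structure of Assumption 4 with the hidden value frozen).
a = 50.0                       # total trading intensity
p = np.array([0.2, 0.5, 0.3])  # p(y_j | x) over n = 3 price levels
lam = a * p
T = 10.0

# Simulate Y_j(T): with constant intensities, the level counts are
# independent Poisson(lambda_j * T) random variables.
counts = rng.poisson(lam * T)

def log_likelihood(counts, lam, T):
    """Log of the joint likelihood (3) for constant intensities:
    sum_j [ N_j(T) * log(lambda_j) - (lambda_j - 1) * T ]."""
    return float(np.sum(counts * np.log(lam) - (lam - 1.0) * T))

ll_true = log_likelihood(counts, lam, T)
# A wrong noise kernel should fit the simulated counts worse.
ll_wrong = log_likelihood(counts, a * np.array([0.5, 0.2, 0.3]), T)
```

In the general model the intensities are stochastic and the integrals must be accumulated along the $(\theta, X)$ path; this constant-intensity case only makes the structure of (3) tangible.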
If $(\theta(0), X(0))$ is fixed, then the likelihood of $\mathbf{Y}$ is $E^{Q}[L(t)\mid \mathcal{F}_t^{\mathbf{Y}}] = \rho(1, t)$. If a prior is assumed on $(\theta(0), X(0))$, then the integrated (or marginal) likelihood of $\mathbf{Y}$ is also $\rho(1, t)$.
3.2.3. The continuous-time posterior

Definition 2. Let $\pi_t$ be the conditional distribution of $(\theta(t), X(t))$ given $\mathcal{F}_t^{\mathbf{Y}}$.

Definition 3. $\pi(f, t) = E^{P}[f(\theta(t), X(t)) \mid \mathcal{F}_t^{\mathbf{Y}}] = \int f(\theta, x)\, \pi_t(d\theta, dx)$.

If a prior is assumed on $(\theta(0), X(0))$, then $\pi_t$ becomes the continuous-time posterior, which is determined by $\pi(f, t)$ for all continuous and bounded $f$.

3.2.4. Continuous-time Bayes factors

For the class of micromovement models, we suppose that Model $k$ is denoted by $(\theta^k, X^k, \mathbf{Y}^k)$ for $k = 1, 2$. Denote the joint likelihood of $(\theta^k, X^k, \mathbf{Y}^k)$ by $L^k(t)$, which is given by Eq. (3). Denote $\rho_k(f_k, t) = E^{Q^k}[f_k(\theta^k(t), X^k(t)) L^k(t) \mid \mathcal{F}_t^{\mathbf{Y}^k}]$. Then the integrated likelihood of $\mathbf{Y}^k$ is $\rho_k(1, t)$ for Model $k$. In general, the Bayes factor of Model 2 over Model 1, $B_{21}$, is defined as the ratio of the integrated likelihoods of Model 2 over Model 1, i.e., $B_{21}(t) = \rho_2(1, t)/\rho_1(1, t)$. Suppose $B_{21}$ has been calculated. Then we can interpret it using the table furnished by Kass and Raftery [3] (a survey paper on Bayes factors) as a guideline. Similarly, we can define $B_{12}$. Obviously, $B_{12} B_{21} = 1$.

Definition 4. Define the filter ratio processes:

$$q_1(f_1, t) = \frac{\rho_1(f_1, t)}{\rho_2(1, t)} \quad\text{and}\quad q_2(f_2, t) = \frac{\rho_2(f_2, t)}{\rho_1(1, t)}.$$

Remark 5. Observe that the Bayes factors are $B_{12}(t) = q_1(1, t)$ and $B_{21}(t) = q_2(1, t)$.

In summary, $\rho(f, t)$, which determines the likelihood or integrated likelihood, is characterized by the unnormalized filtering equation; $\pi(f, t)$, which determines the posterior, is characterized by the normalized filtering equation and is the optimal filter in the sense of least mean square error; and $q_k(f_k, t)$, $k = 1, 2$, which determine the Bayes factors, are characterized by a system of evolution equations.
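A minimal numerical sketch of the Bayes factor $B_{21}(T) = \rho_2(1, T)/\rho_1(1, T)$ follows (Python; the discrete priors, rates, and function names are ours, not from the paper). With constant intensities and a discrete prior on the total trading intensity $a$, the integrated likelihood reduces to averaging the joint likelihood (3) over the prior; the computation is done in log space, with a log-sum-exp step, because the raw likelihood values over- or underflow floating point.

```python
import numpy as np

rng = np.random.default_rng(2)

# Data: level counts over [0, T] generated from a "true" total intensity
# a = 50 (toy setting with constant intensities lambda_j = a * p_j).
p = np.array([0.2, 0.5, 0.3])
T = 10.0
counts = rng.poisson(50.0 * p * T)

def log_lik(a):
    # Log of the joint likelihood (3) for constant intensities a * p_j.
    lam = a * p
    return np.sum(counts * np.log(lam) - (lam - 1.0) * T)

def integrated_log_lik(prior_support, prior_probs):
    """log rho(1, T): integrate the likelihood against a discrete prior on a."""
    logs = np.array([log_lik(a) for a in prior_support]) + np.log(prior_probs)
    m = logs.max()
    return m + np.log(np.exp(logs - m).sum())  # log-sum-exp for stability

# Model 1 puts its prior mass near a = 20; Model 2 near the true a = 50.
log_rho1 = integrated_log_lik([15.0, 20.0, 25.0], np.full(3, 1 / 3))
log_rho2 = integrated_log_lik([45.0, 50.0, 55.0], np.full(3, 1 / 3))
log_B21 = log_rho2 - log_rho1  # log Bayes factor of Model 2 over Model 1
```

Since the data were generated under an intensity inside Model 2's prior support, $\log B_{21}$ comes out strongly positive, in line with the Kass-Raftery interpretation of large Bayes factors as strong evidence.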
The filtering equations for $\rho(f, t)$ and $\pi(f, t)$ are derived in [6], and the system of evolution equations for $q_k(f_k, t)$, $k = 1, 2$, is derived in [4].

3.3. Consistency of the likelihoods, posterior and Bayes factors

To compute these quantities for statistical inference, one constructs recursive algorithms to approximate them. Zeng [6] and Kouritzin and Zeng [4] applied Markov chain approximation methods to construct recursive algorithms for computing the posterior and the Bayes estimates (the Bayes factors). One basic requirement for the recursive algorithms is consistency: the approximate versions of the likelihoods, posterior, and Bayes factors computed by the recursive algorithms must converge to the true ones. The following theorem, proved via Theorem 1 and Corollaries 1 and 2, provides the theoretical foundation for consistency.
Let $(\theta^\varepsilon, X^\varepsilon)$ be an approximation of $(\theta, X)$. Then we define

$$\mathbf{Y}^\varepsilon(t) = \begin{pmatrix} N_1\bigl(\int_0^t \lambda_1(\theta^\varepsilon(s), X^\varepsilon(s), s)\, ds\bigr) \\ N_2\bigl(\int_0^t \lambda_2(\theta^\varepsilon(s), X^\varepsilon(s), s)\, ds\bigr) \\ \vdots \\ N_n\bigl(\int_0^t \lambda_n(\theta^\varepsilon(s), X^\varepsilon(s), s)\, ds\bigr) \end{pmatrix}, \qquad (4)$$

set $\mathcal{F}_t^{\mathbf{Y}^\varepsilon} = \sigma(\mathbf{Y}^\varepsilon(s) : s \le t)$, and take $L^\varepsilon(t) = L\bigl((\theta^\varepsilon(s), X^\varepsilon(s), \mathbf{Y}^\varepsilon(s))_{s \le t}\bigr)$. Suppose that $(\theta^\varepsilon, X^\varepsilon, \mathbf{Y}^\varepsilon)$ lives on $(\Omega^\varepsilon, \mathcal{F}^\varepsilon, P^\varepsilon)$ and satisfies Assumptions 1-5. Then there also exists a reference measure $Q^\varepsilon$ with similar properties. Next, we define the approximate versions that the recursive algorithms compute.

Definition 5. For $k = 1, 2$, let

$$\rho_{\varepsilon,k}(f_k, t) = E^{Q^{\varepsilon,k}}\bigl[f_k(\theta^{\varepsilon,k}(t), X^{\varepsilon,k}(t)) L^{\varepsilon,k}(t) \mid \mathcal{F}_t^{\mathbf{Y}^{\varepsilon,k}}\bigr], \qquad \pi_{\varepsilon,k}(f_k, t) = E^{P^{\varepsilon,k}}\bigl[f_k(\theta^{\varepsilon,k}(t), X^{\varepsilon,k}(t)) \mid \mathcal{F}_t^{\mathbf{Y}^{\varepsilon,k}}\bigr],$$

$$q_{\varepsilon,1}(f_1, t) = \rho_{\varepsilon,1}(f_1, t)/\rho_{\varepsilon,2}(1, t) \quad\text{and}\quad q_{\varepsilon,2}(f_2, t) = \rho_{\varepsilon,2}(f_2, t)/\rho_{\varepsilon,1}(1, t).$$

Theorem 2. Suppose that Assumptions 1-5 hold for the models $(\theta^k, X^k, \mathbf{Y}^k)_{k=1,2}$ and for the approximate models $(\theta^{\varepsilon,k}, X^{\varepsilon,k}, \mathbf{Y}^{\varepsilon,k})_{k=1,2}$. Suppose $(\theta^{\varepsilon,k}, X^{\varepsilon,k}) \Rightarrow (\theta^k, X^k)$ as $\varepsilon \to 0$. Then, as $\varepsilon \to 0$, for $k = 1, 2$:

(i) $\mathbf{Y}^{\varepsilon,k} \Rightarrow \mathbf{Y}^k$;
(ii) $\rho_{\varepsilon,k}(f, t) \Rightarrow \rho_k(f, t)$;
(iii) $\pi_{\varepsilon,k}(f, t) \Rightarrow \pi_k(f, t)$;
(iv) $(q_{\varepsilon,1}(f_1, t), q_{\varepsilon,2}(f_2, t)) \Rightarrow (q_1(f_1, t), q_2(f_2, t))$,

for all bounded continuous functions $f$, $f_1$, and $f_2$, and all $t > 0$.

Proof. To simplify notation, we exclude the superscript $k$. Since $(\theta^\varepsilon, X^\varepsilon) \Rightarrow (\theta, X)$, the continuous mapping theorem implies that the stochastic intensities of $\mathbf{Y}^\varepsilon$ converge weakly to those of $\mathbf{Y}$. This implies $\mathbf{Y}^\varepsilon \Rightarrow \mathbf{Y}$. Note that Eq. (3) implies that $Q_k\{E^{Q_k}[L_k(X_k, Y_k)\mid Y_k] = 0\} = 0$ for $k = 1, 2$. To show parts (ii)-(iv), since (by Remark 1) under the reference measure $Q^\varepsilon$, $(\theta^\varepsilon, X^\varepsilon)$ and $\mathbf{Y}^\varepsilon$ are independent, it suffices to show that $((\theta^\varepsilon, X^\varepsilon), \mathbf{Y}^\varepsilon, L^\varepsilon) \Rightarrow ((\theta, X), \mathbf{Y}, L)$ under the reference measures. The Radon-Nikodym derivative $dP^\varepsilon/dQ^\varepsilon$ is

$$L^\varepsilon(t) = \exp\Biggl\{ \sum_{j=1}^n \biggl[ \int_0^t \log \lambda_j(\theta^\varepsilon(s-), X^\varepsilon(s-), s-)\, dY^{\varepsilon,j}(s) - \int_0^t \bigl(\lambda_j(\theta^\varepsilon(s), X^\varepsilon(s), s) - 1\bigr)\, ds \biggr] \Biggr\}.$$
Noting that $(\theta^\varepsilon, X^\varepsilon, \mathbf{Y}^\varepsilon) \Rightarrow (\theta, X, \mathbf{Y})$ under the reference measures, one finds that the continuous mapping theorem and Kurtz and Protter's theorem [5] imply that

$$\biggl( \int_0^t \log \lambda_j(\theta^\varepsilon(s-), X^\varepsilon(s-), s-)\, dY^{\varepsilon,j}(s),\; \int_0^t \bigl(\lambda_j(\theta^\varepsilon(s), X^\varepsilon(s), s) - 1\bigr)\, ds \biggr) \Rightarrow \biggl( \int_0^t \log \lambda_j(\theta(s-), X(s-), s-)\, dY_j(s),\; \int_0^t \bigl(\lambda_j(\theta(s), X(s), s) - 1\bigr)\, ds \biggr),$$

and $((\theta^\varepsilon, X^\varepsilon), \mathbf{Y}^\varepsilon, L^\varepsilon) \Rightarrow ((\theta, X), \mathbf{Y}, L)$ under the reference measures (condition C2.2(i) of Kurtz and Protter [5] holds in our case; see Example 3.3 there). Hence, Theorem 1 and Corollaries 1 and 2 imply (ii)-(iv), respectively. $\square$

References

[1] S. Ethier, T. Kurtz, Markov Processes: Characterization and Convergence, Wiley, New York, 1986.
[2] E. Goggin, Convergence in distribution of conditional expectations, Ann. Probab. 22 (1994).
[3] R.E. Kass, A.E. Raftery, Bayes factors and model uncertainty, J. Am. Stat. Assoc. 90 (1995).
[4] M. Kouritzin, Y. Zeng, Bayesian model selection via filtering for a class of micro-movement models of asset price, International Journal of Theoretical and Applied Finance, submitted.
[5] T. Kurtz, P. Protter, Weak limit theorems for stochastic integrals and stochastic differential equations, Ann. Probab. 19 (1991).
[6] Y. Zeng, A partially-observed model for micro-movement of asset prices with Bayes estimation via filtering, Math. Finance 13 (2003) 411-444.
[7] I. Crimaldi, L. Pratelli, Convergence results for conditional expectations, to appear in Bernoulli.
More informationWhat happens when there are many agents? Threre are two problems:
Moral Hazard in Teams What happens when there are many agents? Threre are two problems: i) If many agents produce a joint output x, how does one assign the output? There is a free rider problem here as
More informationNonparametric Drift Estimation for Stochastic Differential Equations
Nonparametric Drift Estimation for Stochastic Differential Equations Gareth Roberts 1 Department of Statistics University of Warwick Brazilian Bayesian meeting, March 2010 Joint work with O. Papaspiliopoulos,
More information1. Introduction Systems evolving on widely separated time-scales represent a challenge for numerical simulations. A generic example is
COMM. MATH. SCI. Vol. 1, No. 2, pp. 385 391 c 23 International Press FAST COMMUNICATIONS NUMERICAL TECHNIQUES FOR MULTI-SCALE DYNAMICAL SYSTEMS WITH STOCHASTIC EFFECTS ERIC VANDEN-EIJNDEN Abstract. Numerical
More informationICES REPORT Model Misspecification and Plausibility
ICES REPORT 14-21 August 2014 Model Misspecification and Plausibility by Kathryn Farrell and J. Tinsley Odena The Institute for Computational Engineering and Sciences The University of Texas at Austin
More informationA MODEL FOR THE LONG-TERM OPTIMAL CAPACITY LEVEL OF AN INVESTMENT PROJECT
A MODEL FOR HE LONG-ERM OPIMAL CAPACIY LEVEL OF AN INVESMEN PROJEC ARNE LØKKA AND MIHAIL ZERVOS Abstract. We consider an investment project that produces a single commodity. he project s operation yields
More informationRegular Variation and Extreme Events for Stochastic Processes
1 Regular Variation and Extreme Events for Stochastic Processes FILIP LINDSKOG Royal Institute of Technology, Stockholm 2005 based on joint work with Henrik Hult www.math.kth.se/ lindskog 2 Extremes for
More informationOn Differentiability of Average Cost in Parameterized Markov Chains
On Differentiability of Average Cost in Parameterized Markov Chains Vijay Konda John N. Tsitsiklis August 30, 2002 1 Overview The purpose of this appendix is to prove Theorem 4.6 in 5 and establish various
More informationStable Lévy motion with values in the Skorokhod space: construction and approximation
Stable Lévy motion with values in the Skorokhod space: construction and approximation arxiv:1809.02103v1 [math.pr] 6 Sep 2018 Raluca M. Balan Becem Saidani September 5, 2018 Abstract In this article, we
More informationAn application of hidden Markov models to asset allocation problems
Finance Stochast. 1, 229 238 (1997) c Springer-Verlag 1997 An application of hidden Markov models to asset allocation problems Robert J. Elliott 1, John van der Hoek 2 1 Department of Mathematical Sciences,
More informationFiltrations, Markov Processes and Martingales. Lectures on Lévy Processes and Stochastic Calculus, Braunschweig, Lecture 3: The Lévy-Itô Decomposition
Filtrations, Markov Processes and Martingales Lectures on Lévy Processes and Stochastic Calculus, Braunschweig, Lecture 3: The Lévy-Itô Decomposition David pplebaum Probability and Statistics Department,
More informationInfer relationships among three species: Outgroup:
Infer relationships among three species: Outgroup: Three possible trees (topologies): A C B A B C Model probability 1.0 Prior distribution Data (observations) probability 1.0 Posterior distribution Bayes
More informationStat 5421 Lecture Notes Proper Conjugate Priors for Exponential Families Charles J. Geyer March 28, 2016
Stat 5421 Lecture Notes Proper Conjugate Priors for Exponential Families Charles J. Geyer March 28, 2016 1 Theory This section explains the theory of conjugate priors for exponential families of distributions,
More informationMarkov Chains and Hidden Markov Models
Chapter 1 Markov Chains and Hidden Markov Models In this chapter, we will introduce the concept of Markov chains, and show how Markov chains can be used to model signals using structures such as hidden
More informationBayesian RL Seminar. Chris Mansley September 9, 2008
Bayesian RL Seminar Chris Mansley September 9, 2008 Bayes Basic Probability One of the basic principles of probability theory, the chain rule, will allow us to derive most of the background material in
More informationADVANCED PROBABILITY: SOLUTIONS TO SHEET 1
ADVANCED PROBABILITY: SOLUTIONS TO SHEET 1 Last compiled: November 6, 213 1. Conditional expectation Exercise 1.1. To start with, note that P(X Y = P( c R : X > c, Y c or X c, Y > c = P( c Q : X > c, Y
More informationConvergence of Markov Processes. Amanda Turner University of Cambridge
Convergence of Markov Processes Amanda Turner University of Cambridge 1 Contents 1 Introduction 2 2 The Space D E [, 3 2.1 The Skorohod Topology................................ 3 3 Convergence of Probability
More informationComment on Weak Convergence to a Matrix Stochastic Integral with Stable Processes
Comment on Weak Convergence to a Matrix Stochastic Integral with Stable Processes Vygantas Paulauskas Department of Mathematics and Informatics, Vilnius University Naugarduko 24, 03225 Vilnius, Lithuania
More informationBayesian Inference for DSGE Models. Lawrence J. Christiano
Bayesian Inference for DSGE Models Lawrence J. Christiano Outline State space-observer form. convenient for model estimation and many other things. Bayesian inference Bayes rule. Monte Carlo integation.
More informationIntroduction to Information Entropy Adapted from Papoulis (1991)
Introduction to Information Entropy Adapted from Papoulis (1991) Federico Lombardo Papoulis, A., Probability, Random Variables and Stochastic Processes, 3rd edition, McGraw ill, 1991. 1 1. INTRODUCTION
More informationOptimal filtering and the dual process
Optimal filtering and the dual process Omiros Papaspiliopoulos ICREA & Universitat Pompeu Fabra Joint work with Matteo Ruggiero Collegio Carlo Alberto & University of Turin The filtering problem X t0 X
More informationSTAT Sample Problem: General Asymptotic Results
STAT331 1-Sample Problem: General Asymptotic Results In this unit we will consider the 1-sample problem and prove the consistency and asymptotic normality of the Nelson-Aalen estimator of the cumulative
More informationMan Kyu Im*, Un Cig Ji **, and Jae Hee Kim ***
JOURNAL OF THE CHUNGCHEONG MATHEMATICAL SOCIETY Volume 19, No. 4, December 26 GIRSANOV THEOREM FOR GAUSSIAN PROCESS WITH INDEPENDENT INCREMENTS Man Kyu Im*, Un Cig Ji **, and Jae Hee Kim *** Abstract.
More informationSupermodular ordering of Poisson arrays
Supermodular ordering of Poisson arrays Bünyamin Kızıldemir Nicolas Privault Division of Mathematical Sciences School of Physical and Mathematical Sciences Nanyang Technological University 637371 Singapore
More informationStochastic Processes. Winter Term Paolo Di Tella Technische Universität Dresden Institut für Stochastik
Stochastic Processes Winter Term 2016-2017 Paolo Di Tella Technische Universität Dresden Institut für Stochastik Contents 1 Preliminaries 5 1.1 Uniform integrability.............................. 5 1.2
More information1: PROBABILITY REVIEW
1: PROBABILITY REVIEW Marek Rutkowski School of Mathematics and Statistics University of Sydney Semester 2, 2016 M. Rutkowski (USydney) Slides 1: Probability Review 1 / 56 Outline We will review the following
More informationInferring biological dynamics Iterated filtering (IF)
Inferring biological dynamics 101 3. Iterated filtering (IF) IF originated in 2006 [6]. For plug-and-play likelihood-based inference on POMP models, there are not many alternatives. Directly estimating
More informationIntroduction to self-similar growth-fragmentations
Introduction to self-similar growth-fragmentations Quan Shi CIMAT, 11-15 December, 2017 Quan Shi Growth-Fragmentations CIMAT, 11-15 December, 2017 1 / 34 Literature Jean Bertoin, Compensated fragmentation
More informationELEMENTS OF PROBABILITY THEORY
ELEMENTS OF PROBABILITY THEORY Elements of Probability Theory A collection of subsets of a set Ω is called a σ algebra if it contains Ω and is closed under the operations of taking complements and countable
More informationLAW OF LARGE NUMBERS FOR THE SIRS EPIDEMIC
LAW OF LARGE NUMBERS FOR THE SIRS EPIDEMIC R. G. DOLGOARSHINNYKH Abstract. We establish law of large numbers for SIRS stochastic epidemic processes: as the population size increases the paths of SIRS epidemic
More informationRandom Process Lecture 1. Fundamentals of Probability
Random Process Lecture 1. Fundamentals of Probability Husheng Li Min Kao Department of Electrical Engineering and Computer Science University of Tennessee, Knoxville Spring, 2016 1/43 Outline 2/43 1 Syllabus
More informationLecture 1: Brief Review on Stochastic Processes
Lecture 1: Brief Review on Stochastic Processes A stochastic process is a collection of random variables {X t (s) : t T, s S}, where T is some index set and S is the common sample space of the random variables.
More informationOn Stochastic Adaptive Control & its Applications. Bozenna Pasik-Duncan University of Kansas, USA
On Stochastic Adaptive Control & its Applications Bozenna Pasik-Duncan University of Kansas, USA ASEAS Workshop, AFOSR, 23-24 March, 2009 1. Motivation: Work in the 1970's 2. Adaptive Control of Continuous
More informationHands-On Learning Theory Fall 2016, Lecture 3
Hands-On Learning Theory Fall 016, Lecture 3 Jean Honorio jhonorio@purdue.edu 1 Information Theory First, we provide some information theory background. Definition 3.1 (Entropy). The entropy of a discrete
More informationNumerical Approximations to Optimal Nonlinear Filters
Numerical Approximations to Optimal Nonlinear Filters Harold J.Kushner Brown University, Applied Mathematics January, 28 Abstract Two types of numerical algorithms for nonlinear filters are considered.
More informationTHE ASYMPTOTIC ELASTICITY OF UTILITY FUNCTIONS AND OPTIMAL INVESTMENT IN INCOMPLETE MARKETS 1
The Annals of Applied Probability 1999, Vol. 9, No. 3, 94 95 THE ASYMPTOTIC ELASTICITY OF UTILITY FUNCTIONS AND OPTIMAL INVESTMENT IN INCOMPLETE MARKETS 1 By D. Kramkov 2 and W. Schachermayer Steklov Mathematical
More informationKrzysztof Burdzy Robert Ho lyst Peter March
A FLEMING-VIOT PARTICLE REPRESENTATION OF THE DIRICHLET LAPLACIAN Krzysztof Burdzy Robert Ho lyst Peter March Abstract: We consider a model with a large number N of particles which move according to independent
More informationTowards a Bayesian model for Cyber Security
Towards a Bayesian model for Cyber Security Mark Briers (mbriers@turing.ac.uk) Joint work with Henry Clausen and Prof. Niall Adams (Imperial College London) 27 September 2017 The Alan Turing Institute
More informationσ(a) = a N (x; 0, 1 2 ) dx. σ(a) = Φ(a) =
Until now we have always worked with likelihoods and prior distributions that were conjugate to each other, allowing the computation of the posterior distribution to be done in closed form. Unfortunately,
More informationThe University of Auckland Applied Mathematics Bayesian Methods for Inverse Problems : why and how Colin Fox Tiangang Cui, Mike O Sullivan (Auckland),
The University of Auckland Applied Mathematics Bayesian Methods for Inverse Problems : why and how Colin Fox Tiangang Cui, Mike O Sullivan (Auckland), Geoff Nicholls (Statistics, Oxford) fox@math.auckland.ac.nz
More informationMeasure-theoretic probability
Measure-theoretic probability Koltay L. VEGTMAM144B November 28, 2012 (VEGTMAM144B) Measure-theoretic probability November 28, 2012 1 / 27 The probability space De nition The (Ω, A, P) measure space is
More informationLearning Bayesian network : Given structure and completely observed data
Learning Bayesian network : Given structure and completely observed data Probabilistic Graphical Models Sharif University of Technology Spring 2017 Soleymani Learning problem Target: true distribution
More informationChapter 1. Statistical Spaces
Chapter 1 Statistical Spaces Mathematical statistics is a science that studies the statistical regularity of random phenomena, essentially by some observation values of random variable (r.v.) X. Sometimes
More informationStochastic Processes. M. Sami Fadali Professor of Electrical Engineering University of Nevada, Reno
Stochastic Processes M. Sami Fadali Professor of Electrical Engineering University of Nevada, Reno 1 Outline Stochastic (random) processes. Autocorrelation. Crosscorrelation. Spectral density function.
More informationData assimilation with and without a model
Data assimilation with and without a model Tyrus Berry George Mason University NJIT Feb. 28, 2017 Postdoc supported by NSF This work is in collaboration with: Tim Sauer, GMU Franz Hamilton, Postdoc, NCSU
More informationSpring 2012 Math 541B Exam 1
Spring 2012 Math 541B Exam 1 1. A sample of size n is drawn without replacement from an urn containing N balls, m of which are red and N m are black; the balls are otherwise indistinguishable. Let X denote
More informationExercises. T 2T. e ita φ(t)dt.
Exercises. Set #. Construct an example of a sequence of probability measures P n on R which converge weakly to a probability measure P but so that the first moments m,n = xdp n do not converge to m = xdp.
More informationLecture 4: State Estimation in Hidden Markov Models (cont.)
EE378A Statistical Signal Processing Lecture 4-04/13/2017 Lecture 4: State Estimation in Hidden Markov Models (cont.) Lecturer: Tsachy Weissman Scribe: David Wugofski In this lecture we build on previous
More informationMean field simulation for Monte Carlo integration. Part I : Intro. nonlinear Markov models (+links integro-diff eq.) P. Del Moral
Mean field simulation for Monte Carlo integration Part I : Intro. nonlinear Markov models (+links integro-diff eq.) P. Del Moral INRIA Bordeaux & Inst. Maths. Bordeaux & CMAP Polytechnique Lectures, INLN
More informationDynamic System Identification using HDMR-Bayesian Technique
Dynamic System Identification using HDMR-Bayesian Technique *Shereena O A 1) and Dr. B N Rao 2) 1), 2) Department of Civil Engineering, IIT Madras, Chennai 600036, Tamil Nadu, India 1) ce14d020@smail.iitm.ac.in
More information