4. Linear filtering. Wiener filter

Assume $(X_n, Y_n)$ is a pair of random sequences in which the first component $X_n$ is referred to as a signal and the second as an observation. The paths of the signal are unobservable while, on the contrary, the paths of the observation $Y_n$ are observable (measurable). Filtering is the procedure of estimating the signal value $X_n$ from the past of the observation $Y_n, Y_{n-1}, \ldots, Y_{n-m} =: Y^n_{n-m}$ (traditionally, $m = n$ or $m = \infty$). Denote by $\widehat{X}_n$ a filtering estimate, which is a function of the observation, that is, $\widehat{X}_n = \widehat{X}_n(Y^n_{n-m})$. Here and in what follows, we restrict ourselves to linear estimates (shortly, linear filters) and evaluate the quality of filtering by the mean square error $P_n = E(X_n - \widehat{X}_n)^2$ for every time parameter $n$. Obviously, such a setting requires the existence of second moments for the signal and the observation, $EX_n^2 < \infty$, $EY_n^2 < \infty$ for all $n$, and gives the optimal estimate in the form of the conditional expectation in the wide sense
$$
\widehat{X}_n = \widehat{E}(X_n \mid \mathscr{Y}_n), \tag{4.1}
$$
where $\mathscr{Y}_n$ is the linear space generated by $1, Y_n, Y_{n-1}, \ldots$ (henceforth, $\widehat{X}_n$ is assumed to be defined by (4.1)). A filtering problem for random processes thus consists in estimating an unobservable signal (a random process) via the paths of an observable one.

4.1. The Wiener filter. Assume that the signal-observation pair $(X_n, Y_n)$ is a stationary random sequence in the wide sense. To have $\widehat{X}_n$ also a stationary sequence, it is assumed that $m = \infty$, that is, the whole path $\ldots, Y_{n-1}, Y_n$ is observable. Since $\widehat{X}_n \in \mathscr{Y}_n$, it is defined as
$$
\widehat{X}_n = c_n + \sum_{k \le n} c_{n,k} Y_k, \tag{4.2}
$$
where the sum in (4.2) converges in $L^2$-norm and the parameters $c_n$, $c_{n,k}$ are chosen such that the error $X_n - \widehat{X}_n$ is orthogonal to $\mathscr{Y}_n$. Without loss of generality we may assume that both the signal and the observation are zero mean, and so, taking expectations on both sides of (4.2), we find that $c_n \equiv 0$. Although a general formula for $\widehat{E}(X_n \mid \mathscr{Y}_n)$ is given in Theorem 3.1 (Lect. 3), its application to the case considered here is problematic owing to the infinite dimension of $\mathscr{Y}_n$ (one could apply that formula only if we are ready to be satisfied with an approximation of $\widehat{X}_n$, namely the orthogonal projection on the linear space $\mathscr{Y}_{[n-m,n]}$ generated by $1, Y_n, Y_{n-1}, \ldots, Y_{n-m}$ for large $m$). Set $R_{yy}(k-l) = EY_kY_l$ and $R_{xy}(k-l) = EX_kY_l$.
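To make the covariance functions concrete, here is a minimal numerical sketch. It is not part of the notes: it borrows the scalar model $X_n = aX_{n-1} + \varepsilon_n$, $Y_n = X_n + \delta_n$ treated in the example on the next page, and the parameter choices ($a = 0.8$, sample size) are illustrative. It simulates the pair and compares the empirical $R_{xy}$, $R_{yy}$ with the closed forms $R_{xy}(k) = a^{|k|}/(1-a^2)$ and $R_{yy}(k) = R_{xy}(k) + \mathbf{1}\{k=0\}$.

```python
# Illustrative sketch (not from the notes): empirical vs. theoretical
# covariances for the AR(1) signal-observation pair used later:
#   X_n = a X_{n-1} + eps_n,  Y_n = X_n + delta_n,  |a| < 1.
import numpy as np

rng = np.random.default_rng(0)
a, N = 0.8, 200_000                        # illustrative choices

eps = rng.standard_normal(N)
delta = rng.standard_normal(N)
X = np.zeros(N)
for n in range(1, N):
    X[n] = a * X[n - 1] + eps[n]           # signal recursion
Y = X + delta                              # noisy observation

def emp_cov(U, V, k):
    """Empirical E[U_{n+k} V_n] for a zero-mean stationary pair."""
    return np.mean(U[k:] * V[:N - k])

for k in range(4):
    th = a**k / (1 - a**2)                 # theoretical R_xx(k)
    print(f"k={k}: R_xy={emp_cov(X, Y, k):.3f} (theory {th:.3f}), "
          f"R_yy={emp_cov(Y, Y, k):.3f} (theory {th + (k == 0):.3f})")
```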

Theorem 4.1. The optimal coefficients satisfy $c_{n,k} \equiv c_{n-k}$, and $(c_j)_{j \ge 0}$ solves the infinite system of linear algebraic equations (the Wiener-Hopf equation)
$$
R_{xy}(n-l) = \sum_{k \le n} c_{n-k} R_{yy}(k-l), \quad l \le n. \tag{4.3}
$$

Proof. Since the filtering error $X_n - \widehat{X}_n$ is orthogonal to $\mathscr{Y}_n$, for any $Y_l$, $l \le n$, we have $E(X_n - \widehat{X}_n)Y_l = 0$. Taking (4.2) into consideration, with $c_n \equiv 0$, from the latter equality we derive an infinite system of linear equations
$$
R_{xy}(n-l) = \sum_{k \le n} c_{n,k} R_{yy}(k-l). \tag{4.4}
$$
So it remains to prove only that $c_{n,k} = c_{n-k}$. Notice that for $l = n$ (denote the common value again by $l$), (4.4) is transformed, with the substitution $j = l - k$ and the symmetry $R_{yy}(-j) = R_{yy}(j)$, to
$$
R_{xy}(0) = \sum_{k \le l} c_{l,k} R_{yy}(k-l) = \sum_{j=0}^{\infty} c_{l,l-j} R_{yy}(j).
$$
Since the left side of this equality is independent of $l$, the right side has to be independent of $l$ as well, which provides $c_{l,l-j} = c_{l-(l-j)} = c_j$.

Example. Clearly, a solution of the Wiener-Hopf equation would be difficult to find directly, so we give here a simple example for which the solution of the Wiener-Hopf equation can be found. Let $(X_n, Y_n)$ be defined as follows: for any $n > -\infty$,
$$
X_n = aX_{n-1} + \varepsilon_n, \qquad Y_n = X_n + \delta_n,
$$
where $(\varepsilon_n)$, $(\delta_n)$ are orthogonal white noises, that is, any linear combinations of $(\varepsilon_n)$ and $(\delta_n)$ are uncorrelated, with $E\varepsilon_n \equiv 0$, $E\varepsilon_n^2 \equiv 1$ and $E\delta_n \equiv 0$, $E\delta_n^2 \equiv 1$. The absolute value of the parameter $a$ is smaller than 1, $|a| < 1$, which guarantees that the recursion for $X_n$ is stable and $X_n$ is a stationary process. Parallel to $\widehat{X}_n$, let us introduce $\widehat{X}_{n,n-1} = \widehat{E}(X_n \mid \mathscr{Y}_{n-1})$ and $\widehat{Y}_{n,n-1} = \widehat{E}(Y_n \mid \mathscr{Y}_{n-1})$, the orthogonal projections of $X_n$ and $Y_n$ on $\mathscr{Y}_{n-1}$, the subspace of $\mathscr{Y}_n$, and denote also
$$
\mathrm{Var}_{n,n-1}(X) = E\big(X_n - \widehat{X}_{n,n-1}\big)^2, \qquad
\mathrm{Var}_{n,n-1}(Y) = E\big(Y_n - \widehat{Y}_{n,n-1}\big)^2,
$$
$$
\mathrm{Cov}_{n,n-1}(X,Y) = E\big(X_n - \widehat{X}_{n,n-1}\big)\big(Y_n - \widehat{Y}_{n,n-1}\big).
$$
It is obvious that
$$
\widehat{X}_{n,n-1} = \widehat{E}(X_n \mid \mathscr{Y}_{n-1}) = \widehat{E}(aX_{n-1} + \varepsilon_n \mid \mathscr{Y}_{n-1}) = a\widehat{X}_{n-1}
$$
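The Wiener-Hopf system (4.3) can be checked numerically. The sketch below is an illustration, not part of the notes: writing $j = n-k$, $i = n-l$, equation (4.3) becomes $R_{xy}(i) = \sum_{j \ge 0} c_j R_{yy}(i-j)$ for $i \ge 0$, which we truncate to $i, j < M$ (a window size chosen here for illustration) and solve as a Toeplitz linear system. The result should agree closely with the closed-form coefficients $c_j = (a(1-P))^j P$ derived at the end of this example.

```python
# Illustrative sketch (not from the notes): truncated Wiener-Hopf system
# for the AR(1)+noise example of this page.
import numpy as np

a, M = 0.8, 60                              # illustrative choices
sx2 = 1.0 / (1 - a**2)                      # stationary Var(X)
R_xx = lambda k: sx2 * a**abs(k)
R_xy = lambda k: R_xx(k)                    # since Y = X + delta
R_yy = lambda k: R_xx(k) + (k == 0)         # white observation noise, var 1

T = np.array([[R_yy(i - j) for j in range(M)] for i in range(M)])
r = np.array([R_xy(i) for i in range(M)])
c = np.linalg.solve(T, r)                   # truncated Wiener-Hopf solution

# Compare with c_j = (a(1-P))^j P, where P is the fixed point of
# P = (a^2 P + 1)/(a^2 P + 2) obtained later in the lecture.
P = 0.5
for _ in range(100):
    P = (a**2 * P + 1) / (a**2 * P + 2)
print(c[:5])
print([(a * (1 - P))**j * P for j in range(5)])
```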

and similarly $\widehat{Y}_{n,n-1} = \widehat{E}(X_n + \delta_n \mid \mathscr{Y}_{n-1}) = a\widehat{X}_{n-1}$. These relations allow us to find explicitly that
$$
\mathrm{Var}_{n,n-1}(X) = E\big(X_n - a\widehat{X}_{n-1}\big)^2 = E\big(a(X_{n-1} - \widehat{X}_{n-1}) + \varepsilon_n\big)^2 = a^2 P_{n-1} + 1, \tag{4.5}
$$
$$
\mathrm{Var}_{n,n-1}(Y) = E\big(Y_n - a\widehat{X}_{n-1}\big)^2 = E\big(a(X_{n-1} - \widehat{X}_{n-1}) + \varepsilon_n + \delta_n\big)^2 = a^2 P_{n-1} + 2, \tag{4.6}
$$
and
$$
\mathrm{Cov}_{n,n-1}(X,Y) = E\big(X_n - a\widehat{X}_{n-1}\big)\big(Y_n - a\widehat{X}_{n-1}\big) = E\big(a(X_{n-1} - \widehat{X}_{n-1}) + \varepsilon_n\big)\big(a(X_{n-1} - \widehat{X}_{n-1}) + \varepsilon_n + \delta_n\big) = a^2 P_{n-1} + 1. \tag{4.7}
$$
We show now that
$$
\widehat{X}_n = a\widehat{X}_{n-1} + \frac{a^2 P_{n-1} + 1}{a^2 P_{n-1} + 2}\big(Y_n - a\widehat{X}_{n-1}\big). \tag{4.8}
$$
The proof of (4.8) is similar to the proof of Theorem 3.1 (Lect. 3). Set
$$
\eta = X_n - \widehat{X}_{n,n-1} + C\big(Y_n - \widehat{Y}_{n,n-1}\big) \tag{4.9}
$$
and choose the constant $C$ such that $\eta \perp \mathscr{Y}_n$, that is, $E\eta\xi = 0$ for any $\xi \in \mathscr{Y}_n$. With $\xi = Y_n - \widehat{Y}_{n,n-1}$, we have $0 = \mathrm{Cov}_{n,n-1}(X,Y) + C\,\mathrm{Var}_{n,n-1}(Y)$, i.e. $C$ is uniquely defined:
$$
C = -\frac{a^2 P_{n-1} + 1}{a^2 P_{n-1} + 2},
$$
and (4.8) is provided by (4.9). Notice also that the mean square error $P_n \equiv P$, since $(X_n, \widehat{X}_n)$ forms a stationary process. Hence, the recursion given in (4.8) is transformed to
$$
\widehat{X}_n = a\Big(1 - \frac{a^2 P + 1}{a^2 P + 2}\Big)\widehat{X}_{n-1} + \frac{a^2 P + 1}{a^2 P + 2}\,Y_n. \tag{4.10}
$$
We have to find now the number $P$. Notice that $\eta = X_n - \widehat{X}_n$, so that $P = E\eta^2$, and from (4.9) it follows that
$$
P = a^2 P + 1 - \frac{(a^2 P + 1)^2}{a^2 P + 2} = \frac{a^2 P + 1}{a^2 P + 2} < 1,
$$
and then (4.10) reads
$$
\widehat{X}_n = a(1 - P)\widehat{X}_{n-1} + P\,Y_n. \tag{4.11}
$$
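As a sanity check (again an illustration, not part of the notes; parameters are the same hypothetical choices as above), the following sketch iterates the fixed-point equation $P = (a^2P+1)/(a^2P+2)$ and runs the steady-state recursion (4.11) on simulated data; the empirical mean square error should match $P$.

```python
# Illustrative sketch (not from the notes): steady-state filter (4.11),
#   Xhat_n = a(1-P) Xhat_{n-1} + P Y_n,
# and its empirical mean square error versus the fixed point P.
import numpy as np

rng = np.random.default_rng(1)
a, N = 0.8, 200_000                         # illustrative choices

P = 0.5
for _ in range(100):                        # fixed-point iteration for P
    P = (a**2 * P + 1) / (a**2 * P + 2)

X = np.zeros(N); Xhat = np.zeros(N)
for n in range(1, N):
    X[n] = a * X[n - 1] + rng.standard_normal()
    Y = X[n] + rng.standard_normal()        # current observation
    Xhat[n] = a * (1 - P) * Xhat[n - 1] + P * Y

print("P (theory)   :", P)
print("MSE (empiric):", np.mean((X[N // 10:] - Xhat[N // 10:])**2))
```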

Emphasizing that $|a(1-P)| < 1$, we derive from (4.11) that
$$
\widehat{X}_n = \sum_{k \le n} \big(a(1-P)\big)^{n-k} P\,Y_k,
$$
so that $c_j = \big(a(1-P)\big)^j P$.

Generalization of the example. $(X_n, Y_n)$, treated as the signal and observation, is a zero mean, stationary in the wide sense process formed by two of the entries of $Z_n$, a stationary Markov process in the wide sense:
$$
Z_n = AZ_{n-1} + \xi_n, \tag{4.12}
$$
where $A$ is a matrix whose eigenvalues lie inside the unit circle (which guarantees that the recursion is stable), and $(\xi_n)$ is a vector-valued white noise type sequence; the latter means that for some nonnegative definite matrix $B$,
$$
E\xi_k\xi_l^T = \begin{cases} B, & k = l, \\ 0, & \text{otherwise.} \end{cases}
$$
For notational convenience, assume that $X_n$ is the first entry of $Z_n$ and $Y_n$ the last one. Also introduce $Z'_n$, the subvector of $Z_n$ containing all entries excluding $Y_n$; let, for definiteness, $Z'_n$ be a column vector of size $p$. Since $Z_n = \binom{Z'_n}{Y_n}$, we rewrite (4.12) in the form
$$
\binom{Z'_n}{Y_n} = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}\binom{Z'_{n-1}}{Y_{n-1}} + \binom{\xi'_n}{\xi''_n}, \tag{4.13}
$$
where $\begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix} = A$ and $\binom{\xi'_n}{\xi''_n} = \xi_n$; here $\xi'_n$ contains the first $p$ components of $\xi_n$. Notice also that
$$
B = \begin{pmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{pmatrix}, \quad \text{where } B_{11} = E\xi'_n(\xi'_n)^T, \ B_{12} = E\xi'_n\xi''_n, \ B_{21} = B_{12}^T, \ B_{22} = E(\xi''_n)^2 > 0. \tag{4.14}
$$
Denote $\widehat{Z}'_n = \widehat{E}(Z'_n \mid \mathscr{Y}_n)$. Under (4.14), $\widehat{Z}'_n$ is a stationary process defined recurrently:
$$
\widehat{Z}'_n = A_{11}\widehat{Z}'_{n-1} + A_{12}Y_{n-1} + \frac{B_{12} + A_{11}PA_{21}^T}{B_{22} + A_{21}PA_{21}^T}\big(Y_n - A_{21}\widehat{Z}'_{n-1} - A_{22}Y_{n-1}\big), \tag{4.15}
$$
where $P = E\big(Z'_n - \widehat{Z}'_n\big)\big(Z'_n - \widehat{Z}'_n\big)^T$ solves the algebraic Riccati equation
$$
P = A_{11}PA_{11}^T + B_{11} - \frac{\big(B_{12} + A_{11}PA_{21}^T\big)\big(B_{12} + A_{11}PA_{21}^T\big)^T}{B_{22} + A_{21}PA_{21}^T}. \tag{4.16}
$$
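The scalar example can be recast in the form (4.12): with $Z_n = (X_n, Y_n)^T$ one finds $A_{11} = a$, $A_{12} = 0$, $A_{21} = a$, $A_{22} = 0$ and $B_{11} = 1$, $B_{12} = 1$, $B_{22} = 2$, since $\xi'_n = \varepsilon_n$ and $\xi''_n = \varepsilon_n + \delta_n$. The sketch below (an illustration, not part of the notes) iterates (4.16) with these matrices; with $a = 0.8$ it should recover the same error $P \approx 0.578$ as the scalar fixed point.

```python
# Illustrative sketch (not from the notes): the scalar example recast in
# the matrix form (4.12)-(4.16); iterating (4.16) reproduces the scalar P.
import numpy as np

a = 0.8                                     # illustrative choice
A11, A21 = np.array([[a]]), np.array([[a]])
B11, B12, B22 = np.array([[1.]]), np.array([[1.]]), np.array([[2.]])

P = np.eye(1)
for _ in range(200):                        # fixed-point iteration of (4.16)
    G = B12 + A11 @ P @ A21.T               # gain numerator
    S = B22 + A21 @ P @ A21.T               # innovation variance
    P = A11 @ P @ A11.T + B11 - G @ np.linalg.inv(S) @ G.T

print(P)   # ~0.578, the same error as in the scalar example
```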

A derivation of (4.15) and (4.16) follows from Theorem 5.1 (Lect. 5). The first coordinate of $\widehat{Z}'_n$, obviously, is $\widehat{X}_n$. That estimate possesses the structure $\widehat{X}_n = \sum_{k \le n} c_{n-k}Y_k$ with coefficients $c_{n-k}$ calculated in accordance with (4.15) and (4.16). This result shows how intricate a method for solving the Wiener-Hopf equation might be. There is another way of solving the Wiener-Hopf equation, given in terms of spectral densities: for instance, one can apply the Fourier transform to both sides of (4.3). It is exposed in other courses.

Fixed length of observation. Let $m$ be a fixed number and $\mathscr{Y}_{[n-m,n]}$ be the linear space generated by $1, Y_n, Y_{n-1}, \ldots, Y_{n-m}$. Define $\widehat{X}_{[n-m,n]} = \widehat{E}\big(X_n \mid \mathscr{Y}_{[n-m,n]}\big)$, the filtering estimate under a fixed length of observation. Denote
$$
Y^n_{n-m} = \begin{pmatrix} Y_n \\ Y_{n-1} \\ \vdots \\ Y_{n-m} \end{pmatrix}, \quad
\mathrm{Cov}_{[n-m,n]}(Y,Y) = E\,Y^n_{n-m}\big(Y^n_{n-m}\big)^T, \quad
\mathrm{Cov}_{[n-m,n]}(X,Y) = E\,X_n\big(Y^n_{n-m}\big)^T.
$$
Since $(X_n, Y_n)$ is a stationary sequence, for any number $k$ we have
$$
\mathrm{Cov}_{[n-m,n]}(Y,Y) = \mathrm{Cov}_{[k-m,k]}(Y,Y), \qquad
\mathrm{Cov}_{[n-m,n]}(X,Y) = \mathrm{Cov}_{[k-m,k]}(X,Y).
$$
The latter property provides, assuming that $\mathrm{Cov}_{[n-m,n]}(Y,Y)$ is a nonsingular matrix, that the vector
$$
H_m = \mathrm{Cov}_{[n-m,n]}(X,Y)\,\mathrm{Cov}^{-1}_{[n-m,n]}(Y,Y) \tag{4.17}
$$
depends only on the number $m$ and so is fixed for any $n$. Thus, by Theorem 3.1 (Lect. 3), we have (recall that $EX_n \equiv 0$ and $EY^n_{n-m} \equiv 0$)
$$
\widehat{X}_{[n-m,n]} = H_m Y^n_{n-m}. \tag{4.18}
$$
The attractiveness of this type of filtering estimate is twofold. First, for fixed $m$ the vector $H_m$ is independent of $n$ and so is fixed (computed only once). Second, taking into consideration that for fixed $n$ the family $\mathscr{Y}_{[n-m,n]}$, $m \ge 1$, increases, i.e.
$$
\mathscr{Y}_{[n-m,n]} \subseteq \mathscr{Y}_{[n-(m+1),n]} \subseteq \ldots \subseteq \mathscr{Y}_n,
$$
the sequence of random variables $\widehat{X}_{[n-m,n]}$, $m \ge 1$, forms a martingale in the wide sense. Therefore, and by $EX_n^2 < \infty$, we have (see Section 3.4 in Lect. 3)
$$
\lim_{m \to \infty} E\big(\widehat{X}_n - \widehat{X}_{[n-m,n]}\big)^2 = 0.
$$
Consequently, for $m$ large enough, $\widehat{X}_{[n-m,n]}$ might be an appropriate approximation for $\widehat{X}_n$.
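A small numerical sketch (again for the AR(1) example with the same hypothetical parameters, and not part of the notes) shows the point: building $H_m$ from (4.17) for increasing $m$, the fixed-window error $E\big(X_n - H_mY^n_{n-m}\big)^2 = \mathrm{Var}(X) - H_m\,\mathrm{Cov}^T_{[n-m,n]}(X,Y)$ decreases to the steady-state error $P$.

```python
# Illustrative sketch (not from the notes): H_m of (4.17) for the AR(1)
# example; the fixed-window MSE decreases to the steady-state error P.
import numpy as np

a = 0.8                                     # illustrative choice
sx2 = 1.0 / (1 - a**2)
R_xy = lambda k: sx2 * a**abs(k)
R_yy = lambda k: R_xy(k) + (k == 0)

P = 0.5
for _ in range(100):
    P = (a**2 * P + 1) / (a**2 * P + 2)     # limiting error, for reference

for m in (0, 1, 2, 5, 10, 20):
    CYY = np.array([[R_yy(i - j) for j in range(m + 1)] for i in range(m + 1)])
    CXY = np.array([R_xy(k) for k in range(m + 1)])
    H = np.linalg.solve(CYY, CXY)           # H_m of (4.17)
    mse = sx2 - H @ CXY                     # E(X_n - H_m Y^n_{n-m})^2
    print(f"m={m:2d}: MSE={mse:.4f} (limit {P:.4f})")
```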
