Speaker Tracking and Beamforming


1 Speaker Tracking and Beamforming Dr. John McDonough Spoken Language Systems Saarland University January 13, 2010

2 Introduction Many problems in science and engineering can be formulated in terms of estimating a state based on observations. In this tutorial, we will discuss the Kalman filter, which is one possible solution to this problem. We will briefly mention both the probabilistic and joint probabilistic data association filters, which represent other possible solutions. We will also present the fundamentals of beamforming, whereby the signals from all sensors in a microphone array are combined for optimal speech enhancement.

3 Bayesian Filtering Figure: State-space model, showing the hidden process $x_{k-1} \rightarrow x_k \rightarrow x_{k+1}$ driven by the transitions $f_k$, and the observation process mapping each state $x_k$ to an observation $y_k$ through $h_k$.

4 Kalman Filter: System and Observation Equations The state model of the Kalman filter is given by $x_k = F_{k|k-1} x_{k-1} + u_{k-1}$, (1) and $y_k = H_k x_k + v_k$, (2) where $F_{k|k-1}$ and $H_k$ are the known transition and observation matrices. The noise terms $u_k$ and $v_k$ in (1)-(2) are by assumption zero-mean, white Gaussian random vector processes with covariance matrices $U_k = E\{u_k u_k^T\}$ and $V_k = E\{v_k v_k^T\}$. By assumption, $u_k$ and $v_k$ are statistically independent.

5 First-Order Markov Process The system model (1) represents a first-order Markov process, which implies that the current state of the system depends solely on the state immediately prior, such that $p(x_k \mid x_{0:k-1}) = p(x_k \mid x_{k-1}) \;\forall\; k \in \mathbb{N}$. (3) The observation model (2) implies that the observations depend only on the current system state: $p(y_k \mid x_{0:k}, y_{1:k-1}) = p(y_k \mid x_k) \;\forall\; k \in \mathbb{N}$. (4) Our goal is to track the filtering density $p(x_k \mid y_{1:k})$ in time. From Bayes' rule, we can write $p(x_k \mid y_{1:k}) = p(y_{1:k} \mid x_k)\, p(x_k) / p(y_{1:k})$. (5)

6 Tracking the Filtering Density This filtering density can be tracked sequentially as follows. Prediction: the prior pdf of $x_k$ can be obtained from $p(x_k \mid y_{1:k-1}) = \int p(x_k \mid x_{k-1})\, p(x_{k-1} \mid y_{1:k-1})\, dx_{k-1}$, (6) where the evolution of the state $p(x_k \mid x_{k-1})$ is defined by the state equation (1). Correction or update: the current observation is folded into the estimate of the filtering density through an invocation of Bayes' rule: $p(x_k \mid y_{1:k}) = \frac{p(y_k \mid x_k)\, p(x_k \mid y_{1:k-1})}{p(y_k \mid y_{1:k-1})}$, (7) where the normalization constant is given by $p(y_k \mid y_{1:k-1}) = \int p(y_k \mid x_k)\, p(x_k \mid y_{1:k-1})\, dx_k$.

7 Innovations Sequence Let $y_{1:k-1}$ denote all past observations up to time $k-1$, and let $\hat{y}_{k|k-1}$ denote the MMSE estimate $\hat{y}_{k|k-1} \triangleq E\{y_k \mid y_{1:k-1}\}$. By definition, the innovation is the difference $s_k \triangleq y_k - \hat{y}_{k|k-1}$ (8) between the actual and the predicted observations. The innovation contains the new information required for sequentially updating the filtering density $p(x_k \mid y_{1:k})$. This information cannot be predicted from the state-space model.

8 Properties of the Innovations Sequence The innovations process has three important properties. Orthogonality: the innovation $s_K$ at time $K$ is orthogonal to all past observations $y_1, \ldots, y_{K-1}$, such that $E\{s_K y_k^T\} = 0$ for $1 \le k \le K-1$. Whiteness: the innovations are orthogonal to each other, such that $E\{s_K s_k^T\} = 0$ for $1 \le k \le K-1$. Reversibility: there is a one-to-one correspondence between the observed data $y_{1:k} = \{y_1, \ldots, y_k\}$ and the sequence of innovations $s_{1:k} = \{s_1, \ldots, s_k\}$, such that either can always be uniquely recovered from the other.

9 Outline of Kalman Filter Development We will now present the principal quantities and relations in the operation of the KF. This presentation is intended to convey intuition rather than provide a rigorous derivation. The development of the KF will proceed in four phases: 1 Provide an expression for calculating the covariance matrix of the innovations process. 2 Obtain an expression for the sequential update of the MMSE state estimate. 3 Define the Kalman gain, which plays the pivotal role in the sequential state update considered in phase two. 4 State the Riccati equation, which provides the means to update the state estimation error covariance matrices required to calculate the Kalman gain.

10 Predicted State Estimation Error We begin by stating how the predicted observation may be calculated based on the current state estimate, namely $\hat{y}_{k|k-1} = H_k \hat{x}_{k|k-1}$. (9) In light of (8) and (9), we may write $s_k = y_k - H_k \hat{x}_{k|k-1}$. (10) Substituting (2) into (10), we find $s_k = H_k \epsilon_{k|k-1} + v_k$, (11) where $\epsilon_{k|k-1} = x_k - \hat{x}_{k|k-1}$ (12) is the predicted state estimation error at time $k$, using data up to time $k-1$.

11 Predicted State Estimation Error Covariance Matrix It holds that $\epsilon_{k|k-1}$ is orthogonal to $u_k$ and $v_k$. The correlation matrix of the innovations sequence can be expressed as $S_k \triangleq E\{s_k s_k^T\} = H_k K_{k|k-1} H_k^T + V_k$, (13) where the predicted state estimation error covariance matrix is defined as $K_{k|k-1} \triangleq E\{\epsilon_{k|k-1}\, \epsilon_{k|k-1}^T\}$. (14)

12 Kalman Gain The sequential update of the filtering density has two steps. First, there is a prediction, which can be expressed as $\hat{x}_{k|k-1} = F_{k|k-1}\, \hat{x}_{k-1|k-1}$. (15) The prediction does not use the current observation $y_k$. Second, there is an update or correction, $\hat{x}_{k|k} = \hat{x}_{k|k-1} + G_k s_k$, (16) where the Kalman gain is defined as $G_k \triangleq E\{x_k s_k^T\}\, S_k^{-1}$, (17) for $x_k$, $s_k$, and $S_k$ given by (1), (10), and (13).

13 Predictor-Corrector Structure To perform the Kalman update it is necessary to: premultiply the prior estimate $\hat{x}_{k-1|k-1}$ by $F_{k|k-1}$, and then add a correction factor consisting of the Kalman gain $G_k$ multiplied by the innovation $s_k$. The problem of Kalman filtering thus reduces to calculating the Kalman gain. Figure: The predictor-corrector structure of the Kalman filter, in which the innovation $s_k = y_k - H_k \hat{x}_{k|k-1}$ is scaled by $G_k$ and added to the prediction, with a unit delay $z^{-1} I$ closing the loop.

14 Riccati Equation The Kalman gain (17) can be calculated as $G_k = K_{k|k-1} H_k^T S_k^{-1}$, (18) where the correlation matrix $S_k$ of the innovations sequence is defined in (13). The Riccati equation then specifies how $K_{k|k-1}$ can be sequentially updated, namely as $K_{k|k-1} = F_{k|k-1}\, K_{k-1}\, F_{k|k-1}^T + U_{k-1}$. (19)

15 Filtered State Estimation Error The matrix $K_k$ in (19) is, in turn, obtained through the recursion $K_k = K_{k|k-1} - G_k H_k K_{k|k-1} = (I - G_k H_k)\, K_{k|k-1}$. (20) This matrix $K_k$ can be interpreted as the covariance matrix of the filtered state estimation error, $K_k \triangleq E\{\epsilon_k \epsilon_k^T\}$, where $\epsilon_k \triangleq x_k - \hat{x}_{k|k}$.

16 Filtered vs. Predicted State Estimation Error $\epsilon_{k|k-1}$ is the error in the state estimate made without knowledge of the current observation $y_k$; $\epsilon_k$ is the error in the state estimate with knowledge of $y_k$. Figure: The operation of the Kalman filter in the state and observation spaces.

17 Kalman Filtering Algorithm The steps leading to the sequential update of $\hat{x}_{k|k}$ are: $\hat{x}_{k|k-1} = F_{k|k-1}\, \hat{x}_{k-1|k-1}$ (prediction), (21) $K_{k|k-1} = F_{k|k-1}\, K_{k-1}\, F_{k|k-1}^T + U_{k-1}$, (22) $S_k = H_k K_{k|k-1} H_k^T + V_k$, (23) $G_k = K_{k|k-1} H_k^T S_k^{-1}$, (24) $s_k = y_k - H_k \hat{x}_{k|k-1}$, (25) $\hat{x}_{k|k} = \hat{x}_{k|k-1} + G_k s_k$ (correction), (26) $K_k = (I - G_k H_k)\, K_{k|k-1}$. (27)
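The recursion (21)-(27) above can be sketched directly in NumPy. This is a minimal illustration, not code from the lecture; the constant-velocity model, the noise covariances, and the observation sequence are hypothetical values chosen only to exercise one predict/correct cycle per observation.

```python
import numpy as np

def kalman_step(x_est, K, y, F, H, U, V):
    """One predict/correct cycle of the Kalman filter, eqs. (21)-(27)."""
    # Prediction (21) and Riccati update (22)
    x_pred = F @ x_est
    K_pred = F @ K @ F.T + U
    # Innovation covariance (23) and innovation (25)
    S = H @ K_pred @ H.T + V
    s = y - H @ x_pred
    # Kalman gain (24)
    G = K_pred @ H.T @ np.linalg.inv(S)
    # Correction (26) and filtered error covariance (27)
    x_new = x_pred + G @ s
    K_new = (np.eye(K.shape[0]) - G @ H) @ K_pred
    return x_new, K_new

# Hypothetical constant-velocity model: state [position, velocity],
# position-only observations of a roughly unit-velocity target.
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # transition matrix F_{k|k-1}
H = np.array([[1.0, 0.0]])               # observation matrix H_k
U = 0.01 * np.eye(2)                     # process noise covariance U_k
V = np.array([[0.25]])                   # observation noise covariance V_k
x, K = np.zeros(2), np.eye(2)
for y in [0.9, 2.1, 2.9, 4.2, 5.0]:
    x, K = kalman_step(x, K, np.array([y]), F, H, U, V)
```

After the five updates the filter has locked on to both the position and the velocity of the target, and the estimation error covariance has shrunk well below its prior value.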

18 Prediction and Correction (Again) The connection between the KF and sequential Bayesian filtering can be stated explicitly by writing the prediction as $p(x_k \mid y_{1:k-1}) = \mathcal{N}(x_k; \hat{x}_{k|k-1}, K_{k|k-1})$ (28) and the correction as $p(x_k \mid y_{1:k}) = \mathcal{N}(x_k; \hat{x}_{k|k}, K_k)$, (29) where $\mathcal{N}(x; \mu, \Sigma)$ is the multidimensional Gaussian pdf with mean vector $\mu$ and covariance matrix $\Sigma$.

19 PDAF Illustration Figure: Schematic illustrating the operation of the probabilistic data association filter, in which several candidate observations $y_k^{(1)}, y_k^{(2)}, y_k^{(3)}$ fall near the predicted observation $\hat{y}_{k|k-1}$ and compete to update a single track.

20 JPDAF Illustration Figure: Schematic illustrating the operation of the joint probabilistic data association filter, in which multiple candidate observations $y_k^{(1)}, \ldots, y_k^{(6)}$ are associated jointly with the $i$th and $j$th tracks, each with its own predicted observation and innovation covariance.

21 Speaker Tracking with the Kalman Filter Solving for that $x$ minimizing (43) would be eminently straightforward were it not for the fact that the objective function is nonlinear in $x$. In the coming development, we will find it useful to have a linear approximation. Hence, we take the partial derivative with respect to $x$ and write $\nabla_x T_n(x) = \frac{1}{c}\left[\frac{x - m_{n1}}{D_{n1}} - \frac{x - m_{n2}}{D_{n2}}\right]$, where $D_{nm} = \|x - m_{nm}\|$, for $n = 1, \ldots, M$ and $m = 1, 2$, is the distance between the source and the microphone located at $m_{nm}$.

22 A Taylor Series Approximation The speaker's position cannot change instantaneously. Thus both the present TDOA estimate $\hat{\tau}_n(k)$ and the past estimates $\{\hat{\tau}_n(i)\}_{i=1}^{k-1}$ are potentially useful in estimating a speaker's current position $x(k)$. Let us approximate $T_n(x)$ with a first-order Taylor series expansion about the last position estimate $\hat{x}(k-1)$ by writing $T_n(x) \approx T_n(\hat{x}(k-1)) + c_n^T(k)\,[x - \hat{x}(k-1)]$, (30) where the row vector $c_n^T(k)$ is given by $c_n^T(k) = \left[\nabla_x T_n(x)\right]^T\big|_{x=\hat{x}(k-1)} = \frac{1}{c}\left[\frac{x - m_{n1}}{D_{n1}} - \frac{x - m_{n2}}{D_{n2}}\right]^T\bigg|_{x=\hat{x}(k-1)}$. (31)

23 Linearization Substituting the linearization (30) into (43) provides $\epsilon(x; k) = \sum_{n=1}^{M} \frac{1}{\sigma_n^2}\left[\bar{\tau}_n(k) - c_n^T(k)\,x\right]^2$, (32) where $\bar{\tau}_n(k) = \hat{\tau}_n(k) - \left[T_n(\hat{x}(k-1)) - c_n^T(k)\,\hat{x}(k-1)\right]$ for $n = 1, \ldots, M$. (33) Let us define the stacked vectors $\bar{\tau}(k) = [\bar{\tau}_1(k)\ \bar{\tau}_2(k)\ \cdots\ \bar{\tau}_M(k)]^T$ and $\hat{\tau}(k) = [\hat{\tau}_1(k)\ \hat{\tau}_2(k)\ \cdots\ \hat{\tau}_M(k)]^T$. (34)

24 Linearization (cont'd.) Also define $T(\hat{x}(k)) = [T_1(\hat{x}(k))\ T_2(\hat{x}(k))\ \cdots\ T_M(\hat{x}(k))]^T$ and the matrix $C(k)$ whose rows are $c_1^T(k), c_2^T(k), \ldots, c_M^T(k)$. (35)

25 Linearization (cont'd.) Then (33) can be expressed in matrix form as $\bar{\tau}(k) = \hat{\tau}(k) - \left[T(\hat{x}(k-1)) - C(k)\,\hat{x}(k-1)\right]$. (36) Similarly, defining $\Sigma = \mathrm{diag}(\sigma_1^2, \sigma_2^2, \ldots, \sigma_M^2)$ (37) enables (32) to be expressed as $\epsilon(x; k) = \left[\bar{\tau}(k) - C(k)\,x\right]^T \Sigma^{-1} \left[\bar{\tau}(k) - C(k)\,x\right]$. (38)
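Equations (30)-(38) amount to one Gauss-Newton step: relinearize about the previous position estimate, then solve a weighted linear least-squares problem. The NumPy sketch below iterates that step on synthetic, noise-free TDOAs; the microphone layout, the source position, the starting guess, and the uniform variances are all hypothetical, and no Kalman smoothing is applied.

```python
import numpy as np

C_SOUND = 343.0  # speed of sound in m/s

def tdoa(x, m1, m2):
    """T(m1, m2, x) of eq. (39): TDOA of source x for one microphone pair."""
    return (np.linalg.norm(x - m1) - np.linalg.norm(x - m2)) / C_SOUND

def linearized_ls_step(x_prev, pairs, tau_hat, sigma2):
    """Minimize the quadratic form (38) obtained by linearizing about x_prev."""
    C_rows, tau_bar = [], []
    for (m1, m2), t_hat in zip(pairs, tau_hat):
        d1, d2 = np.linalg.norm(x_prev - m1), np.linalg.norm(x_prev - m2)
        c_n = ((x_prev - m1) / d1 - (x_prev - m2) / d2) / C_SOUND   # eq. (31)
        tau_bar.append(t_hat - (tdoa(x_prev, m1, m2) - c_n @ x_prev))  # eq. (33)
        C_rows.append(c_n)
    C, b = np.array(C_rows), np.array(tau_bar)
    w = 1.0 / np.sqrt(np.asarray(sigma2))          # fold Sigma^{-1/2} into C and b
    return np.linalg.lstsq(C * w[:, None], b * w, rcond=None)[0]

# Hypothetical tetrahedral layout of four microphones, all pairs used.
mics = [np.array(p, float) for p in [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]]
pairs = [(mics[i], mics[j]) for i in range(4) for j in range(i + 1, 4)]
x_true = np.array([0.4, 0.3, 0.2])
tau_hat = [tdoa(x_true, m1, m2) for m1, m2 in pairs]   # noise-free observations
x_est = np.array([0.45, 0.35, 0.25])                   # initial guess near the source
for _ in range(30):
    x_est = linearized_ls_step(x_est, pairs, tau_hat, sigma2=np.ones(len(pairs)))
```

With exact TDOAs and a starting point near the true position, the iteration converges to the source location; with noisy TDOAs, the same step would instead feed the Kalman filter developed above.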

26 Definition: Time Delay of Arrival Consider then a sensor array consisting of $N + 1$ microphones located at positions $m_i$, $i = 0, \ldots, N$. Let $x \in \mathbb{R}^3$ denote the position of the speaker in three-dimensional space, such that $x = [x\ y\ z]^T$ and $m_n = [m_{n,x}\ m_{n,y}\ m_{n,z}]^T$. The time delay of arrival (TDOA) between the microphones at positions $m_1$ and $m_2$ can be expressed as $T(m_1, m_2, x) = \frac{\|x - m_1\| - \|x - m_2\|}{c}$, (39) where $c$ is the speed of sound.

27 Phase Transform Let $\hat{\tau}_{mn}$ denote the observed TDOA for the $m$th and $n$th microphones. The TDOAs can be observed or estimated with a variety of well-known techniques. The most common method is based on the phase transform, $\rho_{mn}(\tau) \triangleq \frac{1}{2\pi} \int_{-\pi}^{\pi} \frac{Y_m(e^{j\omega})\, Y_n^*(e^{j\omega})}{\left|Y_m(e^{j\omega})\, Y_n^*(e^{j\omega})\right|}\, e^{j\omega\tau}\, d\omega$, (40) where $Y_n(e^{j\omega})$ denotes the short-time Fourier transform of the signal arriving at the $n$th sensor in the array.

28 Time Delay of Arrival The definition of the GCC in (40) follows directly from the frequency-domain calculation of the cross-correlation of two sequences. The normalization term $\left|Y_m(e^{j\omega})\, Y_n^*(e^{j\omega})\right|$ in the denominator of the integrand in (40) is intended to weight all frequencies equally. Such a weighting leads to more robust TDOA estimates in noisy and reverberant environments. The TDOA estimate is obtained from $\hat{\tau}_{mn} = \operatorname*{argmax}_\tau\, \rho_{mn}(\tau)$. (41)
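A discrete-time version of (40)-(41) can be sketched with an FFT. The NumPy sketch below is illustrative only: the sampling rate, the synthetic noise signal, and the small regularizing constant in the denominator are assumptions, and no sub-sample interpolation is performed.

```python
import numpy as np

def gcc_phat(y_m, y_n, fs, max_tau):
    """Discrete GCC-PHAT of eqs. (40)-(41): whiten the cross-spectrum,
    inverse-FFT, then search for the peak over lag tau."""
    n = len(y_m) + len(y_n)                    # zero-pad to avoid circular wrap
    Y_m = np.fft.rfft(y_m, n)
    Y_n = np.fft.rfft(y_n, n)
    cross = Y_m * np.conj(Y_n)
    # Phase transform: keep only the phase of the cross-spectrum
    rho = np.fft.irfft(cross / (np.abs(cross) + 1e-12), n)
    max_shift = int(fs * max_tau)
    # Reorder so lags run from -max_shift to +max_shift samples
    rho = np.concatenate((rho[-max_shift:], rho[:max_shift + 1]))
    return (np.argmax(np.abs(rho)) - max_shift) / fs   # tau_hat in seconds

# Synthetic check (hypothetical signals): delay a noise burst by 8 samples.
rng = np.random.default_rng(0)
fs = 16000
sig = rng.standard_normal(2048)
delay = 8
y1 = np.concatenate((np.zeros(delay), sig))    # delayed copy
y2 = np.concatenate((sig, np.zeros(delay)))
tau_hat = gcc_phat(y1, y2, fs, max_tau=0.001)
```

Because the two channels are exact shifted copies, the whitened cross-correlation is a sharp peak at the true lag of 8 samples.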

29 Details of Calculating TDOAs In other words, the true TDOA is taken as that which maximizes the GCC $\rho_{mn}(\tau)$. For reasons of computational efficiency, $\rho_{mn}(\tau)$ is typically calculated with an inverse FFT. Thereafter, an interpolation is performed to overcome the granularity in the estimate corresponding to the sampling interval. Usually, the short-time spectra $Y_n(e^{j\omega_k})$ appearing in (40) are calculated with a Hamming analysis window of 15 to 25 ms in duration.

30 Maximum Likelihood Speaker Tracking Consider two microphones located at $m_{n1}$ and $m_{n2}$ comprising the $n$th microphone pair. Define the TDOA as $T_n(x) = T(m_{n1}, m_{n2}, x) = \frac{\|x - m_{n1}\| - \|x - m_{n2}\|}{c}$, (42) where $c$ is the speed of sound and $x$ denotes the position of an active speaker. Source localization based on the maximum likelihood criterion proceeds by minimizing the error function $\epsilon(x) = \sum_{n=1}^{M} \frac{1}{\sigma_n^2}\left[\hat{\tau}_n - T_n(x)\right]^2$, (43) where $\sigma_n^2$ is the error variance and $\hat{\tau}_n$ is the observed TDOA from (40) and (41).

31 Sound Propagation Figure: Relation between the spherical coordinates $(r, \theta, \phi)$ and the Cartesian coordinates $(x, y, z)$ of a plane wave, where $x = r \sin\theta \cos\phi$, $y = r \sin\theta \sin\phi$, and $z = r \cos\theta$.

32 Array Processing We process the output of each sensor with an LTI filter with impulse response $h_n(\tau)$ and sum the outputs: $y(t) = \sum_{n=0}^{N-1} \int h_n(t - \tau)\, f_n(\tau, m_n)\, d\tau$. In matrix notation, $y(t) = \int h^T(t - \tau)\, f(\tau, m)\, d\tau$, where $h(t) = [h_0(t)\ h_1(t)\ \cdots\ h_{N-1}(t)]^T$.

33 Frequency Domain Array Processing In the frequency domain, this becomes $Y(\omega) = \int y(t)\, e^{-j\omega t}\, dt = H^T(\omega)\, F(\omega)$, where $H(\omega) = \int h(t)\, e^{-j\omega t}\, dt$ and $F(\omega, m) = \int f(t, m)\, e^{-j\omega t}\, dt$.

34 Array Manifold Vector In acoustic beamforming it may happen, however, that the distance from the source to any one of the sensors of an array is of the same order as the aperture of the array itself. The array manifold vector must then be defined as $v_x(x) \triangleq \left[e^{-j\omega \|x - m_0\|/c}\ \ e^{-j\omega \|x - m_1\|/c}\ \cdots\ e^{-j\omega \|x - m_{N-1}\|/c}\right]^T$, where $x$ is the position of the desired source in Cartesian coordinates.

35 Delay-and-Sum Beamformer Implementations Figure: Time and subband domain implementations of the delay-and-sum beamformer. In the time domain, each sensor signal $f(t - \tau_n)$ is aligned by a delay $\delta(t + \tau_n)$ and the outputs are summed and scaled by $1/N$; in the subband domain, each subband sample is instead multiplied by the phase factor $e^{j\omega_k \tau_n}$.

36 Beam Pattern in Various Spaces The beam pattern is given by $B_\phi(\phi) = \frac{1}{N}\, \frac{\sin\left(\frac{N}{2}\, \frac{2\pi}{\lambda}\, d \cos\phi\right)}{\sin\left(\frac{1}{2}\, \frac{2\pi}{\lambda}\, d \cos\phi\right)}$ for $0 \le \phi \le \pi$. In $u$-space this becomes $B_u(u) = \frac{1}{N}\, \frac{\sin\left(\frac{\pi N d}{\lambda}\, u\right)}{\sin\left(\frac{\pi d}{\lambda}\, u\right)}$ for $-1 \le u \le 1$. And in $\psi$-space, $B_\psi(\psi) = \frac{1}{N}\, \frac{\sin\left(\frac{N\psi}{2}\right)}{\sin\left(\frac{\psi}{2}\right)}$ for $-\frac{2\pi d}{\lambda} \le \psi \le \frac{2\pi d}{\lambda}$.
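The $u$-space pattern above is easy to evaluate numerically. The small NumPy sketch below (the grid resolution is an assumption, and the $0/0$ limit at $u = 0$ is handled explicitly) confirms the unit response at broadside and the first null at $u = \lambda/(Nd)$.

```python
import numpy as np

def beam_pattern_u(u, N, d_over_lambda):
    """B_u(u) = sin(pi N d u / lambda) / (N sin(pi d u / lambda)) for a ULA."""
    u = np.asarray(u, float)
    num = np.sin(np.pi * N * d_over_lambda * u)
    den = N * np.sin(np.pi * d_over_lambda * u)
    safe = np.where(den == 0.0, 1.0, den)      # avoid 0/0 at u = 0
    return np.where(den == 0.0, 1.0, num / safe)

u = np.linspace(-1.0, 1.0, 2001)               # direction-cosine grid
B = beam_pattern_u(u, N=10, d_over_lambda=0.5)
```

For $N = 10$ and $d = \lambda/2$ the main lobe peaks at 1 for $u = 0$, the first null falls at $u = 0.2$, and no grating lobe enters the visible region.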

37 Effect of Element Spacing Figure: Effect of element spacing on beam patterns in linear and polar coordinates for $N = 10$, with $u$-space panels for $d = \lambda/4$, $d = \lambda/2$, and $d = \lambda$.

38 Effect of Steering Figure: Effect of steering on the grating lobes for $N = 20$, plotted in linear and polar coordinates in $u$-space, with panels for $d = 2\lambda/3$, $\phi = 45°$ and for $d = \lambda/2$.

39 Inter-Sensor Spacing The position of the first grating lobe satisfies $|u - u_T| = \lambda/d$. Hence, keeping grating lobes out of the visible region requires $|u - u_T| \le \lambda/d$ for every visible $u$. This quantity can be overbounded as $|u - u_T| \le |u| + |u_T| \le 1 + \sin\phi_{\max} \le \lambda/d$, where $\phi_{\max}$ is the maximum broadside angle to which the pattern is to be steered. Excluding grating lobes from the visible region therefore requires $d \le \frac{\lambda}{1 + \sin\phi_{\max}}$. If the array is to be steered over the entire half plane, it must hold that $d \le \lambda/2$.

40 Array Gain We will in all cases take word error rate (WER) as the most important measure of system performance. It is useful, however, to have other performance measures intended to assess a single component. Signal-to-noise ratio (SNR), a very common metric for signal quality, is the ratio of signal power to noise power. Array gain is a measure of how much improvement in SNR is achieved by a sensor array: it is the ratio of the SNR at the output of the array to that at the input of any given sensor.

41 Snapshot Model Let $X(\omega) \in \mathbb{C}^N$ denote a subband domain snapshot: $X(\omega) = F(\omega) + N(\omega)$, (44) where $F(\omega)$ denotes the subband-domain snapshot of the desired signal and $N(\omega)$ denotes that of the noise or other interference impinging on the sensors of the array. By assumption, $F(\omega)$ and $N(\omega)$ are uncorrelated, and the signal vector $F(\omega)$ is $F(\omega) = F(\omega)\, v_k(k_s)$, (45) where $v_k(k_s)$ is the array manifold vector for the wavenumber $k_s$ of the desired signal.

42 Second Order Statistics We now introduce the notation necessary for specifying the second-order statistics of random variables and vectors. In general, for some complex scalar random variable $Y(\omega)$, we will define $\Sigma_Y(\omega) \triangleq E\{|Y(\omega)|^2\}$. Similarly, for a complex random vector $X(\omega)$, let us define the spatial spectral matrix as $\Sigma_X(\omega) \triangleq E\{X(\omega)\, X^H(\omega)\}$.

43 Signal and Noise Components Let us begin by assuming that the component of the desired signal reaching each sensor of the array is $F(\omega)$ and the component of the noise or interference reaching each sensor is $N(\omega)$. This implies that the SNR at the input of the array can be expressed as $\mathrm{SNR}_{\mathrm{in}}(\omega) \triangleq \frac{\Sigma_F(\omega)}{\Sigma_N(\omega)}$. (46) In the frequency domain, the output of the beamformer is $Y(\omega) = \int y(t)\, e^{-j\omega t}\, dt = H^T(\omega)\, X(\omega)$. (47)

44 Signal and Noise Components (cont'd.) Let us now define $w^H(\omega) = H^T(\omega)$. Then (47) can be rewritten as $Y(\omega) = w^H(\omega)\, X(\omega) = Y_F(\omega) + Y_N(\omega)$, (48) where $Y_F(\omega) = w^H(\omega)\, F(\omega)$ and $Y_N(\omega) = w^H(\omega)\, N(\omega)$ are, respectively, the signal and noise components in the output of the beamformer.

45 Beamformer Output When the delay-and-sum beamformer (DSB) is steered to wavenumber $k = k_T$, the sensor weights become $w^H = \frac{1}{N}\, v_k^H(k_T)$. (49) The variance of the output of the beamformer can be calculated according to $\Sigma_Y(\omega) = E\{|Y(\omega)|^2\} = \Sigma_{Y_F}(\omega) + \Sigma_{Y_N}(\omega)$, (50) where $\Sigma_{Y_F}(\omega) = w^H(\omega)\, \Sigma_F(\omega)\, w(\omega)$ (51) is the signal component of the beamformer output, and $\Sigma_{Y_N}(\omega) = w^H(\omega)\, \Sigma_N(\omega)\, w(\omega)$ (52) is the noise component. Equations (50)-(52) follow directly from the assumption that $F(\omega)$ and $N(\omega)$ are uncorrelated.
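As a quick numerical check of (49)-(52), the sketch below builds a far-field ULA manifold (the geometry, element count, and steering direction are assumed values) and verifies that the DSB weights pass the look direction with unit gain and improve the SNR by a factor of $N$ against spatially white noise.

```python
import numpy as np

def manifold_ula(N, d_over_lambda, u):
    """Far-field manifold of an N-element ULA for direction cosine u."""
    return np.exp(-2j * np.pi * d_over_lambda * np.arange(N) * u)

N = 8
v_s = manifold_ula(N, 0.5, u=0.3)      # assumed look direction
w = v_s / N                            # DSB weights, eq. (49)
signal_gain = np.vdot(w, v_s)          # w^H v(k_s): should equal 1
# Against spatially white noise (rho_N = I), the array gain reduces to
# |w^H v|^2 / (w^H w) = N, in agreement with eq. (58).
white_gain = np.abs(np.vdot(w, v_s))**2 / np.real(np.vdot(w, w))
```

Note that `np.vdot` conjugates its first argument, so `np.vdot(w, v_s)` is exactly the inner product $w^H v$ used throughout these slides.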

46 Signal Component in Beamformer Output Expressing the snapshot of the desired signal once more as in (45), we find that the spatial spectral matrix of the desired signal can be written as $\Sigma_F(\omega) = \Sigma_F(\omega)\, v_k(k_s)\, v_k^H(k_s)$, (53) where the scalar spectrum is $\Sigma_F(\omega) = E\{|F(\omega)|^2\}$. Substituting (53) into (51), we can calculate the output signal spectrum as $\Sigma_{Y_F}(\omega) = w^H(\omega)\, v_k(k_s)\, \Sigma_F(\omega)\, v_k^H(k_s)\, w(\omega) = \Sigma_F(\omega)$, (54) where the final equality follows from the definition (49) of the DSB.

47 Array Gain of Delay-and-Sum Beamformer Substituting (49) into (52), it follows that the noise component present at the output of the DSB is given by $\Sigma_{Y_N}(\omega) = \frac{1}{N^2}\, v^H(k_s)\, \Sigma_N(\omega)\, v(k_s) = \frac{1}{N^2}\, v^H(k_s)\, \rho_N(\omega)\, v(k_s)\, \Sigma_N(\omega)$, (55) where the normalized spatial spectral matrix $\rho_N(\omega)$ is defined through $\Sigma_N(\omega) = \Sigma_N(\omega)\, \rho_N(\omega)$. (56) Hence, the SNR at the output of the beamformer is $\mathrm{SNR}_{\mathrm{out}}(\omega) \triangleq \frac{\Sigma_{Y_F}(\omega)}{\Sigma_{Y_N}(\omega)} = \frac{\Sigma_F(\omega)}{w^H(\omega)\, \Sigma_N(\omega)\, w(\omega)}$. (57)

48 Array Gain of DSB (cont'd.) Then, based on (46) and (57), we can calculate the array gain of the DSB as $A_{\mathrm{dsb}}(\omega, k_s) = \frac{\Sigma_{Y_F}(\omega) / \Sigma_{Y_N}(\omega)}{\Sigma_F(\omega) / \Sigma_N(\omega)} = \frac{N^2}{v^H(k_s)\, \rho_N(\omega)\, v(k_s)}$. (58)

49 Distortionless Constraint Many adaptive beamforming algorithms impose a distortionless constraint. In particular, for a plane wave arriving along the main response axis $k_s$, we require $Y(\omega) = F(\omega)$, (59) where $Y(\omega)$ is the beamformer output and $F(\omega)$ is the Fourier transform of the source signal. From (45) and (48) it follows that $Y(\omega) = F(\omega)\, w^H(\omega)\, v(k_s)$. Hence, the distortionless constraint can be expressed as $w^H(\omega)\, v(k_s) = 1$. (60)

50 Distortionless Constraint and the DSB Clearly, setting $w^H(\omega) = \frac{1}{N}\, v^H(k_s)$, as is the case for the DSB, will satisfy (60). Thus, the DSB satisfies the distortionless constraint.

51 Snapshot Model Now let us characterize the noise snapshot as a zero-mean process with spatial spectral matrix $\Sigma_N(\omega) = E\{N(\omega)\, N^H(\omega)\} = \Sigma_c(\omega) + \sigma_w^2 I$, where $\Sigma_c(\omega)$ and $\sigma_w^2 I$ are the spatially correlated and uncorrelated portions, respectively, of the noise covariance matrix. Spatially correlated interference is due to the propagation of some interfering signal through space. Uncorrelated noise is typically due to the self-noise of the sensors.

52 Snapshot Model (cont'd.) The beamformer output will be specified as $Y(\omega) = w^H(\omega)\, X(\omega) = Y_F(\omega) + Y_N(\omega)$. When noise is present, we can write $Y(\omega) = F(\omega) + Y_N(\omega)$, where $Y_N(\omega) = w^H(\omega)\, N(\omega)$ is the noise component remaining in the beamformer output.

53 Optimization Criterion In addition to satisfying the distortionless constraint, we wish also to minimize the variance of the output. To solve the constrained optimization problem, we can apply the method of Lagrange multipliers. To wit, we first define the symmetric objective function $\mathcal{F} \triangleq w^H(\omega)\, \Sigma_N(\omega)\, w(\omega) + \lambda\left[w^H(\omega)\, v(k_s) - 1\right] + \lambda^*\left[v^H(k_s)\, w(\omega) - 1\right]$, where $\lambda$ is a complex Lagrange multiplier, to incorporate the constraint into the objective function. Taking the complex gradient with respect to $w^H$, equating this gradient to zero, and solving yields $w_{\mathrm{mvdr}}^H(\omega) = \lambda\, v^H(k_s)\, \Sigma_N^{-1}(\omega)$. (61)

54 Minimum Variance Distortionless Response Beamformer Applying now the distortionless constraint (60), we find $\lambda = \left[v^H(k_s)\, \Sigma_N^{-1}(\omega)\, v(k_s)\right]^{-1}$. Thus, the optimal sensor weights are given by $w_o^H(\omega) = \Lambda(\omega)\, v^H(k_s)\, \Sigma_N^{-1}(\omega) = w_{\mathrm{mvdr}}^H(\omega)$, (62) where $\Lambda(\omega) \triangleq \left[v^H(k_s)\, \Sigma_N^{-1}(\omega)\, v(k_s)\right]^{-1}$. (63)
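A direct NumPy transcription of (62)-(63) is given below. The scenario (an 8-element ULA, one plane-wave interferer, and the noise powers shown) is hypothetical; the checks confirm the distortionless constraint (60) and show that the interferer is strongly suppressed.

```python
import numpy as np

def mvdr_weights(Sigma_N, v_s):
    """w^H = Lambda v^H(k_s) Sigma_N^{-1}, with Lambda from eq. (63)."""
    Si_v = np.linalg.solve(Sigma_N, v_s)       # Sigma_N^{-1} v(k_s)
    Lam = 1.0 / np.vdot(v_s, Si_v)             # [v^H Sigma_N^{-1} v]^{-1}
    return Lam * Si_v, Lam                     # w such that w^H = Lam v^H Si

# Hypothetical scenario: ULA, desired source at broadside, one interferer.
N, d_over_lam = 8, 0.5
n = np.arange(N)
v = lambda u: np.exp(-2j * np.pi * d_over_lam * n * u)
v_s, v_1 = v(0.0), v(0.3)
sigma_w2, M1 = 0.01, 10.0                      # assumed noise and interferer powers
Sigma_N = sigma_w2 * np.eye(N) + M1 * np.outer(v_1, v_1.conj())   # eq. (65)
w, Lam = mvdr_weights(Sigma_N, v_s)
```

The output noise power $w^H \Sigma_N w$ equals $\Lambda(\omega)$ exactly, which is the identity derived on the next slide.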

55 MVDR Schematic The figure shows a schematic of the MVDR beamformer: the snapshot $X(\omega)$ is filtered by $\Sigma_N^{-1}(\omega)$, then by $v^H(k_s)$, and finally scaled by $\Lambda(\omega)$ to produce $Y(\omega)$. $\Lambda(\omega)$ is the spectral power of the noise component in $Y(\omega)$, as is apparent from $\Sigma_{Y_N}(\omega) = w_{\mathrm{mvdr}}^H(\omega)\, \Sigma_N(\omega)\, w_{\mathrm{mvdr}}(\omega) = v^H(k_s)\, \Sigma_N^{-1}(\omega)\, \Sigma_N(\omega)\, \Sigma_N^{-1}(\omega)\, v(k_s)\, \Lambda^2(\omega) = \Lambda(\omega)$. (64)

56 Advantages of Subband Processing The foregoing implies that the sensor weights for each subband are designed independently. This is one of the chief advantages of subband domain adaptive beamforming. In particular, the transformation into the subband domain has the effect of a divide and conquer optimization scheme. A single optimization problem over MN free parameters, where M is the number of subbands and N is the number of sensors, is converted into M optimization problems, each with N free parameters. Each of the M optimization problems can be solved independently, which is a direct result of the statistical independence of the subband samples.

57 Array Gain of MVDR Beamformer As $w_{\mathrm{mvdr}}^H(\omega)$ satisfies the distortionless constraint, the power spectrum of the desired signal at the output of the beamformer can be expressed as $\Sigma_{Y_F}(\omega) = \Sigma_F(\omega)$, where $\Sigma_F(\omega)$ is the power spectrum of the desired signal $F(\omega)$ at the input of each sensor. Hence, based on (64), the output SNR can be written as $\frac{\Sigma_F(\omega)}{\Sigma_{Y_N}(\omega)} = \frac{\Sigma_F(\omega)}{\Lambda(\omega)}$.

58 MVDR Beamformer with Plane Wave Interference Consider a desired signal with array manifold vector $v(k_s)$ and a single plane-wave interfering signal with manifold vector $v(k_1)$. In addition, there is uncorrelated sensor noise with power $\sigma_w^2$. In this case, the spatial spectral matrix $\Sigma_N(\omega)$ of the noise can be expressed as $\Sigma_N(\omega) = \sigma_w^2 I + M_1(\omega)\, v(k_1)\, v^H(k_1)$, (65) where $M_1(\omega)$ is the spectrum of the interfering signal.

59 Plane Wave Interference (cont'd.) Applying the matrix inversion lemma to (65) provides $\Sigma_N^{-1} = \frac{1}{\sigma_w^2}\left[I - \frac{M_1}{\sigma_w^2 + N M_1}\, v_1 v_1^H\right]$, (66) where we have suppressed the dependence on $\omega$ and $k$ for convenience and defined $v_1 \triangleq v(k_1)$. The noise spectrum at each element of the array can be expressed as $\Sigma_N = \sigma_w^2 + M_1$. (67)
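The rank-one update in (66) is the Sherman-Morrison form of the matrix inversion lemma, and it is easy to verify numerically. The parameter values below are hypothetical, chosen only to make the check concrete; note that the identity relies on $v_1^H v_1 = N$.

```python
import numpy as np

N, sigma_w2, M1 = 8, 0.1, 2.0                   # assumed values
n = np.arange(N)
v1 = np.exp(-2j * np.pi * 0.5 * n * 0.4)        # interferer manifold, v1^H v1 = N
Sigma_N = sigma_w2 * np.eye(N) + M1 * np.outer(v1, v1.conj())   # eq. (65)
# Right-hand side of eq. (66)
Sigma_N_inv = (np.eye(N)
               - (M1 / (sigma_w2 + N * M1)) * np.outer(v1, v1.conj())) / sigma_w2
```

Multiplying the two matrices recovers the identity, confirming that the coefficient $M_1 / (\sigma_w^2 + N M_1)$ exactly cancels the rank-one term.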

60 MVDR Beamformer with Plane Wave Interference (cont'd.) Substituting (66) into (62), we find $w_{\mathrm{mvdr}}^H = \frac{\Lambda}{\sigma_w^2}\, v_s^H \left[I - \frac{M_1}{\sigma_w^2 + N M_1}\, v_1 v_1^H\right]$. (68) The spatial correlation coefficient between the desired signal and the interference is defined as $\rho_{s1} \triangleq \frac{v_s^H v_1}{N}$. (69)

61 Plane Wave Interference (cont'd.) Note that $\rho_{s1} = B_{\mathrm{dsb}}(k_1 : k_s)$, where $B_{\mathrm{dsb}}(k_1 : k_s)$ is the delay-and-sum beam pattern aimed at $k_s$, the wavenumber of the desired signal, and evaluated at $k_1$, the wavenumber of the interference. With this definition, (68) can be rewritten as $w_{\mathrm{mvdr}}^H = \frac{\Lambda}{\sigma_w^2}\left[v_s^H - \frac{N M_1}{\sigma_w^2 + N M_1}\, \rho_{s1}\, v_1^H\right]$. (70)

62 Schematic: MVDR Beamformer with Plane Wave Interference Figure: Optimum MVDR beamformer in the presence of a single interferer, shown both in the general case, where the lower branch forms the interference estimate $\hat{N}_1(\omega)$, scales it by $\rho_{s1}$, and subtracts it from the upper branch, and in the high interference-to-noise-ratio limit, where the structure reduces to the projection $P_I$ followed by $\frac{\Lambda}{\sigma_w^2}\, v^H(k_s)$.

63 Analysis of Plane Wave Interference The normalizing coefficient (63) then reduces to $\Lambda = \frac{\sigma_w^2}{N}\left\{1 - \frac{N M_1}{\sigma_w^2 + N M_1}\, |\rho_{s1}|^2\right\}^{-1}$. (71) It is clear that the upper and lower branches of this MVDR beamformer correspond to conventional beamformers pointing at the desired signal and the interference, respectively. The necessity of the bottom branch is readily apparent if we reason as follows: the path labeled $\hat{N}_1(\omega)$ is the minimum mean-square estimate of the interference plus noise. This noise estimate is scaled by $\rho_{s1}$ and subtracted from the output of the DSB in the upper path, in order to remove that portion of the noise and interference captured by the upper path.

64 Analysis (cont'd.) Observe that in the case where $N M_1 \gg \sigma_w^2$, we may rewrite (70) as $w_{\mathrm{mvdr}}^H = \frac{\Lambda}{\sigma_w^2}\, v_s^H\, P_I$, where $P_I = I - \frac{v_1 v_1^H}{N}$ is the projection matrix onto the space orthogonal to the interference. This case is shown schematically in Figure 9, which indicates that the beamformer is placing a perfect null on the interference.

65 Limiting Case: Plane Wave Interference Substituting (67) and (71) into the definition of the array gain provides $A_{\mathrm{mvdr}} = N(1 + \sigma_I^2)\, \frac{1 + N\sigma_I^2\left(1 - |\rho_{s1}|^2\right)}{1 + N\sigma_I^2}$, where the interference-to-noise ratio (INR), defined as $\sigma_I^2 \triangleq \frac{M_1}{\sigma_w^2}$, is the ratio of spatially correlated to uncorrelated noise. Observe that the suppression of the interference is not perfect when either $\sigma_I^2$ is very low, or $u_I$ is very small such that the interference moves within the main lobe region of the delay-and-sum beam pattern.

66 MVDR Beam Patterns Figure: MVDR beam patterns for single plane-wave interference, with $u$-space panels for $\sigma_I^2 = -10$ dB and $\sigma_I^2 = 20$ dB at two interferer positions $u_I$.

67 Performance Curves The array gain of the DSB in the presence of a single interferer is readily obtained by substituting (65) and (67) into (58), whereupon we find $A_{\mathrm{dsb}}(\omega, k_s) = \frac{N^2\left(\sigma_w^2 + M_1\right)}{v_s^H\left(\sigma_w^2 I + M_1 v_1 v_1^H\right) v_s} = \frac{N(1 + \sigma_I^2)}{1 + \sigma_I^2 N |\rho_{s1}|^2}$. The array gains for both the DSB and the optimal MVDR beamformer at various INR levels are plotted in the figure.
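The two closed-form gains can be compared directly. The sketch below uses assumed values throughout (ULA geometry, interferer direction, and a 20 dB INR) and reproduces the qualitative behavior of the performance curves: the MVDR beamformer retains most of the interference-free gain, while the DSB loses most of it.

```python
import numpy as np

N, d_over_lam = 10, 0.5
n = np.arange(N)
v = lambda u: np.exp(-2j * np.pi * d_over_lam * n * u)
v_s, v_1 = v(0.0), v(0.3)                 # desired source and interferer directions
rho = np.vdot(v_s, v_1) / N               # spatial correlation, eq. (69)
sigma_I2 = 100.0                          # INR of 20 dB
# Closed forms for the array gains of the DSB and the MVDR beamformer
A_dsb = N * (1 + sigma_I2) / (1 + sigma_I2 * N * abs(rho)**2)
A_mvdr = (N * (1 + sigma_I2)
          * (1 + N * sigma_I2 * (1 - abs(rho)**2)) / (1 + N * sigma_I2))
```

For these values the DSB gain collapses to a small multiple of $N$, whereas the MVDR gain remains close to the interference-free value $N(1 + \sigma_I^2)$.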

68 Summary In this lecture, we presented the principal components and steps in the Kalman filter. We also discussed how these components can be sequentially updated. In addition, we presented the interpretation of the Kalman filter as either the: optimal minimum mean square error estimator, or optimal Bayesian filter. We also presented two variations of the classic Kalman filter: the probabilistic data association filter; the joint probabilistic data association filter. In addition, we covered the basics of beamforming.


More information

arxiv: v1 [cs.it] 6 Nov 2016

arxiv: v1 [cs.it] 6 Nov 2016 UNIT CIRCLE MVDR BEAMFORMER Saurav R. Tuladhar, John R. Buck University of Massachusetts Dartmouth Electrical and Computer Engineering Department North Dartmouth, Massachusetts, USA arxiv:6.272v [cs.it]

More information

Lecture 2: From Linear Regression to Kalman Filter and Beyond

Lecture 2: From Linear Regression to Kalman Filter and Beyond Lecture 2: From Linear Regression to Kalman Filter and Beyond January 18, 2017 Contents 1 Batch and Recursive Estimation 2 Towards Bayesian Filtering 3 Kalman Filter and Bayesian Filtering and Smoothing

More information

Generalized Sidelobe Canceller and MVDR Power Spectrum Estimation. Bhaskar D Rao University of California, San Diego

Generalized Sidelobe Canceller and MVDR Power Spectrum Estimation. Bhaskar D Rao University of California, San Diego Generalized Sidelobe Canceller and MVDR Power Spectrum Estimation Bhaskar D Rao University of California, San Diego Email: brao@ucsd.edu Reference Books 1. Optimum Array Processing, H. L. Van Trees 2.

More information

Linear Optimum Filtering: Statement

Linear Optimum Filtering: Statement Ch2: Wiener Filters Optimal filters for stationary stochastic models are reviewed and derived in this presentation. Contents: Linear optimal filtering Principle of orthogonality Minimum mean squared error

More information

Adaptive beamforming. Slide 2: Chapter 7: Adaptive array processing. Slide 3: Delay-and-sum. Slide 4: Delay-and-sum, continued

Adaptive beamforming. Slide 2: Chapter 7: Adaptive array processing. Slide 3: Delay-and-sum. Slide 4: Delay-and-sum, continued INF540 202 Adaptive beamforming p Adaptive beamforming Sven Peter Näsholm Department of Informatics, University of Oslo Spring semester 202 svenpn@ifiuiono Office phone number: +47 22840068 Slide 2: Chapter

More information

Linear Dynamical Systems

Linear Dynamical Systems Linear Dynamical Systems Sargur N. srihari@cedar.buffalo.edu Machine Learning Course: http://www.cedar.buffalo.edu/~srihari/cse574/index.html Two Models Described by Same Graph Latent variables Observations

More information

Revision of Lecture 4

Revision of Lecture 4 Revision of Lecture 4 We have completed studying digital sources from information theory viewpoint We have learnt all fundamental principles for source coding, provided by information theory Practical

More information

Single Channel Signal Separation Using MAP-based Subspace Decomposition

Single Channel Signal Separation Using MAP-based Subspace Decomposition Single Channel Signal Separation Using MAP-based Subspace Decomposition Gil-Jin Jang, Te-Won Lee, and Yung-Hwan Oh 1 Spoken Language Laboratory, Department of Computer Science, KAIST 373-1 Gusong-dong,

More information

Lecture 6: Bayesian Inference in SDE Models

Lecture 6: Bayesian Inference in SDE Models Lecture 6: Bayesian Inference in SDE Models Bayesian Filtering and Smoothing Point of View Simo Särkkä Aalto University Simo Särkkä (Aalto) Lecture 6: Bayesian Inference in SDEs 1 / 45 Contents 1 SDEs

More information

Introduction to Unscented Kalman Filter

Introduction to Unscented Kalman Filter Introduction to Unscented Kalman Filter 1 Introdution In many scientific fields, we use certain models to describe the dynamics of system, such as mobile robot, vision tracking and so on. The word dynamics

More information

Kalman filtering and friends: Inference in time series models. Herke van Hoof slides mostly by Michael Rubinstein

Kalman filtering and friends: Inference in time series models. Herke van Hoof slides mostly by Michael Rubinstein Kalman filtering and friends: Inference in time series models Herke van Hoof slides mostly by Michael Rubinstein Problem overview Goal Estimate most probable state at time k using measurement up to time

More information

Lie Groups for 2D and 3D Transformations

Lie Groups for 2D and 3D Transformations Lie Groups for 2D and 3D Transformations Ethan Eade Updated May 20, 2017 * 1 Introduction This document derives useful formulae for working with the Lie groups that represent transformations in 2D and

More information

Machine Learning. A Bayesian and Optimization Perspective. Academic Press, Sergios Theodoridis 1. of Athens, Athens, Greece.

Machine Learning. A Bayesian and Optimization Perspective. Academic Press, Sergios Theodoridis 1. of Athens, Athens, Greece. Machine Learning A Bayesian and Optimization Perspective Academic Press, 2015 Sergios Theodoridis 1 1 Dept. of Informatics and Telecommunications, National and Kapodistrian University of Athens, Athens,

More information

Lecture 7: Optimal Smoothing

Lecture 7: Optimal Smoothing Department of Biomedical Engineering and Computational Science Aalto University March 17, 2011 Contents 1 What is Optimal Smoothing? 2 Bayesian Optimal Smoothing Equations 3 Rauch-Tung-Striebel Smoother

More information

Lecture Notes 4 Vector Detection and Estimation. Vector Detection Reconstruction Problem Detection for Vector AGN Channel

Lecture Notes 4 Vector Detection and Estimation. Vector Detection Reconstruction Problem Detection for Vector AGN Channel Lecture Notes 4 Vector Detection and Estimation Vector Detection Reconstruction Problem Detection for Vector AGN Channel Vector Linear Estimation Linear Innovation Sequence Kalman Filter EE 278B: Random

More information

An Information-Theoretic View of Array Processing

An Information-Theoretic View of Array Processing 392 IEEE TRANSACTIONS ON AUDIO, SPEECH, AND LANGUAGE PROCESSING, VOL. 7, NO. 2, FEBRUARY 2009 An Information-Theoretic View of Array Processing Jacek Dmochowski, Jacob Benesty, and Sofiène Affes Abstract

More information

Image Alignment and Mosaicing Feature Tracking and the Kalman Filter

Image Alignment and Mosaicing Feature Tracking and the Kalman Filter Image Alignment and Mosaicing Feature Tracking and the Kalman Filter Image Alignment Applications Local alignment: Tracking Stereo Global alignment: Camera jitter elimination Image enhancement Panoramic

More information

AN APPROACH TO PREVENT ADAPTIVE BEAMFORMERS FROM CANCELLING THE DESIRED SIGNAL. Tofigh Naghibi and Beat Pfister

AN APPROACH TO PREVENT ADAPTIVE BEAMFORMERS FROM CANCELLING THE DESIRED SIGNAL. Tofigh Naghibi and Beat Pfister AN APPROACH TO PREVENT ADAPTIVE BEAMFORMERS FROM CANCELLING THE DESIRED SIGNAL Tofigh Naghibi and Beat Pfister Speech Processing Group, Computer Engineering and Networks Lab., ETH Zurich, Switzerland {naghibi,pfister}@tik.ee.ethz.ch

More information

Random Processes Handout IV

Random Processes Handout IV RP-IV.1 Random Processes Handout IV CALCULATION OF MEAN AND AUTOCORRELATION FUNCTIONS FOR WSS RPS IN LTI SYSTEMS In the last classes, we calculated R Y (τ) using an intermediate function f(τ) (h h)(τ)

More information

X t = a t + r t, (7.1)

X t = a t + r t, (7.1) Chapter 7 State Space Models 71 Introduction State Space models, developed over the past 10 20 years, are alternative models for time series They include both the ARIMA models of Chapters 3 6 and the Classical

More information

STA 4273H: Statistical Machine Learning

STA 4273H: Statistical Machine Learning STA 4273H: Statistical Machine Learning Russ Salakhutdinov Department of Statistics! rsalakhu@utstat.toronto.edu! http://www.utstat.utoronto.ca/~rsalakhu/ Sidney Smith Hall, Room 6002 Lecture 11 Project

More information

ON THE NOISE REDUCTION PERFORMANCE OF THE MVDR BEAMFORMER IN NOISY AND REVERBERANT ENVIRONMENTS

ON THE NOISE REDUCTION PERFORMANCE OF THE MVDR BEAMFORMER IN NOISY AND REVERBERANT ENVIRONMENTS 0 IEEE International Conference on Acoustic, Speech and Signal Processing (ICASSP) ON THE NOISE REDUCTION PERFORANCE OF THE VDR BEAFORER IN NOISY AND REVERBERANT ENVIRONENTS Chao Pan, Jingdong Chen, and

More information

System Identification and Adaptive Filtering in the Short-Time Fourier Transform Domain

System Identification and Adaptive Filtering in the Short-Time Fourier Transform Domain System Identification and Adaptive Filtering in the Short-Time Fourier Transform Domain Electrical Engineering Department Technion - Israel Institute of Technology Supervised by: Prof. Israel Cohen Outline

More information

ADAPTIVE FILTER THEORY

ADAPTIVE FILTER THEORY ADAPTIVE FILTER THEORY Fifth Edition Simon Haykin Communications Research Laboratory McMaster University Hamilton, Ontario, Canada International Edition contributions by Telagarapu Prabhakar Department

More information

DETECTION theory deals primarily with techniques for

DETECTION theory deals primarily with techniques for ADVANCED SIGNAL PROCESSING SE Optimum Detection of Deterministic and Random Signals Stefan Tertinek Graz University of Technology turtle@sbox.tugraz.at Abstract This paper introduces various methods for

More information

Computer Intensive Methods in Mathematical Statistics

Computer Intensive Methods in Mathematical Statistics Computer Intensive Methods in Mathematical Statistics Department of mathematics johawes@kth.se Lecture 16 Advanced topics in computational statistics 18 May 2017 Computer Intensive Methods (1) Plan of

More information

1 Kalman Filter Introduction

1 Kalman Filter Introduction 1 Kalman Filter Introduction You should first read Chapter 1 of Stochastic models, estimation, and control: Volume 1 by Peter S. Maybec (available here). 1.1 Explanation of Equations (1-3) and (1-4) Equation

More information

Data Mining. Dimensionality reduction. Hamid Beigy. Sharif University of Technology. Fall 1395

Data Mining. Dimensionality reduction. Hamid Beigy. Sharif University of Technology. Fall 1395 Data Mining Dimensionality reduction Hamid Beigy Sharif University of Technology Fall 1395 Hamid Beigy (Sharif University of Technology) Data Mining Fall 1395 1 / 42 Outline 1 Introduction 2 Feature selection

More information

Hierarchy. Will Penny. 24th March Hierarchy. Will Penny. Linear Models. Convergence. Nonlinear Models. References

Hierarchy. Will Penny. 24th March Hierarchy. Will Penny. Linear Models. Convergence. Nonlinear Models. References 24th March 2011 Update Hierarchical Model Rao and Ballard (1999) presented a hierarchical model of visual cortex to show how classical and extra-classical Receptive Field (RF) effects could be explained

More information

EE6604 Personal & Mobile Communications. Week 13. Multi-antenna Techniques

EE6604 Personal & Mobile Communications. Week 13. Multi-antenna Techniques EE6604 Personal & Mobile Communications Week 13 Multi-antenna Techniques 1 Diversity Methods Diversity combats fading by providing the receiver with multiple uncorrelated replicas of the same information

More information

RAO-BLACKWELLISED PARTICLE FILTERS: EXAMPLES OF APPLICATIONS

RAO-BLACKWELLISED PARTICLE FILTERS: EXAMPLES OF APPLICATIONS RAO-BLACKWELLISED PARTICLE FILTERS: EXAMPLES OF APPLICATIONS Frédéric Mustière e-mail: mustiere@site.uottawa.ca Miodrag Bolić e-mail: mbolic@site.uottawa.ca Martin Bouchard e-mail: bouchard@site.uottawa.ca

More information

DIFFUSION-BASED DISTRIBUTED MVDR BEAMFORMER

DIFFUSION-BASED DISTRIBUTED MVDR BEAMFORMER 14 IEEE International Conference on Acoustic, Speech and Signal Processing (ICASSP) DIFFUSION-BASED DISTRIBUTED MVDR BEAMFORMER Matt O Connor 1 and W. Bastiaan Kleijn 1,2 1 School of Engineering and Computer

More information

SIMON FRASER UNIVERSITY School of Engineering Science

SIMON FRASER UNIVERSITY School of Engineering Science SIMON FRASER UNIVERSITY School of Engineering Science Course Outline ENSC 810-3 Digital Signal Processing Calendar Description This course covers advanced digital signal processing techniques. The main

More information

6.4 Kalman Filter Equations

6.4 Kalman Filter Equations 6.4 Kalman Filter Equations 6.4.1 Recap: Auxiliary variables Recall the definition of the auxiliary random variables x p k) and x m k): Init: x m 0) := x0) S1: x p k) := Ak 1)x m k 1) +uk 1) +vk 1) S2:

More information

Kalman Filter Computer Vision (Kris Kitani) Carnegie Mellon University

Kalman Filter Computer Vision (Kris Kitani) Carnegie Mellon University Kalman Filter 16-385 Computer Vision (Kris Kitani) Carnegie Mellon University Examples up to now have been discrete (binary) random variables Kalman filtering can be seen as a special case of a temporal

More information

3F1 Random Processes Examples Paper (for all 6 lectures)

3F1 Random Processes Examples Paper (for all 6 lectures) 3F Random Processes Examples Paper (for all 6 lectures). Three factories make the same electrical component. Factory A supplies half of the total number of components to the central depot, while factories

More information

Data assimilation with and without a model

Data assimilation with and without a model Data assimilation with and without a model Tyrus Berry George Mason University NJIT Feb. 28, 2017 Postdoc supported by NSF This work is in collaboration with: Tim Sauer, GMU Franz Hamilton, Postdoc, NCSU

More information

Virtual Array Processing for Active Radar and Sonar Sensing

Virtual Array Processing for Active Radar and Sonar Sensing SCHARF AND PEZESHKI: VIRTUAL ARRAY PROCESSING FOR ACTIVE SENSING Virtual Array Processing for Active Radar and Sonar Sensing Louis L. Scharf and Ali Pezeshki Abstract In this paper, we describe how an

More information

EE 5407 Part II: Spatial Based Wireless Communications

EE 5407 Part II: Spatial Based Wireless Communications EE 5407 Part II: Spatial Based Wireless Communications Instructor: Prof. Rui Zhang E-mail: rzhang@i2r.a-star.edu.sg Website: http://www.ece.nus.edu.sg/stfpage/elezhang/ Lecture II: Receive Beamforming

More information

Digital Image Processing Lectures 13 & 14

Digital Image Processing Lectures 13 & 14 Lectures 13 & 14, Professor Department of Electrical and Computer Engineering Colorado State University Spring 2013 Properties of KL Transform The KL transform has many desirable properties which makes

More information

Factor Analysis and Kalman Filtering (11/2/04)

Factor Analysis and Kalman Filtering (11/2/04) CS281A/Stat241A: Statistical Learning Theory Factor Analysis and Kalman Filtering (11/2/04) Lecturer: Michael I. Jordan Scribes: Byung-Gon Chun and Sunghoon Kim 1 Factor Analysis Factor analysis is used

More information

Polynomial Matrix Formulation-Based Capon Beamformer

Polynomial Matrix Formulation-Based Capon Beamformer Alzin, Ahmed and Coutts, Fraser K. and Corr, Jamie and Weiss, Stephan and Proudler, Ian K. and Chambers, Jonathon A. (26) Polynomial matrix formulation-based Capon beamformer. In: th IMA International

More information

CS 532: 3D Computer Vision 6 th Set of Notes

CS 532: 3D Computer Vision 6 th Set of Notes 1 CS 532: 3D Computer Vision 6 th Set of Notes Instructor: Philippos Mordohai Webpage: www.cs.stevens.edu/~mordohai E-mail: Philippos.Mordohai@stevens.edu Office: Lieb 215 Lecture Outline Intro to Covariance

More information

Statistical Filtering and Control for AI and Robotics. Part II. Linear methods for regression & Kalman filtering

Statistical Filtering and Control for AI and Robotics. Part II. Linear methods for regression & Kalman filtering Statistical Filtering and Control for AI and Robotics Part II. Linear methods for regression & Kalman filtering Riccardo Muradore 1 / 66 Outline Linear Methods for Regression Gaussian filter Stochastic

More information

Chris Bishop s PRML Ch. 8: Graphical Models

Chris Bishop s PRML Ch. 8: Graphical Models Chris Bishop s PRML Ch. 8: Graphical Models January 24, 2008 Introduction Visualize the structure of a probabilistic model Design and motivate new models Insights into the model s properties, in particular

More information

Lecture 3: Pattern Classification

Lecture 3: Pattern Classification EE E6820: Speech & Audio Processing & Recognition Lecture 3: Pattern Classification 1 2 3 4 5 The problem of classification Linear and nonlinear classifiers Probabilistic classification Gaussians, mixtures

More information

Application of the Tuned Kalman Filter in Speech Enhancement

Application of the Tuned Kalman Filter in Speech Enhancement Application of the Tuned Kalman Filter in Speech Enhancement Orchisama Das, Bhaswati Goswami and Ratna Ghosh Department of Instrumentation and Electronics Engineering Jadavpur University Kolkata, India

More information

Statistics 910, #15 1. Kalman Filter

Statistics 910, #15 1. Kalman Filter Statistics 910, #15 1 Overview 1. Summary of Kalman filter 2. Derivations 3. ARMA likelihoods 4. Recursions for the variance Kalman Filter Summary of Kalman filter Simplifications To make the derivations

More information

Spatial Statistics with Image Analysis. Lecture L08. Computer exercise 3. Lecture 8. Johan Lindström. November 25, 2016

Spatial Statistics with Image Analysis. Lecture L08. Computer exercise 3. Lecture 8. Johan Lindström. November 25, 2016 C3 Repetition Creating Q Spectral Non-grid Spatial Statistics with Image Analysis Lecture 8 Johan Lindström November 25, 216 Johan Lindström - johanl@maths.lth.se FMSN2/MASM25L8 1/39 Lecture L8 C3 Repetition

More information

Kalman Filter. Man-Wai MAK

Kalman Filter. Man-Wai MAK Kalman Filter Man-Wai MAK Dept. of Electronic and Information Engineering, The Hong Kong Polytechnic University enmwmak@polyu.edu.hk http://www.eie.polyu.edu.hk/ mwmak References: S. Gannot and A. Yeredor,

More information

Conventional beamforming

Conventional beamforming INF5410 2012. Conventional beamforming p.1 Conventional beamforming Slide 2: Beamforming Sven Peter Näsholm Department of Informatics, University of Oslo Spring semester 2012 svenpn@ifi.uio.no Office telephone

More information

Direction of Arrival Estimation: Subspace Methods. Bhaskar D Rao University of California, San Diego

Direction of Arrival Estimation: Subspace Methods. Bhaskar D Rao University of California, San Diego Direction of Arrival Estimation: Subspace Methods Bhaskar D Rao University of California, San Diego Email: brao@ucsdedu Reference Books and Papers 1 Optimum Array Processing, H L Van Trees 2 Stoica, P,

More information

Beamforming. A brief introduction. Brian D. Jeffs Associate Professor Dept. of Electrical and Computer Engineering Brigham Young University

Beamforming. A brief introduction. Brian D. Jeffs Associate Professor Dept. of Electrical and Computer Engineering Brigham Young University Beamforming A brief introduction Brian D. Jeffs Associate Professor Dept. of Electrical and Computer Engineering Brigham Young University March 2008 References Barry D. Van Veen and Kevin Buckley, Beamforming:

More information

Time Series Prediction by Kalman Smoother with Cross-Validated Noise Density

Time Series Prediction by Kalman Smoother with Cross-Validated Noise Density Time Series Prediction by Kalman Smoother with Cross-Validated Noise Density Simo Särkkä E-mail: simo.sarkka@hut.fi Aki Vehtari E-mail: aki.vehtari@hut.fi Jouko Lampinen E-mail: jouko.lampinen@hut.fi Abstract

More information

Time Series Analysis

Time Series Analysis Time Series Analysis hm@imm.dtu.dk Informatics and Mathematical Modelling Technical University of Denmark DK-2800 Kgs. Lyngby 1 Outline of the lecture State space models, 1st part: Model: Sec. 10.1 The

More information

ENSC327 Communications Systems 2: Fourier Representations. Jie Liang School of Engineering Science Simon Fraser University

ENSC327 Communications Systems 2: Fourier Representations. Jie Liang School of Engineering Science Simon Fraser University ENSC327 Communications Systems 2: Fourier Representations Jie Liang School of Engineering Science Simon Fraser University 1 Outline Chap 2.1 2.5: Signal Classifications Fourier Transform Dirac Delta Function

More information

Robust Range-rate Estimation of Passive Narrowband Sources in Shallow Water

Robust Range-rate Estimation of Passive Narrowband Sources in Shallow Water Robust Range-rate Estimation of Passive Narrowband Sources in Shallow Water p. 1/23 Robust Range-rate Estimation of Passive Narrowband Sources in Shallow Water Hailiang Tao and Jeffrey Krolik Department

More information

ESTIMATION THEORY. Chapter Estimation of Random Variables

ESTIMATION THEORY. Chapter Estimation of Random Variables Chapter ESTIMATION THEORY. Estimation of Random Variables Suppose X,Y,Y 2,...,Y n are random variables defined on the same probability space (Ω, S,P). We consider Y,...,Y n to be the observed random variables

More information

III.C - Linear Transformations: Optimal Filtering

III.C - Linear Transformations: Optimal Filtering 1 III.C - Linear Transformations: Optimal Filtering FIR Wiener Filter [p. 3] Mean square signal estimation principles [p. 4] Orthogonality principle [p. 7] FIR Wiener filtering concepts [p. 8] Filter coefficients

More information

Dynamic System Identification using HDMR-Bayesian Technique

Dynamic System Identification using HDMR-Bayesian Technique Dynamic System Identification using HDMR-Bayesian Technique *Shereena O A 1) and Dr. B N Rao 2) 1), 2) Department of Civil Engineering, IIT Madras, Chennai 600036, Tamil Nadu, India 1) ce14d020@smail.iitm.ac.in

More information

Lecture Outline. Target Tracking: Lecture 3 Maneuvering Target Tracking Issues. Maneuver Illustration. Maneuver Illustration. Maneuver Detection

Lecture Outline. Target Tracking: Lecture 3 Maneuvering Target Tracking Issues. Maneuver Illustration. Maneuver Illustration. Maneuver Detection REGLERTEKNIK Lecture Outline AUTOMATIC CONTROL Target Tracking: Lecture 3 Maneuvering Target Tracking Issues Maneuver Detection Emre Özkan emre@isy.liu.se Division of Automatic Control Department of Electrical

More information

ECE531 Lecture 8: Non-Random Parameter Estimation

ECE531 Lecture 8: Non-Random Parameter Estimation ECE531 Lecture 8: Non-Random Parameter Estimation D. Richard Brown III Worcester Polytechnic Institute 19-March-2009 Worcester Polytechnic Institute D. Richard Brown III 19-March-2009 1 / 25 Introduction

More information

ENSC327 Communications Systems 2: Fourier Representations. School of Engineering Science Simon Fraser University

ENSC327 Communications Systems 2: Fourier Representations. School of Engineering Science Simon Fraser University ENSC37 Communications Systems : Fourier Representations School o Engineering Science Simon Fraser University Outline Chap..5: Signal Classiications Fourier Transorm Dirac Delta Function Unit Impulse Fourier

More information

Bayesian Inference for DSGE Models. Lawrence J. Christiano

Bayesian Inference for DSGE Models. Lawrence J. Christiano Bayesian Inference for DSGE Models Lawrence J. Christiano Outline State space-observer form. convenient for model estimation and many other things. Bayesian inference Bayes rule. Monte Carlo integation.

More information

RECURSIVE ESTIMATION AND KALMAN FILTERING

RECURSIVE ESTIMATION AND KALMAN FILTERING Chapter 3 RECURSIVE ESTIMATION AND KALMAN FILTERING 3. The Discrete Time Kalman Filter Consider the following estimation problem. Given the stochastic system with x k+ = Ax k + Gw k (3.) y k = Cx k + Hv

More information

The Kalman Filter ImPr Talk

The Kalman Filter ImPr Talk The Kalman Filter ImPr Talk Ged Ridgway Centre for Medical Image Computing November, 2006 Outline What is the Kalman Filter? State Space Models Kalman Filter Overview Bayesian Updating of Estimates Kalman

More information

Multi-Channel Speech Separation with Soft Time-Frequency Masking

Multi-Channel Speech Separation with Soft Time-Frequency Masking Multi-Channel Speech Separation with Soft Time-Frequency Masking Rahil Mahdian Toroghi, Friedrich Faubel, Dietrich Klakow Spoken Language Systems, Saarland University, Saarbrücken, Germany {Rahil.Mahdian,

More information

APPLICATION OF MVDR BEAMFORMING TO SPHERICAL ARRAYS

APPLICATION OF MVDR BEAMFORMING TO SPHERICAL ARRAYS AMBISONICS SYMPOSIUM 29 June 2-27, Graz APPLICATION OF MVDR BEAMFORMING TO SPHERICAL ARRAYS Anton Schlesinger 1, Marinus M. Boone 2 1 University of Technology Delft, The Netherlands (a.schlesinger@tudelft.nl)

More information

SLAM Techniques and Algorithms. Jack Collier. Canada. Recherche et développement pour la défense Canada. Defence Research and Development Canada

SLAM Techniques and Algorithms. Jack Collier. Canada. Recherche et développement pour la défense Canada. Defence Research and Development Canada SLAM Techniques and Algorithms Jack Collier Defence Research and Development Canada Recherche et développement pour la défense Canada Canada Goals What will we learn Gain an appreciation for what SLAM

More information

Probabilistic Graphical Models

Probabilistic Graphical Models 2016 Robert Nowak Probabilistic Graphical Models 1 Introduction We have focused mainly on linear models for signals, in particular the subspace model x = Uθ, where U is a n k matrix and θ R k is a vector

More information

Stochastic Spectral Approaches to Bayesian Inference

Stochastic Spectral Approaches to Bayesian Inference Stochastic Spectral Approaches to Bayesian Inference Prof. Nathan L. Gibson Department of Mathematics Applied Mathematics and Computation Seminar March 4, 2011 Prof. Gibson (OSU) Spectral Approaches to

More information

Linear Models for Regression

Linear Models for Regression Linear Models for Regression Seungjin Choi Department of Computer Science and Engineering Pohang University of Science and Technology 77 Cheongam-ro, Nam-gu, Pohang 37673, Korea seungjin@postech.ac.kr

More information

Signal Design for Band-Limited Channels

Signal Design for Band-Limited Channels Wireless Information Transmission System Lab. Signal Design for Band-Limited Channels Institute of Communications Engineering National Sun Yat-sen University Introduction We consider the problem of signal

More information

9 Multi-Model State Estimation

9 Multi-Model State Estimation Technion Israel Institute of Technology, Department of Electrical Engineering Estimation and Identification in Dynamical Systems (048825) Lecture Notes, Fall 2009, Prof. N. Shimkin 9 Multi-Model State

More information

ENGR352 Problem Set 02

ENGR352 Problem Set 02 engr352/engr352p02 September 13, 2018) ENGR352 Problem Set 02 Transfer function of an estimator 1. Using Eq. (1.1.4-27) from the text, find the correct value of r ss (the result given in the text is incorrect).

More information

INF5410 Array signal processing. Ch. 3: Apertures and Arrays

INF5410 Array signal processing. Ch. 3: Apertures and Arrays INF5410 Array signal processing. Ch. 3: Apertures and Arrays Endrias G. Asgedom Department of Informatics, University of Oslo February 2012 Outline Finite Continuous Apetrures Apertures and Arrays Aperture

More information