Microphone-Array Signal Processing


Microphone-Array Signal Processing
José A. Apolinário Jr. and Marcello L. R. de Campos
{apolin},{mcampos}@ieee.org
IME, Lab. Processamento Digital de Sinais

Outline
1. Introduction and Fundamentals
2. Sensor Arrays and Spatial Filtering
3. Optimal Beamforming
4. Adaptive Beamforming
5. DoA Estimation with Microphone Arrays

1. Introduction and Fundamentals

General concepts
In this course, signals and noise...
- have spatial dependence
- must be characterized as space-time processes
An array of sensors is represented in the figure below.
[Figure: signal #1, signal #2, and interference impinging on an array of sensors followed by an array processor]

1.2 Signals in Space and Time

Defining operators
Let $\nabla_x(\cdot)$ and $\nabla_x^2(\cdot)$ be the gradient and Laplacian operators, i.e.,
$$\nabla_x(\cdot) = \frac{\partial(\cdot)}{\partial x}\,\imath_x + \frac{\partial(\cdot)}{\partial y}\,\imath_y + \frac{\partial(\cdot)}{\partial z}\,\imath_z = \left[\frac{\partial(\cdot)}{\partial x}\ \ \frac{\partial(\cdot)}{\partial y}\ \ \frac{\partial(\cdot)}{\partial z}\right]^T$$
and
$$\nabla_x^2(\cdot) = \frac{\partial^2(\cdot)}{\partial x^2} + \frac{\partial^2(\cdot)}{\partial y^2} + \frac{\partial^2(\cdot)}{\partial z^2},$$
respectively.

Wave equation
From Maxwell's equations,
$$\nabla^2\mathbf{E} = \frac{1}{c^2}\frac{\partial^2\mathbf{E}}{\partial t^2}\qquad\text{(vector wave equation)}$$
or, for $s(\mathbf{x},t)$ a general scalar field,
$$\frac{\partial^2 s}{\partial x^2} + \frac{\partial^2 s}{\partial y^2} + \frac{\partial^2 s}{\partial z^2} = \frac{1}{c^2}\frac{\partial^2 s}{\partial t^2}\qquad\text{(scalar wave equation)}$$
where $c$ is the propagation speed, $\mathbf{E}$ is the electric field intensity, and $\mathbf{x} = [x\ y\ z]^T$ is a position vector.
Note: From this point onwards, the terms wave and field will be used interchangeably.

Monochromatic plane wave
Now assume $s(\mathbf{x},t)$ has a complex exponential form,
$$s(\mathbf{x},t) = A\,e^{j(\omega t - k_x x - k_y y - k_z z)}$$
where $A$ is a complex constant and $k_x$, $k_y$, $k_z$, and $\omega \geq 0$ are real constants.

Monochromatic plane wave
Substituting the complex exponential form of $s(\mathbf{x},t)$ into the wave equation, we have
$$k_x^2\,s(\mathbf{x},t) + k_y^2\,s(\mathbf{x},t) + k_z^2\,s(\mathbf{x},t) = \frac{1}{c^2}\,\omega^2\,s(\mathbf{x},t)$$
or, after canceling $s(\mathbf{x},t)$,
$$k_x^2 + k_y^2 + k_z^2 = \frac{\omega^2}{c^2}$$
which is the constraint to be satisfied by the parameters of the scalar field.

Monochromatic plane wave
From the constraints imposed by the complex exponential form, $s(\mathbf{x},t) = A\,e^{j(\omega t - k_x x - k_y y - k_z z)}$ is
- monochromatic
- plane
For example, take the position at the origin of the coordinate space, $\mathbf{x} = [0\ 0\ 0]^T$:
$$s(\mathbf{0},t) = A\,e^{j\omega t}$$
a complex exponential with the single temporal frequency $\omega$, hence monochromatic.

Monochromatic plane wave
The value of $s(\mathbf{x},t)$ is the same for all points lying on the plane
$$k_x x + k_y y + k_z z = C$$
where $C$ is a constant, hence the wave is plane.

Monochromatic plane wave
Defining the wavenumber vector $\mathbf{k}$ as $\mathbf{k} = [k_x\ k_y\ k_z]^T$, we can rewrite the equation for the monochromatic plane wave as
$$s(\mathbf{x},t) = A\,e^{j(\omega t - \mathbf{k}^T\mathbf{x})}$$
The planes where $s(\mathbf{x},t)$ is constant are perpendicular to the wavenumber vector $\mathbf{k}$.

Monochromatic plane wave
As the plane wave propagates, it advances a distance $\delta\mathbf{x}$ in $\delta t$ seconds. Therefore,
$$s(\mathbf{x},t) = s(\mathbf{x}+\delta\mathbf{x},\,t+\delta t)$$
$$A\,e^{j(\omega t - \mathbf{k}^T\mathbf{x})} = A\,e^{j[\omega(t+\delta t) - \mathbf{k}^T(\mathbf{x}+\delta\mathbf{x})]} \;\Rightarrow\; \omega\,\delta t - \mathbf{k}^T\delta\mathbf{x} = 0$$

Monochromatic plane wave
Naturally, the plane wave propagates in the direction of the wavenumber vector, i.e., $\mathbf{k}$ and $\delta\mathbf{x}$ point in the same direction. Therefore,
$$\mathbf{k}^T\delta\mathbf{x} = \|\mathbf{k}\|\,\|\delta\mathbf{x}\| \;\Rightarrow\; \omega\,\delta t = \|\mathbf{k}\|\,\|\delta\mathbf{x}\|$$
or, equivalently,
$$\frac{\|\delta\mathbf{x}\|}{\delta t} = \frac{\omega}{\|\mathbf{k}\|}$$
Remember the constraints? Since $\|\mathbf{k}\|^2 = \omega^2/c^2$, the wavefront advances at the propagation speed $c$.

Monochromatic plane wave
After $T = 2\pi/\omega$ seconds, the plane wave has completed one cycle and it appears as it did before, but its wavefront has advanced a distance of one wavelength, $\lambda$. For $\|\delta\mathbf{x}\| = \lambda$ and $\delta t = T = 2\pi/\omega$,
$$T = \frac{\lambda\,\|\mathbf{k}\|}{\omega} \;\Rightarrow\; \lambda = \frac{2\pi}{\|\mathbf{k}\|}$$
The wavenumber vector, $\mathbf{k}$, may be considered a spatial frequency variable, just as $\omega$ is a temporal frequency variable.
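As a quick numeric illustration of these relations, here is a minimal Matlab sketch; the propagation speed and frequency below are illustrative assumptions (sound in air), not values from the slides:

% Temporal frequency, wavelength and wavenumber magnitude (illustrative values)
c      = 340;           % propagation speed in air [m/s]
f      = 1e3;           % temporal frequency [Hz]
omega  = 2*pi*f;        % angular frequency [rad/s]
lambda = c/f;           % wavelength lambda = c/f = 0.34 m
k_mag  = 2*pi/lambda;   % |k| = 2*pi/lambda = omega/c, approx. 18.5 rad/m
fprintf('lambda = %.3f m, |k| = %.2f rad/m\n', lambda, k_mag);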

Monochromatic plane wave
We may rewrite the plane wave as
$$s(\mathbf{x},t) = A\,e^{j(\omega t - \mathbf{k}^T\mathbf{x})} = A\,e^{j\omega(t - \boldsymbol{\alpha}^T\mathbf{x})}$$
where $\boldsymbol{\alpha} = \mathbf{k}/\omega$ is the slowness vector. As $c = \omega/\|\mathbf{k}\|$, the vector $\boldsymbol{\alpha}$ has a magnitude which is the reciprocal of $c$.

Propagating periodic waves
Any arbitrary periodic waveform $s(\mathbf{x},t) = s(t - \boldsymbol{\alpha}^T\mathbf{x})$ with fundamental frequency $\omega_0$ can be represented as a sum:
$$s(\mathbf{x},t) = s(t - \boldsymbol{\alpha}^T\mathbf{x}) = \sum_{n=-\infty}^{\infty} S_n\,e^{jn\omega_0(t - \boldsymbol{\alpha}^T\mathbf{x})}$$
The coefficients are given by
$$S_n = \frac{1}{T}\int_0^{T} s(u)\,e^{-jn\omega_0 u}\,du$$

Propagating periodic waves
Based on the previous derivations, we observe that:
- The various components of $s(\mathbf{x},t)$ have different frequencies $\omega = n\omega_0$ and different wavenumber vectors, $\mathbf{k}$.
- The waveform propagates in the direction of the slowness vector $\boldsymbol{\alpha} = \mathbf{k}/\omega$.

Nonperiodic propagating waves
More generally, any waveform that has a well-defined, convergent Fourier transform can be represented as an integral of complex exponentials:
$$s(\mathbf{x},t) = s(t - \boldsymbol{\alpha}^T\mathbf{x}) = \frac{1}{2\pi}\int_{-\infty}^{\infty} S(\omega)\,e^{j\omega(t - \boldsymbol{\alpha}^T\mathbf{x})}\,d\omega$$
where
$$S(\omega) = \int_{-\infty}^{\infty} s(u)\,e^{-j\omega u}\,du$$
We will come back to this later...


2. Sensor Arrays and Spatial Filtering

2.1 Wavenumber-Frequency Space

Space-time Fourier Transform
The four-dimensional Fourier transform of the space-time signal $s(\mathbf{x},t)$ is given by
$$S(\mathbf{k},\omega) = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} s(\mathbf{x},t)\,e^{-j(\omega t - \mathbf{k}^T\mathbf{x})}\,d\mathbf{x}\,dt$$
where $\omega$ is the temporal frequency and $\mathbf{k}$ is the spatial frequency (wavenumber).

Space-time Fourier Transform
The corresponding inverse transform is
$$s(\mathbf{x},t) = \frac{1}{(2\pi)^4}\int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} S(\mathbf{k},\omega)\,e^{j(\omega t - \mathbf{k}^T\mathbf{x})}\,d\mathbf{k}\,d\omega$$

Space-time Fourier Transform
We have already concluded that, if the space-time signal is a propagating waveform such that $s(\mathbf{x},t) = s(t - \boldsymbol{\alpha}_0^T\mathbf{x})$, then its Fourier transform is equal to
$$S(\mathbf{k},\omega) = S(\omega)\,\delta(\mathbf{k} - \omega\boldsymbol{\alpha}_0)$$
Remember the nonperiodic propagating wave Fourier transform? This means that $s(\mathbf{x},t)$ only has energy along the direction $\mathbf{k} = \mathbf{k}_0 = \omega\boldsymbol{\alpha}_0$ in the wavenumber-frequency space.

2.2 Frequency-Wavenumber (WN) Response and Beam Patterns (BP)

Signals at the sensors
An array is a set of $N$ (isotropic) sensors located at positions $\mathbf{p}_n$, $n = 0,1,\ldots,N-1$.
The sensors spatially sample the signal field at the locations $\mathbf{p}_n$.
At the sensors, the set of $N$ signals is denoted by
$$\mathbf{f}(t,\mathbf{p}) = \begin{bmatrix} f(t,\mathbf{p}_0)\\ f(t,\mathbf{p}_1)\\ \vdots\\ f(t,\mathbf{p}_{N-1}) \end{bmatrix}$$

Array output
[Figure: each sensor signal $f(t,\mathbf{p}_n)$ is filtered by $h_n(t)$ and the filter outputs are summed to produce $y(t)$]
$$y(t) = \sum_{n=0}^{N-1}\int_{-\infty}^{\infty} h_n(t-\tau)\,f_n(\tau,\mathbf{p}_n)\,d\tau = \int_{-\infty}^{\infty} \mathbf{h}^T(t-\tau)\,\mathbf{f}(\tau,\mathbf{p})\,d\tau$$
where $\mathbf{h}(t) = [h_0(t)\ h_1(t)\ \cdots\ h_{N-1}(t)]^T$.

In the frequency domain,
$$Y(\omega) = \int_{-\infty}^{\infty} y(t)\,e^{-j\omega t}\,dt = \mathbf{H}^T(\omega)\,\mathbf{F}(\omega)$$
where
$$\mathbf{H}(\omega) = \int_{-\infty}^{\infty}\mathbf{h}(t)\,e^{-j\omega t}\,dt \qquad\text{and}\qquad \mathbf{F}(\omega) = \mathbf{F}(\omega,\mathbf{p}) = \int_{-\infty}^{\infty}\mathbf{f}(t,\mathbf{p})\,e^{-j\omega t}\,dt$$

Plane wave propagating
Consider a plane wave propagating in the direction of vector $\mathbf{a}$:
$$\mathbf{a} = \begin{bmatrix}\sin\theta\cos\phi\\ \sin\theta\sin\phi\\ \cos\theta\end{bmatrix}$$
If $f(t)$ is the signal that would be received at the origin, then:
$$\mathbf{f}(t,\mathbf{p}) = \begin{bmatrix} f(t-\tau_0)\\ f(t-\tau_1)\\ \vdots\\ f(t-\tau_{N-1}) \end{bmatrix}$$

Plane wave (assuming $\phi = 90^{\circ}$)
[Figure: a plane wave with unit direction vector $\mathbf{a}$ (also written $\mathbf{u}$) in the y-z plane; a sensor sits at $\mathbf{p}_n$, $\alpha$ is the angle between $\mathbf{p}_n$ and $\mathbf{u}$, and $e$ is the distance the wavefront travels from the sensor to the origin]
What is $e$? Since $e = c\,\tau_n$, we have $\tau_n = e/c$. But
$$e = \|\mathbf{p}_n\|\cos\alpha = \underbrace{\|\mathbf{u}\|}_{=1}\,\|\mathbf{p}_n\|\cos\alpha = \mathbf{u}^T\mathbf{p}_n$$
so that
$$\tau_n = \frac{\mathbf{u}^T\mathbf{p}_n}{c} = \frac{\mathbf{a}^T\mathbf{p}_n}{c}$$
is the time since the plane wave hits the sensor at location $\mathbf{p}_n$ until it reaches the point $(0,0)$.
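A minimal Matlab sketch of the delay computation $\tau_n = \mathbf{a}^T\mathbf{p}_n/c$ follows; the sensor positions, angles, and propagation speed are illustrative assumptions, not values from the slides:

% Propagation delays tau_n = a'*p_n/c for a plane wave from direction (theta, phi)
c     = 340;                                 % speed of sound [m/s]
theta = 60*pi/180;  phi = 90*pi/180;         % polar and azimuth angles [rad]
a     = [sin(theta)*cos(phi); sin(theta)*sin(phi); cos(theta)];
N     = 4;  d = 0.05;                        % 4 sensors, 5-cm spacing along z
P     = [zeros(2,N); ((0:N-1)-(N-1)/2)*d];   % each column is a position p_n
tau   = (a.'*P)/c;                           % 1 x N vector of delays [s]
disp(tau*1e6)                                % delays in microseconds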

Back to the frequency domain
Then, we have:
$$\mathbf{F}(\omega) = \begin{bmatrix} \int_{-\infty}^{\infty} e^{-j\omega t} f(t-\tau_0)\,dt\\ \int_{-\infty}^{\infty} e^{-j\omega t} f(t-\tau_1)\,dt\\ \vdots\\ \int_{-\infty}^{\infty} e^{-j\omega t} f(t-\tau_{N-1})\,dt \end{bmatrix} = \begin{bmatrix} e^{-j\omega\tau_0}\\ e^{-j\omega\tau_1}\\ \vdots\\ e^{-j\omega\tau_{N-1}} \end{bmatrix} F(\omega)$$

Definition of Wavenumber
For plane waves propagating in a locally homogeneous medium, the wavenumber vector ("spatial frequency") is
$$\mathbf{k} = \frac{\omega}{c}\,\mathbf{a} = \frac{2\pi}{c/f}\,\mathbf{a} = \frac{2\pi}{\lambda}\,\mathbf{a} = \frac{2\pi}{\lambda}\,\mathbf{u}$$
Note that $\|\mathbf{k}\| = 2\pi/\lambda$. Therefore,
$$\omega\tau_n = \frac{\omega}{c}\,\mathbf{a}^T\mathbf{p}_n = \mathbf{k}^T\mathbf{p}_n$$

Array Manifold Vector
And we have
$$\mathbf{F}(\omega) = \begin{bmatrix} e^{-j\mathbf{k}^T\mathbf{p}_0}\\ e^{-j\mathbf{k}^T\mathbf{p}_1}\\ \vdots\\ e^{-j\mathbf{k}^T\mathbf{p}_{N-1}} \end{bmatrix} F(\omega) = F(\omega)\,\mathbf{v_k}(\mathbf{k})$$
where $\mathbf{v_k}(\mathbf{k})$ is the array manifold vector.
In this particular example, we can use $h_n(t) = \frac{1}{N}\,\delta(t+\tau_n)$ such that $y(t) = f(t)$. This leads to the delay-and-sum beamformer, presented next.

Delay-and-sum Beamformer
[Figure: each sensor signal $f(t-\tau_n)$ is advanced by $\tau_n$, and the results are summed and scaled by $1/N$ to produce $y(t)$]
A common delay is added in each channel to make the operations physically realizable.
Since
$$\mathcal{F}\{h_n(t)\} = \mathcal{F}\left\{\tfrac{1}{N}\,\delta(t+\tau_n)\right\} = \tfrac{1}{N}\,e^{j\omega\tau_n},$$
we can write
$$\mathbf{H}^T(\omega) = \tfrac{1}{N}\,\mathbf{v_k}^H(\mathbf{k})$$
where $\mathbf{v_k}(\mathbf{k})$ is the array manifold vector.
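A minimal discrete-time sketch of the delay-and-sum idea, assuming integer-sample delays; the sampling rate, geometry, source direction, and noise level are illustrative assumptions, not values from the slides:

% Delay-and-sum beamforming with integer-sample delays (illustrative values)
fs = 16e3;  c = 340;  d = 0.05;  N = 4;
theta = 60*pi/180;                         % arrival angle w.r.t. the array (z) axis
pz  = ((0:N-1)-(N-1)/2)*d;                 % ULA element positions along z [m]
tau = cos(theta)*pz/c;                     % per-sensor delays tau_n = a'*p_n/c
D   = round(tau*fs);                       % delays rounded to whole samples
t   = (0:799)/fs;
f   = sin(2*pi*500*t);                     % signal that would be received at the origin
x   = zeros(N, numel(t));
for n = 1:N                                % delayed (and noisy) sensor signals
    x(n,:) = circshift(f, [0 D(n)]) + 0.05*randn(1, numel(t));
end
y = zeros(1, numel(t));
for n = 1:N                                % time-align each channel and average
    y = y + circshift(x(n,:), [0 -D(n)])/N;
end

Averaging the time-aligned channels reinforces the signal arriving from the steered direction, while uncorrelated sensor noise is attenuated by roughly a factor of N in power.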

LTI System
For an LTI system with impulse response $h(t)$, the input $e^{j\omega t}$ produces the output $H(\omega)\,e^{j\omega t}$.
Space-time signals (basis functions):
$$f_n(t,\mathbf{p}_n) = e^{j\omega(t-\tau_n)} = e^{j(\omega t - \mathbf{k}^T\mathbf{p}_n)} \qquad\Rightarrow\qquad \mathbf{f}(t,\mathbf{p}) = e^{j\omega t}\,\mathbf{v_k}(\mathbf{k})$$
Note that $\omega\tau_n = \mathbf{k}^T\mathbf{p}_n$.

Frequency-Wavenumber Response Function
The response of the array to this plane wave is:
$$y(t,\mathbf{k}) = \mathbf{H}^T(\omega)\,\mathbf{v_k}(\mathbf{k})\,e^{j\omega t}$$
After taking the Fourier transform, we have:
$$Y(\omega,\mathbf{k}) = \mathbf{H}^T(\omega)\,\mathbf{v_k}(\mathbf{k})$$
And we define the frequency-wavenumber response function:
$$\Upsilon(\omega,\mathbf{k}) \triangleq \mathbf{H}^T(\omega)\,\mathbf{v_k}(\mathbf{k})$$
$\Upsilon(\omega,\mathbf{k})$ describes the complex gain of the array to an input plane wave with wavenumber $\mathbf{k}$ and temporal frequency $\omega$.

Beam Pattern and Bandpass Signal
The BEAM PATTERN is the frequency-wavenumber response function evaluated versus direction:
$$B(\omega : \theta,\phi) = \Upsilon(\omega,\mathbf{k})\big|_{\mathbf{k} = \frac{2\pi}{\lambda}\mathbf{a}(\theta,\phi)}$$
where $\mathbf{a}(\theta,\phi)$ is the unit vector with spherical coordinate angles $\theta$ and $\phi$.
Let us write a bandpass signal:
$$f(t,\mathbf{p}_n) = 2\,\mathrm{Re}\{\tilde f(t,\mathbf{p}_n)\,e^{j\omega_c t}\},\qquad n = 0,1,\ldots,N-1$$
where $\omega_c$ corresponds to the carrier frequency and the complex envelope $\tilde f(t,\mathbf{p}_n)$ is bandlimited to the region $|\omega_L| = |\omega - \omega_c| \le 2\pi B_s/2$.

Bandlimited and Narrowband Signals
Bandlimited plane wave:
$$f(t,\mathbf{p}_n) = 2\,\mathrm{Re}\{\tilde f(t-\tau_n)\,e^{j\omega_c(t-\tau_n)}\},\qquad n = 0,1,\ldots,N-1$$
Maximum travel time ($\Delta T_{\max}$) across the (linear) array: the travel time between the two sensors at the extremities (signal arriving along the end-fire direction).
Assuming the origin is at the array's center of gravity, $\sum_{n=0}^{N-1}\mathbf{p}_n = \mathbf{0}$ and $|\tau_n| \le \Delta T_{\max}$.
For narrowband (NB) signals, $B_s\,\Delta T_{\max} \ll 1$, so that $\tilde f(t-\tau_n) \approx \tilde f(t)$ and
$$f(t,\mathbf{p}_n) = 2\,\mathrm{Re}\{\tilde f(t)\,e^{-j\omega_c\tau_n}\,e^{j\omega_c t}\}$$
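A quick numeric check of the narrowband condition $B_s\,\Delta T_{\max}\ll 1$; the array length and bandwidth below are illustrative assumptions, not values from the slides:

% Narrowband check: Bs * DTmax << 1 (illustrative numbers)
c     = 340;            % speed of sound [m/s]
L     = 0.15;           % total array length [m]
DTmax = L/c;            % maximum travel time across the array (end-fire) [s]
Bs    = 200;            % signal bandwidth [Hz]
fprintf('Bs*DTmax = %.3f (should be << 1)\n', Bs*DTmax);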

Phased Array
For NB signals, the delay is approximated by a phase shift: the delay-and-sum beamformer becomes a PHASED ARRAY.
[Figure: each sensor signal $\tilde f(t)\,e^{-j\omega_c\tau_n}$ is multiplied by $e^{+j\omega_c\tau_n}$, and the results are summed and scaled by $1/N$ to produce $y(t)$]
The phased array can be implemented by adjusting the gain and phase of each channel to achieve a desired beam pattern.

NB Beamformers
In narrowband beamformers:
$$y(t,\mathbf{k}) = \mathbf{w}^H\,\mathbf{v_k}(\mathbf{k})\,e^{j\omega t}$$
[Figure: each sensor signal $f(t-\tau_n)$ is weighted by $w_n^*$, producing $y_n(t)$, and the weighted outputs are summed into $y(t)$]
$$\Upsilon(\omega,\mathbf{k}) = \underbrace{\mathbf{w}^H}_{\mathbf{H}^T(\omega)}\,\mathbf{v_k}(\mathbf{k})$$

2.3 Uniform Linear Arrays (ULA)

Uniformly Spaced Linear Arrays
[Figure: array geometry with the z axis as the array axis ("endfire" direction); $\theta$ is the grazing angle measured from the array axis, $\bar\theta = \pi/2 - \theta$ is the broadside angle, $d$ is the inter-element spacing, and $\phi$ is the azimuth angle measured in the x-y plane]

An ULA along the z axis:
[Figure: $N$-element ULA along z with inter-element spacing $d$; $\theta$ is the polar angle and $\phi$ the azimuth angle]
Location of the elements:
$$p_{z_n} = \left(n - \frac{N-1}{2}\right)d,\qquad p_{x_n} = p_{y_n} = 0,\qquad n = 0,1,\ldots,N-1$$
Therefore,
$$\mathbf{p}_n = \begin{bmatrix} 0\\ 0\\ \left(n - \frac{N-1}{2}\right)d \end{bmatrix}$$

ULA
Array manifold vector: $\mathbf{v_k}(\mathbf{k}) = [\,e^{-j\mathbf{k}^T\mathbf{p}_0}\ \ e^{-j\mathbf{k}^T\mathbf{p}_1}\ \cdots\ e^{-j\mathbf{k}^T\mathbf{p}_{N-1}}\,]^T$
For the ULA, it depends only on $k_z$:
$$\mathbf{v_k}(\mathbf{k}) = \mathbf{v_k}(k_z) = \begin{bmatrix} e^{+j\frac{N-1}{2}k_z d}\\ e^{+j\left(\frac{N-1}{2}-1\right)k_z d}\\ \vdots\\ e^{-j\frac{N-1}{2}k_z d} \end{bmatrix}$$
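A minimal Matlab sketch that builds the ULA manifold vector above and evaluates the uniform-weight (delay-and-sum) beam pattern $|\mathbf{w}^H\mathbf{v_k}(k_z)|$ over the polar angle; $N$, $d$, and $f$ are illustrative choices, not values from the slides:

% ULA manifold vector and uniform-weight beam pattern (illustrative values)
N = 8;  d = 0.04;  c = 340;  f = 2e3;
lambda = c/f;
theta  = linspace(0, pi, 361);           % polar angle (0 = endfire)
kz     = (2*pi/lambda)*cos(theta);       % z-component of the wavenumber
n      = (0:N-1).' - (N-1)/2;            % centered element indices
V      = exp(-1j*(n*d)*kz);              % N x 361 matrix of manifold vectors
w      = ones(N,1)/N;                    % uniform (delay-and-sum) weights
B      = w'*V;                           % beam pattern B(theta) = w^H v(kz)
plot(theta*180/pi, 20*log10(abs(B))); grid on;
xlabel('\theta [deg]'); ylabel('|B| [dB]');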


3. Optimal Beamforming

3.1 Introduction

Introduction
Scope: use the statistical representation of signal and noise to design array processors that are optimal in a statistical sense.
We assume that the appropriate statistics are known.
Our objective is to estimate the waveform of a plane wave impinging on the array in the presence of noise and interfering signals.
Even if a particular beamformer developed in this chapter has good performance, this does not guarantee that its adaptive version (next chapter) will. However, if the performance is poor, it is unlikely that the adaptive version will be useful.

3.2 Optimal Beamformers

Snapshot model in the frequency domain: MVDR Beamformer
In many applications, we implement the beamformer in the frequency domain, at the frequencies $\omega_m = \omega_c + m\,\frac{2\pi}{\Delta T}$, where $m$ runs from $-\frac{M-1}{2}$ to $\frac{M-1}{2}$ if $M$ is odd and from $-\frac{M}{2}$ to $\frac{M}{2}-1$ if $M$ is even.
[Figure: the input $\mathbf{X}(t)$ is Fourier transformed at $M$ frequencies; each frequency-domain snapshot $\mathbf{X}_{\Delta T}(\omega_m,k)$ is processed by a narrowband beamformer producing $Y_{\Delta T}(\omega_m,k)$, and an inverse discrete Fourier transform yields $y(t)$]
In order to generate these vectors, divide the observation interval $T$ into $K$ disjoint intervals of duration $\Delta T$: $(k-1)\Delta T \le t < k\Delta T$, $k = 1,\ldots,K$.

MVDR Beamformer
$\Delta T$ must be significantly greater than the propagation time across the array.
$\Delta T$ also depends on the bandwidth of the input signal.
Assume an input signal with bandwidth $B_s$ centered at $f_c$.
In order to develop the frequency-domain snapshot model for the case in which the desired and interfering signals can be modeled as plane waves, we have two cases: the desired signals are deterministic, or they are samples of a random process.

MVDR Beamformer
Let us assume the case where the signal is nonrandom but unknown; we initially consider the case of a single plane-wave signal.
The frequency-domain snapshot consists of signal plus noise:
$$\mathbf{X}(\omega) = \mathbf{X}_s(\omega) + \mathbf{N}(\omega)$$
The signal vector can be written as
$$\mathbf{X}_s(\omega) = F(\omega)\,\mathbf{v}(\omega : \mathbf{k}_s)$$
where $F(\omega)$ is the frequency-domain snapshot of the source signal and $\mathbf{v}(\omega : \mathbf{k}_s)$ is the array manifold vector for a plane wave with wavenumber $\mathbf{k}_s$.
The noise snapshot is a zero-mean random vector $\mathbf{N}(\omega)$ with spectral matrix given by
$$\mathbf{S_n}(\omega) = \mathbf{S_c}(\omega) + \sigma_w^2\,\mathbf{I}$$

MVDR Beamformer
We process $\mathbf{X}(\omega)$ with the $1\times N$ operator $\mathbf{W}^H(\omega)$:
$$Y(\omega) = \mathbf{W}^H(\omega)\,\mathbf{X}(\omega)$$
Distortionless criterion (in the absence of noise):
$$Y(\omega) = F(\omega) = \mathbf{W}^H(\omega)\,\mathbf{X}_s(\omega) = F(\omega)\,\mathbf{W}^H(\omega)\,\mathbf{v}(\omega : \mathbf{k}_s) \;\Rightarrow\; \mathbf{W}^H(\omega)\,\mathbf{v}(\omega : \mathbf{k}_s) = 1$$

MVDR Beamformer
In the presence of noise, we have:
$$Y(\omega) = F(\omega) + Y_n(\omega)$$
The mean square of the output noise is:
$$E\left[|Y_n(\omega)|^2\right] = \mathbf{W}^H(\omega)\,\mathbf{S_n}(\omega)\,\mathbf{W}(\omega)$$

MVDR Beamformer
In the MVDR beamformer, we want to minimize $E\left[|Y_n(\omega)|^2\right]$ subject to $\mathbf{W}^H(\omega)\,\mathbf{v}(\omega : \mathbf{k}_s) = 1$.
Using the method of Lagrange multipliers, we define the following cost function to be minimized:
$$F = \mathbf{W}^H(\omega)\,\mathbf{S_n}(\omega)\,\mathbf{W}(\omega) + \lambda\left[\mathbf{W}^H(\omega)\,\mathbf{v}(\omega : \mathbf{k}_s) - 1\right] + \lambda^*\left[\mathbf{v}^H(\omega : \mathbf{k}_s)\,\mathbf{W}(\omega) - 1\right]$$
...and the result (suppressing $\omega$ and $\mathbf{k}_s$) is
$$\mathbf{W}^H_{\mathrm{mvdr}} = \Lambda_s\,\mathbf{v}^H\,\mathbf{S_n}^{-1}, \qquad \Lambda_s = \left[\mathbf{v}^H\,\mathbf{S_n}^{-1}\,\mathbf{v}\right]^{-1}$$
This result is referred to as the MVDR or Capon beamformer.
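A small numeric sketch of the MVDR solution $\mathbf{W}^H_{\mathrm{mvdr}} = \Lambda_s\,\mathbf{v}^H\mathbf{S_n}^{-1}$ for a ULA; the array, look direction, interferer, and noise spectral matrix below are illustrative assumptions, not values from the slides:

% MVDR (Capon) weights for a ULA (illustrative values)
N = 8;  d = 0.04;  lambda = 0.17;
n  = (0:N-1).' - (N-1)/2;
vs = exp(-1j*(2*pi/lambda)*cos(70*pi/180)*n*d);   % look-direction manifold vector v
vi = exp(-1j*(2*pi/lambda)*cos(110*pi/180)*n*d);  % interferer manifold vector
Sn = 10*(vi*vi') + 0.1*eye(N);                    % noise: interferer plus white noise
W  = (Sn\vs) / (vs'*(Sn\vs));                     % W = Sn^-1 v * [v^H Sn^-1 v]^-1
disp(W'*vs)                                       % distortionless check: equals 1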

Constrained Optimal Filtering
The gradient of $\xi$ with respect to $\mathbf{w}$ (real case):
$$\nabla_{\mathbf{w}}\xi = \left[\frac{\partial\xi}{\partial w_0}\ \ \frac{\partial\xi}{\partial w_1}\ \cdots\ \frac{\partial\xi}{\partial w_{N-1}}\right]^T$$
From the definition above, it is easy to show that:
$$\nabla_{\mathbf{w}}(\mathbf{b}^T\mathbf{w}) = \nabla_{\mathbf{w}}(\mathbf{w}^T\mathbf{b}) = \mathbf{b}$$
Also,
$$\nabla_{\mathbf{w}}(\mathbf{w}^T\mathbf{R}\mathbf{w}) = \mathbf{R}^T\mathbf{w} + \mathbf{R}\mathbf{w}$$
which, when $\mathbf{R}$ is symmetric, leads to $\nabla_{\mathbf{w}}(\mathbf{w}^T\mathbf{R}\mathbf{w}) = 2\mathbf{R}\mathbf{w}$.

Constrained Optimal Filtering
We now assume the complex case $\mathbf{w} = \mathbf{a} + j\mathbf{b}$. The gradient becomes
$$\nabla_{\mathbf{w}}\xi = \left[\frac{\partial\xi}{\partial a_0} + j\frac{\partial\xi}{\partial b_0}\ \ \cdots\ \ \frac{\partial\xi}{\partial a_{N-1}} + j\frac{\partial\xi}{\partial b_{N-1}}\right]^T$$
which corresponds to $\nabla_{\mathbf{w}}\xi = \nabla_{\mathbf{a}}\xi + j\,\nabla_{\mathbf{b}}\xi$.
Let us define the derivative $\frac{\partial}{\partial\mathbf{w}}$ (with respect to $\mathbf{w}$):
$$\frac{\partial}{\partial\mathbf{w}} = \frac{1}{2}\left[\frac{\partial}{\partial a_0} - j\frac{\partial}{\partial b_0}\ \ \cdots\ \ \frac{\partial}{\partial a_{N-1}} - j\frac{\partial}{\partial b_{N-1}}\right]^T$$

Constrained Optimal Filtering
The conjugate derivative, with respect to $\mathbf{w}^*$, is
$$\frac{\partial}{\partial\mathbf{w}^*} = \frac{1}{2}\left[\frac{\partial}{\partial a_0} + j\frac{\partial}{\partial b_0}\ \ \cdots\ \ \frac{\partial}{\partial a_{N-1}} + j\frac{\partial}{\partial b_{N-1}}\right]^T$$
Therefore, $\nabla_{\mathbf{w}}\xi = \nabla_{\mathbf{a}}\xi + j\,\nabla_{\mathbf{b}}\xi$ is equivalent to $2\,\frac{\partial\xi}{\partial\mathbf{w}^*}$.
The complex gradient may be slightly tricky compared with the simple real gradient. For this reason, we exemplify its use by calculating $\nabla_{\mathbf{w}}E\left[|e(k)|^2\right]$:
$$\nabla_{\mathbf{w}}E[e(k)e^*(k)] = E\{e^*(k)[\nabla_{\mathbf{w}}e(k)] + e(k)[\nabla_{\mathbf{w}}e^*(k)]\}$$

Constrained Optimal Filtering
We compute each gradient...
$$\nabla_{\mathbf{w}}e(k) = \nabla_{\mathbf{a}}[d(k) - \mathbf{w}^H\mathbf{x}(k)] + j\,\nabla_{\mathbf{b}}[d(k) - \mathbf{w}^H\mathbf{x}(k)] = -\mathbf{x}(k) - \mathbf{x}(k) = -2\mathbf{x}(k)$$
and
$$\nabla_{\mathbf{w}}e^*(k) = \nabla_{\mathbf{a}}[d^*(k) - \mathbf{w}^T\mathbf{x}^*(k)] + j\,\nabla_{\mathbf{b}}[d^*(k) - \mathbf{w}^T\mathbf{x}^*(k)] = -\mathbf{x}^*(k) + \mathbf{x}^*(k) = \mathbf{0}$$
such that the final result is
$$\nabla_{\mathbf{w}}E[e(k)e^*(k)] = -2E[e^*(k)\mathbf{x}(k)] = -2E\{\mathbf{x}(k)[d(k) - \mathbf{w}^H\mathbf{x}(k)]^*\} = -2\underbrace{E[\mathbf{x}(k)d^*(k)]}_{\mathbf{p}} + 2\underbrace{E[\mathbf{x}(k)\mathbf{x}^H(k)]}_{\mathbf{R}}\,\mathbf{w}$$

Constrained Optimal Filtering
Setting this gradient to zero yields the Wiener solution $\mathbf{w} = \mathbf{R}^{-1}\mathbf{p}$.
When a set of linear constraints involving the coefficient vector of an adaptive filter is imposed, the resulting linearly constrained adaptive filtering (LCAF) problem with the MSE as the objective function can be stated as: minimize $E\left[|e(k)|^2\right]$ subject to $\mathbf{C}^H\mathbf{w} = \mathbf{f}$.
The output of the processor is $y(k) = \mathbf{w}^H\mathbf{x}(k)$.
It is worth mentioning that the most general case corresponds to having a reference signal, $d(k)$. It is, however, usual to have no reference signal, as in linearly constrained minimum-variance (LCMV) applications. In LCMV, if $\mathbf{f} = 1$, the system is often referred to as the minimum-variance distortionless response (MVDR) beamformer.

Constrained Optimal Filtering
Using Lagrange multipliers, we form
$$\xi(k) = E[e(k)e^*(k)] + \mathbf{L}_R^T\,\mathrm{Re}[\mathbf{C}^H\mathbf{w} - \mathbf{f}] + \mathbf{L}_I^T\,\mathrm{Im}[\mathbf{C}^H\mathbf{w} - \mathbf{f}]$$
We can also represent the above expression with a complex $\mathbf{L}$ given by $\mathbf{L}_R + j\mathbf{L}_I$, such that
$$\xi(k) = E[e(k)e^*(k)] + \mathrm{Re}[\mathbf{L}^H(\mathbf{C}^H\mathbf{w} - \mathbf{f})] = E[e(k)e^*(k)] + \tfrac{1}{2}\mathbf{L}^H(\mathbf{C}^H\mathbf{w} - \mathbf{f}) + \tfrac{1}{2}\mathbf{L}^T(\mathbf{C}^T\mathbf{w}^* - \mathbf{f}^*)$$
Noting that $e(k) = d(k) - \mathbf{w}^H\mathbf{x}(k)$, we compute:
$$\nabla_{\mathbf{w}}\xi(k) = E[-2\mathbf{x}(k)e^*(k)] + \mathbf{0} + \mathbf{C}\mathbf{L} = -2E[\mathbf{x}(k)d^*(k)] + 2E[\mathbf{x}(k)\mathbf{x}^H(k)]\,\mathbf{w} + \mathbf{C}\mathbf{L}$$

Constrained Optimal Filtering
Using $\mathbf{R} = E[\mathbf{x}(k)\mathbf{x}^H(k)]$ and $\mathbf{p} = E[d^*(k)\mathbf{x}(k)]$, the gradient is equated to zero and the result can be written as (note that stationarity was assumed for the input and reference signals):
$$-2\mathbf{p} + 2\mathbf{R}\mathbf{w} + \mathbf{C}\mathbf{L} = \mathbf{0}$$
which leads to
$$\mathbf{w} = \tfrac{1}{2}\,\mathbf{R}^{-1}(2\mathbf{p} - \mathbf{C}\mathbf{L})$$
If we pre-multiply the previous expression by $\mathbf{C}^H$ and use $\mathbf{C}^H\mathbf{w} = \mathbf{f}$, we find $\mathbf{L}$:
$$\mathbf{L} = 2(\mathbf{C}^H\mathbf{R}^{-1}\mathbf{C})^{-1}(\mathbf{C}^H\mathbf{R}^{-1}\mathbf{p} - \mathbf{f})$$
By replacing $\mathbf{L}$, we obtain the Wiener solution for the linearly constrained adaptive filter:
$$\mathbf{w}_{\mathrm{opt}} = \mathbf{R}^{-1}\mathbf{p} + \mathbf{R}^{-1}\mathbf{C}(\mathbf{C}^H\mathbf{R}^{-1}\mathbf{C})^{-1}(\mathbf{f} - \mathbf{C}^H\mathbf{R}^{-1}\mathbf{p})$$

Constrained Optimal Filtering
The optimal solution for the LCAF:
$$\mathbf{w}_{\mathrm{opt}} = \mathbf{R}^{-1}\mathbf{p} + \mathbf{R}^{-1}\mathbf{C}(\mathbf{C}^H\mathbf{R}^{-1}\mathbf{C})^{-1}(\mathbf{f} - \mathbf{C}^H\mathbf{R}^{-1}\mathbf{p})$$
Note that if $d(k) = 0$, then $\mathbf{p} = \mathbf{0}$, and we have (LCMV):
$$\mathbf{w}_{\mathrm{opt}} = \mathbf{R}^{-1}\mathbf{C}(\mathbf{C}^H\mathbf{R}^{-1}\mathbf{C})^{-1}\mathbf{f}$$
Still with $d(k) = 0$ but $\mathbf{f} = 1$ (MVDR):
$$\mathbf{w}_{\mathrm{opt}} = \mathbf{R}^{-1}\mathbf{C}(\mathbf{C}^H\mathbf{R}^{-1}\mathbf{C})^{-1}$$
For this case, $d(k) = 0$, the cost function is termed the minimum output energy (MOE) and is given by $E\left[|e(k)|^2\right] = \mathbf{w}^H\mathbf{R}\mathbf{w}$.
Also note that, in case we do not have constraints ($\mathbf{C}$ and $\mathbf{f}$ are null), the optimal solution above becomes the unconstrained Wiener solution $\mathbf{R}^{-1}\mathbf{p}$.
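A small numeric sketch of the LCMV/MVDR solution $\mathbf{w}_{\mathrm{opt}} = \mathbf{R}^{-1}\mathbf{C}(\mathbf{C}^H\mathbf{R}^{-1}\mathbf{C})^{-1}\mathbf{f}$ with a single distortionless constraint; the covariance matrix and directions below are illustrative assumptions, not values from the slides:

% LCMV / MVDR weights w_opt = R^-1 C (C^H R^-1 C)^-1 f (illustrative data)
N  = 8;  d = 0.04;  lambda = 0.17;
n  = (0:N-1).' - (N-1)/2;
vs = exp(-1j*(2*pi/lambda)*cos(80*pi/180)*n*d);   % look-direction manifold
vi = exp(-1j*(2*pi/lambda)*cos(30*pi/180)*n*d);   % interferer manifold
R  = (vs*vs') + 20*(vi*vi') + 0.01*eye(N);        % input covariance matrix
C  = vs;  f = 1;                                  % single distortionless constraint
w  = (R\C)/(C'*(R\C))*f;                          % LCMV solution
disp(C'*w)                                        % constraint check: equals 1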

The GSC
We start by applying a transformation to the coefficient vector. Let $\mathbf{T} = [\mathbf{C}\ \ \mathbf{B}]$ such that
$$\mathbf{w} = \mathbf{T}\,\bar{\mathbf{w}} = [\mathbf{C}\ \ \mathbf{B}]\begin{bmatrix}\bar{\mathbf{w}}_U\\ -\bar{\mathbf{w}}_L\end{bmatrix} = \mathbf{C}\,\bar{\mathbf{w}}_U - \mathbf{B}\,\bar{\mathbf{w}}_L$$
Matrix $\mathbf{B}$ is usually called the blocking matrix, and we recall that $\mathbf{C}^H\mathbf{w} = \mathbf{g}$, such that $\mathbf{C}^H\mathbf{w} = \mathbf{C}^H\mathbf{C}\,\bar{\mathbf{w}}_U - \mathbf{C}^H\mathbf{B}\,\bar{\mathbf{w}}_L = \mathbf{g}$.
If we impose the condition $\mathbf{B}^H\mathbf{C} = \mathbf{0}$ or, equivalently, $\mathbf{C}^H\mathbf{B} = \mathbf{0}$, we will have $\bar{\mathbf{w}}_U = (\mathbf{C}^H\mathbf{C})^{-1}\mathbf{g}$.
$\bar{\mathbf{w}}_U$ is fixed and termed the quiescent weight vector; the minimization process is carried out only on the lower part, also designated $\mathbf{w}_{\mathrm{GSC}} = \bar{\mathbf{w}}_L$.

The GSC
It is shown below how to split the transformation matrix into two parts: a fixed path and an adaptive path.
[Figure: (a) the original $MN\times 1$ coefficient vector $\mathbf{w}(k)$ applied to the input signal and compared with the reference signal; (b) the same filter written as the transformation $\mathbf{T}$ followed by an $MN\times 1$ vector; (c) the transformed filter split into a fixed upper branch ($\mathbf{C}$, of width $p$, followed by $\bar{\mathbf{w}}_U(k)$) and an adaptive lower branch ($\mathbf{B}$, of width $MN-p$, followed by $\bar{\mathbf{w}}_L(k)$), whose outputs are combined; (d) the equivalent GSC structure with a fixed branch $\mathbf{F}$ and an adaptive branch $\mathbf{B}$ followed by $\mathbf{w}_{\mathrm{GSC}}(k)$]

The GSC
This structure (detailed below) was named the Generalized Sidelobe Canceller (GSC).
[Figure: the input $\mathbf{x}(k)$ feeds a fixed branch $\mathbf{F}$, producing $\mathbf{F}^H\mathbf{x}(k)$, and a blocking branch $\mathbf{B}$, producing $\mathbf{x}_{\mathrm{GSC}}(k) = \mathbf{B}^H\mathbf{x}(k)$, which is filtered by $\mathbf{w}_{\mathrm{GSC}}(k-1)$ to give $y_{\mathrm{GSC}}(k)$; the branch outputs are combined into $y(k)$, which is compared with $d(k)$ to form the error $e(k)$]
It is always possible to obtain the overall equivalent coefficient vector, which is given by $\mathbf{w} = \mathbf{F} - \mathbf{B}\,\mathbf{w}_{\mathrm{GSC}}$.
If we pre-multiply the last equation by $\mathbf{B}^H$ and isolate $\mathbf{w}_{\mathrm{GSC}}$, we find $\mathbf{w}_{\mathrm{GSC}} = -(\mathbf{B}^H\mathbf{B})^{-1}\mathbf{B}^H\mathbf{w}$.
Knowing that $\mathbf{T} = [\mathbf{C}\ \ \mathbf{B}]$ and that $\mathbf{T}^H\mathbf{T} = \mathbf{I}$, it follows that $\mathbf{P} = \mathbf{I} - \mathbf{C}(\mathbf{C}^H\mathbf{C})^{-1}\mathbf{C}^H = \mathbf{B}(\mathbf{B}^H\mathbf{B})^{-1}\mathbf{B}^H$.

The GSC
A simple procedure to find the optimal GSC solution comes from the unconstrained Wiener solution applied to the unconstrained filter:
$$\mathbf{w}_{\mathrm{GSC,OPT}} = \mathbf{R}_{\mathrm{GSC}}^{-1}\,\mathbf{p}_{\mathrm{GSC}}$$
From the figure, it is clear that:
$$\mathbf{R}_{\mathrm{GSC}} = E[\mathbf{x}_{\mathrm{GSC}}\mathbf{x}_{\mathrm{GSC}}^H] = E[\mathbf{B}^H\mathbf{x}\mathbf{x}^H\mathbf{B}] = \mathbf{B}^H\mathbf{R}\mathbf{B}$$
The cross-correlation vector is given by:
$$\mathbf{p}_{\mathrm{GSC}} = E[d_{\mathrm{GSC}}^*\,\mathbf{x}_{\mathrm{GSC}}] = E\{[\mathbf{F}^H\mathbf{x} - d]^*[\mathbf{B}^H\mathbf{x}]\} = -\mathbf{B}^H\mathbf{p} + \mathbf{B}^H\mathbf{R}\mathbf{F}$$
and
$$\mathbf{w}_{\mathrm{GSC,OPT}} = (\mathbf{B}^H\mathbf{R}\mathbf{B})^{-1}(-\mathbf{B}^H\mathbf{p} + \mathbf{B}^H\mathbf{R}\mathbf{F})$$

The GSC
A common case is when $d(k) = 0$:
[Figure: $\mathbf{x}(k)$ feeds the fixed branch $\mathbf{F}$, giving $d_{\mathrm{GSC}}(k) = \mathbf{F}^H\mathbf{x}(k)$, and the blocking branch $\mathbf{B}$, giving $\mathbf{x}_{\mathrm{GSC}}(k) = \mathbf{B}^H\mathbf{x}(k)$, which is filtered by $\mathbf{w}_{\mathrm{GSC}}(k-1)$ to produce $y_{\mathrm{GSC}}(k)$; the error is $e_{\mathrm{GSC}}(k) = d_{\mathrm{GSC}}(k) - y_{\mathrm{GSC}}(k)$]
We have dropped the negative sign that should exist according to the notation used. Although this makes $e_{\mathrm{GSC}}(k) = -e(k)$, the sign inversion in the error signal leads to the same result, because the error function is always based on its absolute value.
In this case, the optimum filter $\mathbf{w}_{\mathrm{OPT}}$ is:
$$\mathbf{F} - \mathbf{B}\,\mathbf{w}_{\mathrm{GSC,OPT}} = \mathbf{F} - \mathbf{B}(\mathbf{B}^H\mathbf{R}\mathbf{B})^{-1}\mathbf{B}^H\mathbf{R}\mathbf{F} = \mathbf{R}^{-1}\mathbf{C}(\mathbf{C}^H\mathbf{R}^{-1}\mathbf{C})^{-1}\mathbf{f} \quad\text{(LCMV solution)}$$
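A numeric sketch checking the equivalence stated above between the GSC solution and the direct LCMV solution; the covariance matrix, constraint, and blocking matrix below are illustrative assumptions, not values from the slides:

% Check (illustrative): F - B*wGSC equals the direct LCMV solution
N = 6;
A = randn(N) + 1j*randn(N);  R = A*A' + eye(N);   % a valid (Hermitian, PD) covariance
C = exp(-1j*pi*(0:N-1).'*0.3);  f = 1;            % single constraint
F = C/(C'*C)*f;                                   % quiescent (fixed) branch
[Q, ~] = qr(C);  B = Q(:, 2:end);                 % blocking matrix, B'*C = 0
wGSC = (B'*R*B)\(B'*R*F);                         % GSC Wiener solution for d(k) = 0
w1   = F - B*wGSC;                                % overall GSC weight vector
w2   = (R\C)/(C'*(R\C))*f;                        % direct LCMV solution
disp(norm(w1 - w2))                               % approximately zero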

The GSC
Choosing the blocking matrix:
$\mathbf{B}$ plays an important role, since its choice determines computational complexity and even robustness against numerical instability.
Since the only requirement on $\mathbf{B}$ is that its columns form a basis orthogonal to the constraints, $\mathbf{B}^H\mathbf{C} = \mathbf{0}$, a myriad of options are possible.
Let us recall the paper by Griffiths and Jim, where the term GSC was coined; let
$$\mathbf{C}^T = \begin{bmatrix} 1&1&1&1&0&0&0&0&0&0&0&0\\ 0&0&0&0&1&1&1&1&0&0&0&0\\ 0&0&0&0&0&0&0&0&1&1&1&1 \end{bmatrix}$$
With simple constraint matrices, simple blocking matrices satisfying $\mathbf{B}^T\mathbf{C} = \mathbf{0}$ are possible.

The GSC

For this particular example, the paper presents two possibilities. The first one (orthogonal) is:
$$\mathbf{B}_1^T = \begin{bmatrix}
1 & -1 & 1 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
1 & 1 & -1 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
1 & -1 & -1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 1 & -1 & 1 & -1 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 1 & 1 & -1 & -1 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 1 & -1 & -1 & 1 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & -1 & 1 & -1\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & -1 & -1\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & -1 & -1 & 1
\end{bmatrix}$$
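A minimal Matlab sketch to verify this kind of choice (an assumption, not taken verbatim from the paper: the ±1 rows are taken as rows 2–4 of a 4×4 Hadamard matrix, one orthogonal set with the block pattern above); it builds a block-diagonal B1 and checks its orthogonality to the constraints:

M = 4; N = 3;
C  = kron(eye(N), ones(M,1));    % constraint matrix of the example
H  = hadamard(M);                % rows: [1 1 1 1; 1 -1 1 -1; 1 1 -1 -1; 1 -1 -1 1]
W  = H(2:end, :);                % the three +/-1 rows orthogonal to the all-ones row
B1 = kron(eye(N), W');           % 12 x 9 block-diagonal blocking matrix
disp(norm(B1'*C))                % should be zero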

The GSC

And the second possibility (non-orthogonal) is:
$$\mathbf{B}_2^T = \begin{bmatrix}
1 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 1 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 1 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 1 & -1 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 1 & -1 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 1 & -1 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & -1 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & -1 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & -1
\end{bmatrix}$$
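Similarly, a minimal Matlab sketch (illustrative names, not from the slides) for this pairwise-difference choice:

M = 4; N = 3;
C  = kron(eye(N), ones(M,1));            % constraint matrix of the example
D  = [1 -1 0 0; 0 1 -1 0; 0 0 1 -1];     % differences of adjacent sensors
B2 = kron(eye(N), D');                   % 12 x 9 block-diagonal blocking matrix
disp(norm(B2'*C))                        % should be zero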

The GSC

SVD: the blocking matrix can be produced with the following Matlab command lines:

[U,S,V]=svd(C);
B3=U(:,p+1:M*N); % p=N in this case

$\mathbf{B}_3^T$ is given by (entries shown to two decimals):

0.50 0.17 0.17 0.83 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.25 0.42 0.08 0.08 0.75 0.25 0.25 0.25 0.00 0.00 0.00 0.00
0.25 0.42 0.08 0.08 0.25 0.75 0.25 0.25 0.00 0.00 0.00 0.00
0.25 0.42 0.08 0.08 0.25 0.25 0.75 0.25 0.00 0.00 0.00 0.00
0.25 0.42 0.08 0.08 0.25 0.25 0.25 0.75 0.00 0.00 0.00 0.00
0.25 0.08 0.42 0.08 0.00 0.00 0.00 0.00 0.75 0.25 0.25 0.25
0.25 0.08 0.42 0.08 0.00 0.00 0.00 0.00 0.25 0.75 0.25 0.25
0.25 0.08 0.42 0.08 0.00 0.00 0.00 0.00 0.25 0.25 0.75 0.25
0.25 0.08 0.42 0.08 0.00 0.00 0.00 0.00 0.25 0.25 0.25 0.75

The GSC

QRD: the blocking matrix can be produced with the following Matlab command lines:

[Q,R]=qr(C);
B4=Q(:,p+1:M*N);

$\mathbf{B}_4$ was identical to $\mathbf{B}_3$ (SVD).

Two other possibilities are: the one presented in [Tseng & Griffiths 88], where a decomposition procedure is introduced in order to offer an effective implementation structure; and one concerning a narrowband beamformer implemented with the GSC, in which B is combined with a wavelet transform [Chu & Fang 99].

Finally, a new efficient linearly constrained adaptive scheme, which can also be visualized as a GSC structure, can be found in [Campos, Werner & Apolinário, IEEE-TSP, Sept. 2002].
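For completeness, a runnable version of the two snippets above (a sketch assuming M = 4, N = 3, p = N, and the constraint matrix of this example); whatever sign pattern svd and qr return, both bases satisfy $\mathbf{B}^H\mathbf{C} = \mathbf{0}$ and span the same subspace:

M = 4; N = 3; p = N;
C = kron(eye(N), ones(M,1));       % MN x p constraint matrix
[U,S,V] = svd(C);
B3 = U(:, p+1:M*N);                % SVD-based blocking matrix
[Q,R] = qr(C);                     % R here is the triangular QR factor, not the correlation matrix
B4 = Q(:, p+1:M*N);                % QRD-based blocking matrix
disp([norm(B3'*C) norm(B4'*C)])    % both should be numerically zero
disp(norm(B3*B3' - B4*B4'))        % identical projectors: both bases span the same subspace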

Outline
1. Introduction and Fundamentals
2. Sensor Arrays and Spatial Filtering
3. Optimal Beamforming
4. Adaptive Beamforming
5. DoA Estimation with Microphone Arrays

4. Adaptive Beamforming

4.1 Introduction

Introduction

Scope: instead of assuming knowledge about the statistical properties of the signals, beamformers are designed based on statistics gathered online.

Different algorithms may be employed for iteratively approximating the desired solution.

We will briefly cover a small subset of algorithms for constrained adaptive filters.

Introduction

Linearly constrained adaptive filters (LCAF) have found application in numerous areas, such as spectrum analysis, spatial-temporal processing, antenna arrays, and interference suppression, among others.

LCAF algorithms incorporate into the solution application-specific requirements, translated into a set of linear equations to be satisfied by the coefficients.

For example, if the direction of arrival of the signal of interest is known, jammer suppression can take place through spatial filtering without the need for a training signal; or, in systems with constant-envelope modulation (e.g., M-PSK), a constant-modulus constraint can mitigate multipath propagation effects.

4.2 Constrained FIR Filters

Broadband Array Beamformer

[Figure: broadband array beamformer — each sensor signal $x_i(k)$, $i = 1,\ldots,M$, is filtered by $\mathbf{w}_i(k)$; the filter outputs are summed to form $y(k)$, and the error $e(k) = d(k) - y(k)$ is computed. The adaptive algorithm solves $\min_{\mathbf{w}} \xi(k)$ s.t. $\mathbf{C}^H\mathbf{w} = \mathbf{f}$.]

Optimal Constrained MSE Filter

We look for
$$\min_{\mathbf{w}}\ \xi(k) \quad \text{s.t.} \quad \mathbf{C}^H\mathbf{w} = \mathbf{f},$$
where
$\xi(k) = E[|e(k)|^2]$
$\mathbf{C}$ is the $MN \times p$ constraint matrix
$\mathbf{f}$ is the $p \times 1$ gain vector

Optimal Constrained MSE Filter

The optimal beamformer is
$$\mathbf{w}(k) = \mathbf{R}^{-1}\mathbf{p} + \mathbf{R}^{-1}\mathbf{C}\left(\mathbf{C}^H\mathbf{R}^{-1}\mathbf{C}\right)^{-1}\left(\mathbf{f} - \mathbf{C}^H\mathbf{R}^{-1}\mathbf{p}\right)$$
where:
$\mathbf{R} = E\left[\mathbf{x}(k)\mathbf{x}^H(k)\right]$ and $\mathbf{p} = E[d^{*}(k)\mathbf{x}(k)]$
$\mathbf{w}(k) = \left[\mathbf{w}_1^T(k)\ \mathbf{w}_2^T(k)\ \cdots\ \mathbf{w}_M^T(k)\right]^T$
$\mathbf{x}(k) = \left[\mathbf{x}_1^T(k)\ \mathbf{x}_2^T(k)\ \cdots\ \mathbf{x}_M^T(k)\right]^T$
$\mathbf{x}_i^T(k) = \left[x_i(k)\ x_i(k-1)\ \cdots\ x_i(k-N+1)\right]$
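This closed form can be exercised on synthetic statistics. A minimal Matlab sketch (all numerical values are hypothetical placeholders, not from the slides) computes w and verifies that the constraints are met exactly:

M = 4; N = 3; p = N;
C = kron(eye(N), ones(M,1));           % example constraint matrix (assumption)
f = [1; 0; 0];                         % example gain vector (assumption)
A = randn(M*N) + 1j*randn(M*N);
R = A*A' + eye(M*N);                   % synthetic input correlation matrix
pvec = randn(M*N,1) + 1j*randn(M*N,1); % synthetic cross-correlation vector
Rinv_p = R\pvec;   Rinv_C = R\C;
w = Rinv_p + Rinv_C*((C'*Rinv_C)\(f - C'*Rinv_p));
disp(norm(C'*w - f))                   % should be numerically zero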

The Constrained LS Beamformer

In the absence of statistical information, we may choose
$$\min_{\mathbf{w}}\left[\xi(k) = \sum_{i=0}^{k}\lambda^{k-i}\left|d(i) - \mathbf{w}^H\mathbf{x}(i)\right|^2\right] \quad \text{s.t.} \quad \mathbf{C}^H\mathbf{w} = \mathbf{f}$$
with $\lambda \in (0,1]$, which gives, as solution,
$$\mathbf{w}(k) = \mathbf{R}^{-1}(k)\mathbf{p}(k) + \mathbf{R}^{-1}(k)\mathbf{C}\left(\mathbf{C}^H\mathbf{R}^{-1}(k)\mathbf{C}\right)^{-1}\left[\mathbf{f} - \mathbf{C}^H\mathbf{R}^{-1}(k)\mathbf{p}(k)\right],$$
where $\mathbf{R}(k) = \sum_{i=0}^{k}\lambda^{k-i}\mathbf{x}(i)\mathbf{x}^H(i)$ and $\mathbf{p}(k) = \sum_{i=0}^{k}\lambda^{k-i}d^{*}(i)\mathbf{x}(i)$.
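A minimal Matlab sketch of this estimator on synthetic data (the signal model, the gain vector, and the regularized initialization of R(k) are assumptions added for illustration):

M = 4; N = 3; K = 200; lambda = 0.98;
C = kron(eye(N), ones(M,1));   f = [1; 0; 0];   % example constraints (assumption)
Rk = 1e-2*eye(M*N);            % small regularization so R(k) is invertible (assumption)
pk = zeros(M*N, 1);
for k = 1:K
    x = randn(M*N,1) + 1j*randn(M*N,1);    % synthetic snapshot
    d = randn + 1j*randn;                  % synthetic desired sample
    Rk = lambda*Rk + x*x';                 % R(k) = lambda R(k-1) + x(k) x^H(k)
    pk = lambda*pk + conj(d)*x;            % p(k) = lambda p(k-1) + d*(k) x(k)
    Rinv_p = Rk\pk;   Rinv_C = Rk\C;
    w = Rinv_p + Rinv_C*((C'*Rinv_C)\(f - C'*Rinv_p));
end
disp(norm(C'*w - f))           % the constraint holds at every iteration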

The Constrained LMS Algorithm

A (cheaper) alternative cost function is
$$\min_{\mathbf{w}}\left[\xi(k) = \|\mathbf{w}(k) - \mathbf{w}(k-1)\|^2 + \mu|e(k)|^2\right] \quad \text{s.t.} \quad \mathbf{C}^H\mathbf{w}(k) = \mathbf{f},$$
which gives, as solution,
$$\mathbf{w}(k) = \mathbf{w}(k-1) + \mu e^{*}(k)\left[\mathbf{I} - \mathbf{C}\left(\mathbf{C}^H\mathbf{C}\right)^{-1}\mathbf{C}^H\right]\mathbf{x}(k),$$
where $e(k) = d(k) - \mathbf{w}^H(k-1)\mathbf{x}(k)$ and $\mu$ is a small positive constant called the step size.
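A minimal Matlab sketch of the update above on synthetic data (the initialization $\mathbf{w}(0) = \mathbf{C}(\mathbf{C}^H\mathbf{C})^{-1}\mathbf{f}$ is an added assumption so that the constraint already holds before the first update):

M = 4; N = 3; K = 500; mu = 1e-3;
C = kron(eye(N), ones(M,1));   f = [1; 0; 0];   % example constraints (assumption)
P = eye(M*N) - C*((C'*C)\C');  % projection onto the constraint null space
w = C*((C'*C)\f);              % w(0): satisfies C'*w = f (assumption)
for k = 1:K
    x = randn(M*N,1) + 1j*randn(M*N,1);    % synthetic snapshot
    d = 0;                                 % the d(k) = 0 case discussed earlier
    e = d - w'*x;
    w = w + mu*conj(e)*P*x;                % constrained LMS update
end
disp(norm(C'*w - f))           % the projected update preserves the constraint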