The Analysis of Data Sequences in the Time and Frequency Domains

D. E. Smylie

© D. E. Smylie, 2004.

Contents

1 Time Domain Data Sequences
1.1 Discrete, Equispaced Data Sequences
1.2 Power and Energy Signals, Wavelets
1.3 Convolution and the z-Transform
1.4 Expected Value, Auto and Crosscorrelation
1.5 Impulse, White Noise and Wold Decomposition
1.6 The Time Reverse
1.7 Properties of Wavelets

2 Optimum Filters and Deconvolution
2.1 Linear, Optimum Filtering
2.2 Prediction and Prediction Error Filters
2.3 Removal of Ghosts and Reverberations
2.4 Delay and Filter Stability
2.5 Exact or Ideal Inverses and Deconvolution
2.6 Approximate Inverses and Deconvolution
2.7 Optimum Shaping Filters
2.8 The Levinson Algorithm

3 Fourier Methods
3.1 The Finite Fourier Transform
3.2 The DFT for Non-Equispaced Data
3.3 Fourier Series and Transforms
3.4 Convolution
3.5 The Effect of Finite Record Length
3.6 The Effects of Discrete Sampling
3.7 The Fast Fourier Transform

4 Digital Filters
4.1 Low Pass Digital Filters
4.2 High Pass Digital Filters
4.3 Digital Notch Filters
4.4 Recursive Digital Filters

5 Power Spectral Density Estimation
5.1 Autocorrelation and Spectral Density
5.2 Multiple Discrete Segment Estimate
5.3 Overlapping Segment Analysis

Chapter 1

Time Domain Data Sequences

1.1 Discrete, Equispaced Data Sequences

We begin with a consideration of the simplest data sequences to treat, those that result from equispaced sampling of a continuous function of time. With the nearly ubiquitous use of digital equipment, these are perhaps now the most common data sequences arising in practice. The effects of the sampling process itself will be treated later.

In general, such time sequences will be denoted by the indefinite sequence of numbers

$$\ldots, f_{-1}, \underset{\uparrow}{f_0}, f_1, \ldots, f_j, \ldots.$$

The time index is subscripted, with an underarrow beneath the subscripted zero indicating the origin of the time axis. The underarrow will be used generally to indicate the position of the time axis origin. The numbers $f_j$ can be complex-valued. Although there are relatively few practical examples of complex-valued time sequences, this generalization does not complicate, and in many ways simplifies, the analysis and the resulting formulae.

1.2 Power and Energy Signals, Wavelets

The average power in the finite time sequence

$$f_{-N}, f_{-N+1}, \ldots, f_0, \ldots, f_{N-1}, f_N$$

is

$$\frac{1}{2N+1} \sum_{j=-N}^{N} |f_j|^2 = \frac{1}{2N+1} \sum_{j=-N}^{N} f_j f_j^*, \qquad (1.2.1)$$

where the superscript asterisk denotes complex conjugation. In general, time sequences which obey the restriction

$$\lim_{N\to\infty} \frac{1}{2N+1} \sum_{j=-N}^{N} |f_j|^2 < \infty \qquad (1.2.2)$$

or

$$\lim_{N\to\infty} \frac{1}{2N+1} \sum_{j=-N}^{N} f_j f_j^* < \infty \qquad (1.2.3)$$

are called power signals. Power signals are the most intense time sequences we can expect to meet in practice.

Often a signal builds up from a very low level and then fades again. Such signals will obviously have zero power when averaged over all time. They do, however, have finite energy. Energy signals obey

$$\sum_{j=-\infty}^{\infty} |f_j|^2 = \sum_{j=-\infty}^{\infty} f_j f_j^* < \infty. \qquad (1.2.4)$$

Finally, there is a third class of time sequences. These are zero until a certain time, often the time when an event such as an earthquake or other energy-releasing process takes place. Such time series are one-sided and have the form

$$\ldots, 0, 0, 0, \underset{\uparrow}{f_0}, f_1, f_2, \ldots. \qquad (1.2.5)$$
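Definitions (1.2.1) and (1.2.4) translate directly into code. A minimal numerical sketch, assuming Python with numpy (not part of the original development), computing the average power of a finite record and its total energy, with complex values allowed:

```python
import numpy as np

def average_power(f):
    """Average power (1/(2N+1)) * sum |f_j|^2 over a finite record, eq. (1.2.1)."""
    f = np.asarray(f)
    return np.sum(np.abs(f) ** 2) / len(f)

def total_energy(f):
    """Total energy sum |f_j|^2, eq. (1.2.4), for a finite energy signal."""
    f = np.asarray(f)
    return np.sum(np.abs(f) ** 2)

f = np.array([1.0, 2.0, 2.0])
print(average_power(f))  # 3.0
print(total_energy(f))   # 9.0
```

Here `len(f)` plays the role of $2N+1$; for a complex sequence `np.abs` supplies the $f_j f_j^*$ product.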

They are called wavelets.

Nothing in what we have said thus far limits the duration of a time sequence. In practice, we will always deal with series of finite duration. In theoretical developments, we will often consider them to be of unlimited duration.

1.3 Convolution and the z-Transform

Among the important operations that can be performed with two time sequences is convolution. Given the two sequences with general terms $f_j$ and $g_j$,

$$\ldots, f_{-1}, \underset{\uparrow}{f_0}, f_1, \ldots, f_j, \ldots$$

and

$$\ldots, g_{-1}, \underset{\uparrow}{g_0}, g_1, \ldots, g_j, \ldots,$$

the convolution of $f_j$ with $g_j$ is

$$h_j = \sum_{k=-\infty}^{\infty} f_k\, g_{j-k}. \qquad (1.3.1)$$

Notice that this is the same as the convolution of $g_j$ with $f_j$, for we may write $l = j - k$ and get

$$h_j = \sum_{l=-\infty}^{\infty} f_{j-l}\, g_l = \sum_{l=-\infty}^{\infty} g_l\, f_{j-l} = \sum_{k=-\infty}^{\infty} g_k\, f_{j-k}. \qquad (1.3.2)$$

In general, the convolution of two energy signals results in an energy signal and the convolution of an energy signal and a power signal results in a power signal, while the convolution of two power signals does not exist.

The algebra of convolution and other operations with time sequences is facilitated by the use of the z-transform. Given the time sequence

$$\ldots, f_{-1}, \underset{\uparrow}{f_0}, f_1, \ldots, f_j, \ldots,$$

its z-transform is defined as

$$F(z) = \cdots + f_{-1} z^{-1} + f_0 + f_1 z + \cdots + f_j z^j + \cdots. \qquad (1.3.3)$$

The time sequence which results from convolving $f_j$ with $g_j$, which we call $h_j$, has a z-transform that is the product of the z-transforms of $f_j$ and $g_j$. That is,

$$H(z) = F(z)\, G(z). \qquad (1.3.4)$$

We have

$$G(z) = \cdots + g_{-1} z^{-1} + g_0 + g_1 z + \cdots + g_j z^j + \cdots,$$

thus

$$F(z)\, G(z) = \left(\cdots + f_{-1} z^{-1} + f_0 + f_1 z + \cdots\right)\left(\cdots + g_{-1} z^{-1} + g_0 + g_1 z + \cdots\right)$$
$$= \cdots + f_{-1} g_{-1} z^{-2} + \left(f_{-1} g_0 + f_0 g_{-1}\right) z^{-1} + \left(f_{-1} g_1 + f_0 g_0 + f_1 g_{-1}\right) + \left(f_0 g_1 + f_1 g_0\right) z + f_1 g_1 z^2 + \cdots. \qquad (1.3.5)$$

When all the terms are included, the coefficient of the term in $z^j$ is clearly of the form

$$\cdots + f_{-1} g_{j+1} + f_0 g_j + f_1 g_{j-1} + \cdots + f_k g_{j-k} + \cdots, \qquad (1.3.6)$$

representing the general term of the convolution.

Example: What is the convolution of the energy signal $(4, \underset{\uparrow}{2}, 1)$ with the energy signal $(3, \underset{\uparrow}{5}, 2)$, where the underarrow indicates the time origin? We examine three methods of performing the required convolution.

(i) Reverse, shift, multiply and add method.

First, reverse $(3, \underset{\uparrow}{5}, 2)$ to give $(2, \underset{\uparrow}{5}, 3)$. Then shift it with respect to $(4, \underset{\uparrow}{2}, 1)$ by $j$ units of time if we wish to find the $j$th member of the convolution, multiply aligned numbers and add the products. Carrying out this sequence of operations for each $j$, the resulting convolution is written as

$$(4, \underset{\uparrow}{2}, 1) * (3, \underset{\uparrow}{5}, 2) = (12, 26, \underset{\uparrow}{21}, 9, 2).$$

(ii) Inverse of the product of z-transforms.

The z-transform of the convolution is the product of the z-transforms,

$$\left(\frac{4}{z} + 2 + z\right)\left(\frac{3}{z} + 5 + 2z\right) = \frac{12}{z^2} + \frac{26}{z} + 21 + 9z + 2z^2.$$

Inverting the z-transform, the convolution as before is given by

$(12, 26, \underset{\uparrow}{21}, 9, 2)$.

(iii) Magic square (or rectangle).

The first sequence is written along the top of a two-dimensional array, the second down the left side. Elements of the array are filled in as the product of the first-row element with the corresponding leading-column element. Thus, we have

         4    2    1
    3   12    6    3
    5   20   10    5
    2    8    4    2

The convolution is then found by summing the anti-diagonals of the two-dimensional array. The zero time index element is the sum along the anti-diagonal through the intersection of the zero time index row element and the zero time index column element. Once again, we obtain the convolution as $(12, 26, \underset{\uparrow}{21}, 9, 2)$.

1.4 Expected Value, Auto and Crosscorrelation

Thus far we have avoided the question as to whether or not the time sequences we are considering are deterministic or stochastic (random). We will, in fact, treat both cases. For the treatment, we will need to use the expected value operator $E\{\,\}$. We know from statistics that the expected value of a stochastic variable is found by multiplying a given value of the variable by the probability that it can take on this value, and averaging the result over all possible values of the variable.

We can regard a time sequence as the realization of a sequence of random variables generated by a stochastic process. Each trial of the process will result in a new time sequence. The
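The worked example above is easy to check numerically; `np.convolve` implements exactly the sum (1.3.1) for finite sequences (a sketch assuming numpy; the underarrow bookkeeping is left to the reader, since numpy indexes from zero):

```python
import numpy as np

f = np.array([4, 2, 1])   # time origin under the 2
g = np.array([3, 5, 2])   # time origin under the 5

h = np.convolve(f, g)     # h_j = sum_k f_k g_{j-k}
print(h)                  # [12 26 21  9  2], origin under the 21
```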

expected value of any quantity at a specific time, say $t_0$, is found by averaging across an infinite ensemble of realizations of the stochastic process, as illustrated in Figure 1.1.

Figure 1.1: Successive realizations of a stochastic process. The expected value of a random variable at a specific time $t_0$ is found by averaging across an infinite ensemble of such realizations.

Since the expected value operator is an averaging operator, it is linear. That is, the expected value of a constant times a stochastic variable is equal to the constant times the expected value of the stochastic variable. In addition, the expected value of a sum of stochastic variables is equal to the sum of the expected values of the stochastic variables.

The autocorrelation of $f_j$ at lag $k$, time index $l$, is

$$\phi_{ff}(k, l) = E\{f_l\, f_{l-k}^*\}. \qquad (1.4.1)$$

At zero lag, the autocorrelation obviously gives just the mean squared amplitude of the sequence at time index $l$.

Often, the stochastic processes with which we will be dealing will be stationary. That is, their statistical properties are independent of translations along the time axis. Then the autocorrelation is independent of the time index and we write it as

$$\phi_{ff}(k) = E\{f_l\, f_{l-k}^*\}. \qquad (1.4.2)$$

The autocorrelation for stationary sequences is Hermitian. We have

$$\phi_{ff}(-k) = E\{f_l\, f_{l+k}^*\}; \qquad (1.4.3)$$

writing $m$ for $l + k$,

$$\phi_{ff}(-k) = E\{f_{m-k}\, f_m^*\} = \phi_{ff}^*(k). \qquad (1.4.4)$$

Functions which change to their complex conjugate when the sign of their argument is reversed are called Hermitian.

When a time series is stationary, averaging across an infinite ensemble of independent realizations is equivalent to averaging along a single time record. This is known as the ergodic hypothesis. For stationary power signals, the autocorrelation can then be equivalently defined as

$$\phi_{ff}(k) = \lim_{N\to\infty} \frac{1}{2N+1} \sum_{l=-N}^{N} f_l\, f_{l-k}^*. \qquad (1.4.5)$$

In general, this will not be a satisfactory definition for energy signals since it could give zero everywhere. We define the autocorrelation for an energy signal as

$$\phi_{ff}(k) = \sum_{l=-\infty}^{\infty} f_l\, f_{l-k}^*. \qquad (1.4.6)$$
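For a finite energy signal, definition (1.4.6) can be evaluated with `np.correlate` (which conjugates its second argument for complex input). A sketch, assuming numpy, that also checks the Hermitian property (1.4.4):

```python
import numpy as np

def autocorrelation(f):
    """phi_ff(k) = sum_l f_l conj(f_{l-k}) for a finite energy signal."""
    f = np.asarray(f, dtype=complex)
    return np.correlate(f, f, mode="full")

f = np.array([1, 2, 1j])
phi = autocorrelation(f)

# Hermitian: phi(-k) = conj(phi(k)); zero lag is the total energy
print(np.allclose(phi, np.conj(phi[::-1])))  # True
print(phi[len(phi) // 2])                    # (6+0j), the zero-lag energy
```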

At first sight, this definition may appear to be inconsistent with the ensemble average definition of autocorrelation. However, if we take the energy signal to be deterministic but to begin at random times, and form the resulting stochastic process, it is a consistent definition for such a process. We will return to this question later.

The crosscorrelation of $f_j$ with $g_j$ at lag $k$, time index $l$, is defined as

$$\phi_{fg}(k, l) = E\{f_l\, g_{l-k}^*\}. \qquad (1.4.7)$$

Again, for stationary sequences, the dependence on time index disappears and we have

$$\phi_{fg}(k) = E\{f_l\, g_{l-k}^*\} = E\{g_m^*\, f_{m+k}\} = \phi_{gf}^*(-k). \qquad (1.4.8)$$

Accepting the ergodic hypothesis for stationary power signals yields the equivalent definition

$$\phi_{fg}(k) = \lim_{N\to\infty} \frac{1}{2N+1} \sum_{l=-N}^{N} f_l\, g_{l-k}^*. \qquad (1.4.9)$$

For two energy signals, or one energy signal and one stationary power signal,

$$\phi_{fg}(k) = \sum_{l=-\infty}^{\infty} f_l\, g_{l-k}^*. \qquad (1.4.10)$$

Once again, we leave the mysterious change in definition as unfinished business.

1.5 Impulse, White Noise and Wold Decomposition

Thus far, we have avoided discussing time sequences with specific statistical properties. Let us now consider a time sequence

which is completely uncorrelated,

$$\ldots, n_{-1}, \underset{\uparrow}{n_0}, n_1, \ldots, n_j, \ldots.$$

If it is completely uncorrelated, then

$$\phi_{nn}(k, l) = E\{n_l\, n_{l-k}^*\} = \delta_k^0\, E\{n_l\, n_l^*\}, \qquad (1.5.1)$$

where $\delta_k^0$ is the Kronecker delta. Such a time sequence is called a white noise sequence. If it is stationary and of unit power, then

$$\phi_{nn}(k) = \delta_k^0. \qquad (1.5.2)$$

The time sequence

$$\ldots, 0, 0, \underset{\uparrow}{1}, 0, 0, \ldots$$

is called the unit impulse sequence. Its general term $f_j$ is equal to $\delta_j^0$. Thus, the autocorrelation of a white noise sequence of unit power is the unit impulse sequence.

Wold (1938) first proved a very important theorem called the Wold Decomposition Theorem (see Box, Jenkins and Reinsel, 1994). It states that a stationary stochastic sequence may be decomposed into the convolution of a deterministic energy signal with a white noise sequence of unit power.

Let us now represent deterministic energy signals by the stochastic process that results from the convolution with a white noise series of unit power. First, if we convolved the deterministic energy signal $f_j$ with the unit impulse sequence, we would get

$$h_j = \sum_{k=-\infty}^{\infty} f_k\, \delta_{j-k}^0 = f_j, \qquad (1.5.3)$$
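Statement (1.5.2) can be illustrated by invoking the ergodic hypothesis and estimating the autocorrelation of a long white noise record along the time axis. A sketch, assuming numpy and taking unit-variance Gaussian samples as the white noise (the agreement is statistical, not exact):

```python
import numpy as np

rng = np.random.default_rng(0)
n = rng.standard_normal(100_000)   # white noise of unit power

def acorr_estimate(x, k):
    """Estimate phi_nn(k) by averaging along a single record."""
    return np.mean(x[k:] * x[:len(x) - k]) if k > 0 else np.mean(x * x)

print(round(acorr_estimate(n, 0), 2))   # close to 1
print(round(acorr_estimate(n, 1), 2))   # close to 0
print(round(acorr_estimate(n, 5), 2))   # close to 0
```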

simply the deterministic energy signal. Now, the convolution with the white noise sequence of unit power gives

$$h_j = \sum_{k=-\infty}^{\infty} f_k\, n_{j-k}. \qquad (1.5.4)$$

This represents the superposition of deterministic energy signals $f_j$ starting at random times with random amplitudes.

Now, suppose we represent deterministic energy signals by the stationary stochastic process which results from their convolution with white noise of unit power. For the deterministic energy signal $f_j$, we use the stationary stochastic sequence

$$h_j = \sum_{k=-\infty}^{\infty} f_k\, n_{j-k}. \qquad (1.5.5)$$

Its autocorrelation is

$$\phi_{hh}(k) = E\{h_l\, h_{l-k}^*\} = E\left\{\sum_{m=-\infty}^{\infty} f_m\, n_{l-m} \sum_{n=-\infty}^{\infty} f_n^*\, n_{l-k-n}^*\right\}$$
$$= \sum_{m=-\infty}^{\infty} \sum_{n=-\infty}^{\infty} f_m f_n^*\, E\{n_{l-m}\, n_{l-k-n}^*\} = \sum_{m=-\infty}^{\infty} \sum_{n=-\infty}^{\infty} f_m f_n^*\, \delta_{k-m+n}^0$$
$$= \sum_{m=-\infty}^{\infty} f_m\, f_{m-k}^* = \phi_{ff}(k). \qquad (1.5.6)$$

Thus, our special definition of autocorrelation for energy signals is consistent with the ensemble average for stationary stochastic processes, provided we use the equivalent stationary stochastic process to represent the energy signal.
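Result (1.5.6) can be checked by simulation: convolve an energy signal with a long unit-power white noise record and estimate the autocorrelation along the record. A sketch assuming numpy; the specific three-term wavelet is an arbitrary illustration, and agreement is statistical:

```python
import numpy as np

rng = np.random.default_rng(1)
f = np.array([1.0, -0.5, 0.25])        # deterministic energy signal
n = rng.standard_normal(200_000)       # white noise of unit power
h = np.convolve(n, f, mode="same")     # stationary process h = f * n

# Compare estimated phi_hh(k) with exact phi_ff(k) = sum_l f_l f_{l-k}
for k in range(3):
    est = np.mean(h[k:] * h[:len(h) - k]) if k else np.mean(h * h)
    exact = np.sum(f[k:] * f[:len(f) - k]) if k else np.sum(f * f)
    print(k, round(est, 2), round(exact, 2))   # estimates track the exact values
```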

Similarly, we can construct stationary stochastic processes which lead to

$$\phi_{fg}(k) = \sum_{l=-\infty}^{\infty} f_l\, g_{l-k}^* \qquad (1.5.7)$$

as a consistent definition of crosscorrelation for two energy signals. In the case of one energy signal $f_j$ and one stationary power signal $g_j$, we write

$$h_j = \sum_{k=-\infty}^{\infty} f_k\, n_{j-k} \qquad (1.5.8)$$

and

$$g_j = \sum_{k=-\infty}^{\infty} a_k\, n_{j-k}, \qquad (1.5.9)$$

where, by the Wold decomposition theorem, $a_k$ is a deterministic energy signal. Then

$$\phi_{hg}(k) = E\{h_l\, g_{l-k}^*\} = E\left\{\sum_{m=-\infty}^{\infty} f_m\, n_{l-m} \sum_{n=-\infty}^{\infty} a_n^*\, n_{l-k-n}^*\right\}$$
$$= \sum_{m=-\infty}^{\infty} \sum_{n=-\infty}^{\infty} f_m a_n^*\, E\{n_{l-m}\, n_{l-k-n}^*\} = \sum_{m=-\infty}^{\infty} \sum_{n=-\infty}^{\infty} f_m a_n^*\, \delta_{k-m+n}^0$$
$$= \sum_{m=-\infty}^{\infty} f_m\, a_{m-k}^*. \qquad (1.5.10)$$

Thus, in the relaxed definition of crosscorrelation for one energy signal and one power signal, we need to replace the power signal by its equivalent deterministic energy signal found by Wold decomposition.

1.6 The Time Reverse

The time reverse of the sequence $f_j$ is the sequence $f_{-j}^*$. If $f_j$ and $g_j$ are two energy signals, then the crosscorrelation of $f_j$ with $g_j$ is the same as the convolution of $f_j$ with the time reverse of $g_j$. That is,

$$\phi_{fg}(k) = \sum_{l=-\infty}^{\infty} f_l\, g_{l-k}^* = \sum_{l=-\infty}^{\infty} f_l\, h_{k-l}, \qquad (1.6.1)$$

which is the convolution of $f_j$ with $h_j$. Thus, $h_{k-l} = g_{l-k}^*$, or $h_j = g_{-j}^*$; that is, $h_j$ is the time reverse of $g_j$. Similarly, the autocorrelation of an energy signal is equal to its convolution with its own time reverse.

1.7 Properties of Wavelets

Recall that wavelets have the form

$$\ldots, 0, 0, \underset{\uparrow}{f_0}, f_1, \ldots, f_j, \ldots. \qquad (1.7.1)$$

In practice, we will deal only with finite wavelets. We write them as

$$(\underset{\uparrow}{f_0}, f_1, f_2, \ldots, f_n). \qquad (1.7.2)$$

The z-transform of a finite wavelet is the polynomial

$$F(z) = f_0 + f_1 z + f_2 z^2 + \cdots + f_n z^n. \qquad (1.7.3)$$

We can factor this polynomial to

$$F(z) = f_n\, (z_1 + z)(z_2 + z) \cdots (z_n + z). \qquad (1.7.4)$$

Remembering that in terms of z-transforms, convolution becomes multiplication, we see that $F(z)$ represents the z-transform
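Relation (1.6.1), crosscorrelation as convolution with the time reverse, is easy to verify numerically. A sketch assuming numpy, whose `correlate` conjugates its second argument, matching the definitions above:

```python
import numpy as np

f = np.array([1.0, 2.0, 3.0])
g = np.array([2.0, 0.0, 1.0 + 1j])

# Crosscorrelation phi_fg(k) = sum_l f_l conj(g_{l-k})
xcorr = np.correlate(f, g, mode="full")

# Convolution of f with the time reverse conj(g_{-j})
conv = np.convolve(f, np.conj(g[::-1]))

print(np.allclose(xcorr, conv))  # True
```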

of the successive convolutions of the dipole wavelets

$$(\underset{\uparrow}{z_1}, 1);\ (\underset{\uparrow}{z_2}, 1);\ \ldots;\ (\underset{\uparrow}{z_n}, 1). \qquad (1.7.5)$$

The reverse wavelet to $(\underset{\uparrow}{f_0}, f_1, \ldots, f_n)$ is defined to be the wavelet $(\underset{\uparrow}{f_n^*}, f_{n-1}^*, \ldots, f_0^*)$. Notice that this is not the same as the time reverse of the wavelet $(\underset{\uparrow}{f_0}, f_1, \ldots, f_n)$. The time reverse would be

$$\ldots, 0, 0, f_n^*, f_{n-1}^*, \ldots, \underset{\uparrow}{f_0^*}, 0, 0, \ldots.$$

The reverse wavelet is obtained from the time reverse by shifting it in the positive time direction by $n$ time units. The reverse wavelet has the z-transform

$$R(z) = f_n^* + f_{n-1}^* z + \cdots + f_0^* z^n. \qquad (1.7.6)$$

The time reverse therefore has the z-transform

$$R(z)\, z^{-n}. \qquad (1.7.7)$$

Dividing the z-transform by $z^n$ gives the z-transform of the wavelet shifted $n$ time units in the negative time direction. Now, the autocorrelation of a wavelet is the same as its convolution with its own time reverse, so the z-transform of the autocorrelation of the wavelet $(\underset{\uparrow}{f_0}, f_1, \ldots, f_n)$ is therefore

$$\Phi(z) = F(z)\, R(z)\, z^{-n}. \qquad (1.7.8)$$

The reverse wavelet of the reverse wavelet is the original wavelet. Therefore, the autocorrelation of the reverse wavelet has the z-transform

$$R(z)\, F(z)\, z^{-n} = \Phi(z), \qquad (1.7.9)$$

identical to that of the original wavelet. We have

$$R(z) = f_n^* + f_{n-1}^* z + \cdots + f_0^* z^n \qquad (1.7.10)$$

so that

$$R(z)\, z^{-n} = \frac{f_n^*}{z^n} + \frac{f_{n-1}^*}{z^{n-1}} + \cdots + f_0^*$$
$$= \left(f_0 + f_1 \frac{1}{z^*} + \cdots + f_n \left(\frac{1}{z^*}\right)^n\right)^* = F^*\!\left(\frac{1}{z^*}\right)$$
$$= f_n^* \left(z_1^* + \frac{1}{z}\right)\left(z_2^* + \frac{1}{z}\right) \cdots \left(z_n^* + \frac{1}{z}\right). \qquad (1.7.11)$$

Thus,

$$\Phi(z) = F(z)\, R(z)\, z^{-n} = f_n f_n^*\, (z_1 + z)\left(z_1^* + \frac{1}{z}\right) \cdots (z_n + z)\left(z_n^* + \frac{1}{z}\right) = F(z)\, F^*\!\left(\frac{1}{z^*}\right)$$
$$= \frac{|f_n|^2}{z^n}\, (z_1 + z)(1 + z_1^* z) \cdots (z_n + z)(1 + z_n^* z). \qquad (1.7.12)$$

The reverse wavelet to that represented by the z-transform $(z_1 + z)$ (which is $(\underset{\uparrow}{z_1}, 1)$) is $(\underset{\uparrow}{1}, z_1^*)$, with z-transform $1 + z_1^* z$, and so on. Hence, any number of the component dipole wavelets can be reversed without changing the autocorrelation. Thus, there are $2^n$ $(n+1)$-length wavelets with the same autocorrelation. Wavelets have unique autocorrelations, but there is no unique wavelet corresponding to a given autocorrelation.
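The dipole-reversal argument can be demonstrated numerically: flip one component dipole to its reverse and the autocorrelation is unchanged. A sketch assuming numpy, with real dipoles chosen arbitrarily for illustration:

```python
import numpy as np

# Two 3-length wavelets built from dipoles (2,1) and (3,1);
# the second wavelet uses the reverse dipole (1,3) instead of (3,1)
w1 = np.convolve([2, 1], [3, 1])   # -> [6 5 1]
w2 = np.convolve([2, 1], [1, 3])   # -> [2 7 3]

acf1 = np.correlate(w1, w1, mode="full")
acf2 = np.correlate(w2, w2, mode="full")
print(acf1)                        # [ 6 35 62 35  6]
print(np.array_equal(acf1, acf2))  # True: same autocorrelation
```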

To remove this uncertainty, we define the minimum delay wavelet as the wavelet corresponding to a given autocorrelation which has its dipole components arranged so that the coefficient of greatest magnitude occurs first in each one. If the coefficient of least magnitude occurs first in each dipole component, then the wavelet is said to be of maximum delay. Otherwise, it is of mixed delay. Wavelets whose coefficients differ at most by a complex constant of unit magnitude have the same autocorrelation and are said to be equivalent.

We have shown that for a given $(n+1)$-length wavelet, we can find a unique minimum delay wavelet which gives the same autocorrelation. This suggests that to a given autocorrelation there corresponds only one minimum delay wavelet. Let us now prove this. We start with the z-transform of the given autocorrelation,

$$\Phi(z) = \phi(-n) z^{-n} + \phi(-n+1) z^{-n+1} + \cdots + \phi(0) + \cdots + \phi(n-1) z^{n-1} + \phi(n) z^n. \qquad (1.7.13)$$

Therefore, the polynomial

$$P(z) = z^n\, \Phi(z) = \phi(-n) + \phi(-n+1) z + \cdots + \phi(0) z^n + \cdots + \phi(n-1) z^{2n-1} + \phi(n) z^{2n} \qquad (1.7.14)$$

is of degree $2n$ with $2n$ roots. Because of the Hermitian property

of the autocorrelation, we can write

$$P(z) = \phi(-n) + \phi(-n+1) z + \cdots + \phi(0) z^n + \cdots + \phi(n-1) z^{2n-1} + \phi(n) z^{2n}$$
$$= \phi^*(n) + \phi^*(n-1) z + \cdots + \phi(0) z^n + \cdots + \phi^*(-n+1) z^{2n-1} + \phi^*(-n) z^{2n}$$
$$= z^{2n}\, P^*(1/z^*). \qquad (1.7.15)$$

Factoring $P(z)$ gives

$$P(z) = \phi(n)\, (z - z_1)(z - z_2) \cdots (z - z_{2n})$$
$$= z^{2n}\, P^*(1/z^*) = z^{2n}\, \phi^*(n)\, (1/z - z_1^*) \cdots (1/z - z_{2n}^*)$$
$$= \phi^*(n)\, (1 - z_1^* z)(1 - z_2^* z) \cdots (1 - z_{2n}^* z). \qquad (1.7.16)$$

Hence, for every root $z_j$ there is another root of $P(z)$ at $1/z_j^*$. Thus, $P(z)$ can be broken into the product of two polynomials of degree $n$, one with roots at $z_1, z_2, \ldots, z_n$, the other with roots at $1/z_1^*, 1/z_2^*, \ldots, 1/z_n^*$. It is therefore possible to write

$$\Phi(z) = F(z)\, F^*(1/z^*), \qquad (1.7.17)$$

where $F(z)$ is the polynomial of degree $n$

$$F(z) = f_n\, (z - z_1)(z - z_2) \cdots (z - z_n). \qquad (1.7.18)$$

Then

$$F^*(1/z^*) = f_n^*\, (1/z - z_1^*)(1/z - z_2^*) \cdots (1/z - z_n^*). \qquad (1.7.19)$$

Hence,

$$\Phi(z) = z^{-n}\, P(z) = z^{-n}\, F(z)\, z^n\, F^*(1/z^*) = F(z)\, F^*(1/z^*). \qquad (1.7.20)$$

The factors of $F(z)$ can be arranged in only one minimum delay way, as can be seen from Figure 1.2. In polar form,

$$z_j = r_j e^{i\theta_j}, \qquad z_j^* = r_j e^{-i\theta_j}, \qquad 1/z_j^* = \frac{1}{r_j} e^{i\theta_j}.$$

When in minimum delay form, the factors of $F(z)$ have the form $(z_j - z)$ with $|z_j| > 1$. That is, when $F(z)$ is in minimum delay form, its roots will all lie outside the unit circle. The factors of $z^n F^*(1/z^*)$ have the form $(1 - z_j^* z)$, with roots $1/z_j^*$ lying inside the unit circle. $F(z)$ then becomes the z-transform of the only $(n+1)$-length minimum delay wavelet corresponding to $\Phi(z)$.

Figure 1.2: Roots of $F(z)$ in the complex z-plane.

Shorter wavelets can be minimum delay but do not produce sufficiently long autocorrelations. Longer wavelets, with z-transforms of the form

$$F(z)\, z^p,$$

where $p$ is a positive integer, give the correct autocorrelation, for then

$$\Phi(z) = F(z)\, z^p\, F^*(1/z^*)\, z^{-p} = F(z)\, F^*(1/z^*),$$

but they are not minimum delay, since they contain $p$ of the dipole factors $(0 + 1\cdot z)$ in their transforms. Putting them in minimum delay form would yield $p$ of the factors $(1 + 0\cdot z) = 1$, and the wavelet would be reduced to the $(n+1)$-length minimum delay wavelet. However, it also follows from this that maximum delay wavelets are not uniquely determined by the autocorrelation: longer than $(n+1)$-length maximum or mixed delay wavelets will give the correct autocorrelation.

Where do the terms minimum, maximum and mixed delay originate? Consider the z-transform of a wavelet in minimum delay form,

$$F(z) = C\, (z_1 - z)(z_2 - z) \cdots (z_n - z).$$

Then, by the definition of minimum delay, $|z_1|, |z_2|, \ldots, |z_n| > 1$. The first element of the wavelet will then be $C z_1 z_2 \cdots z_n$, with magnitude $|C|\,|z_1|\,|z_2| \cdots |z_n| > |C|$. In maximum delay form, it will be $C(-1)^n$, with magnitude $|C|$. Thus, the energy build-up is least delayed in a minimum delay wavelet.

Example: Consider the minimum delay wavelet

1. $f_1 = (2, 1) * (3, 2) * (4, 1)$.

The maximum delay wavelet

2. $f_2 = (1, 2) * (2, 3) * (1, 4)$

has the same autocorrelation, as does the mixed delay wavelet

3. $f_3 = (1, 2) * (3, 2) * (4, 1)$.

Their respective z-transforms are

$$F_1(z) = (2 + z)(3 + 2z)(4 + z) = 2z^3 + 15z^2 + 34z + 24,$$
$$F_2(z) = (1 + 2z)(2 + 3z)(1 + 4z) = 24z^3 + 34z^2 + 15z + 2,$$
$$F_3(z) = (1 + 2z)(3 + 2z)(4 + z) = 4z^3 + 24z^2 + 35z + 12,$$

so that $f_1 = (24, 34, 15, 2)$, $f_2 = (2, 15, 34, 24)$ and $f_3 = (12, 35, 24, 4)$.

The energy build-ups (partial sums of squared coefficients) are:

$f_1$:  576, 1732, 1957, 1961
$f_2$:  4, 229, 1385, 1961
$f_3$:  144, 1369, 1945, 1961

Thus, the energy build-up is least delayed in the minimum delay wavelet, most delayed in the maximum delay wavelet, and in between these extremes in the mixed delay wavelet.
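The arithmetic of this example can be reproduced with `np.convolve` (polynomial multiplication) and a cumulative sum. A sketch assuming numpy, with coefficients ordered from the $z^0$ term as in the wavelets themselves:

```python
import numpy as np

def wavelet(*dipoles):
    """Convolve dipole wavelets (coefficients in increasing powers of z)."""
    w = np.array([1])
    for d in dipoles:
        w = np.convolve(w, d)
    return w

f1 = wavelet([2, 1], [3, 2], [4, 1])   # minimum delay
f2 = wavelet([1, 2], [2, 3], [1, 4])   # maximum delay
f3 = wavelet([1, 2], [3, 2], [4, 1])   # mixed delay

for f in (f1, f2, f3):
    print(f, np.cumsum(f ** 2))
# All three partial-energy curves end at 1961; f1's rises fastest,
# f2's slowest, with f3 in between.
```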

Chapter 2

Optimum Filters and Deconvolution

2.1 Linear, Optimum Filtering

Linear filtering is an operation by which we replace a given member of a time sequence by a linear combination of itself and neighbouring members. It has roots in subjects such as econometrics, where the operation is referred to as taking moving averages. We will most often use the jargon of communications theory or electrical engineering. It is known that the output of a linear network can be expressed as the convolution of the impulse response characterizing the network with the input. Without going into this immediately, let us examine linear filtering of discrete, equispaced time sequences in terms of an impulse response deterministic wavelet $g_j$ convolved with an input sequence $f_j$ to give an output sequence $h_j$, where

$$h_j = \sum_{k=0}^{N} g_k\, f_{j-k}. \qquad (2.1.1)$$

Here we take $g_j$ to be

$$\ldots, 0, 0, \underset{\uparrow}{g_0}, g_1, \ldots, g_N, 0, \ldots,$$

a wavelet of finite length $(N+1)$. The objective of optimum linear filtering is to make $h_j$ as close as possible to some desired sequence $d_j$. To do this, we form the error sequence

$$\epsilon_j = d_j - h_j. \qquad (2.1.2)$$

Notice that error in time sequence analysis is curiously defined with opposite sign compared to conventional usage in physics. Obviously, we want to minimize the error. Optimum Wiener filters do this in the minimum square sense. That is, we minimize the error power

$$E\{\epsilon_j \epsilon_j^*\} = E\{(d_j - h_j)(d_j^* - h_j^*)\} = E\left\{\left(d_j - \sum_{k=0}^{N} g_k f_{j-k}\right)\left(d_j^* - \sum_{l=0}^{N} g_l^* f_{j-l}^*\right)\right\}$$
$$= E\{d_j d_j^*\} - E\left\{d_j \sum_{l=0}^{N} g_l^* f_{j-l}^*\right\} - E\left\{d_j^* \sum_{k=0}^{N} g_k f_{j-k}\right\} + E\left\{\sum_{k=0}^{N} g_k f_{j-k} \sum_{l=0}^{N} g_l^* f_{j-l}^*\right\}. \qquad (2.1.3)$$

How can the error power be altered? Since we wish the input time sequence to be arbitrary, we have only the real and imaginary parts of $g_0, \ldots, g_N$ to adjust. We know that when all partial derivatives of the error power with respect to $\mathrm{Re}\, g_m$, $\mathrm{Im}\, g_m$ vanish, it will be an extremum. Since there obviously can be no maximum to error power (or to how badly the filter performs),

the extremum so obtained must be a minimum. Thus,

$$\frac{\partial E\{\epsilon_j \epsilon_j^*\}}{\partial\, \mathrm{Re}\, g_m} = E\left\{-d_j \sum_{l=0}^{N} \delta_l^m f_{j-l}^* - d_j^* \sum_{k=0}^{N} \delta_k^m f_{j-k}\right\} + E\left\{\sum_{k=0}^{N} \sum_{l=0}^{N} \left(\delta_k^m f_{j-k}\, g_l^* f_{j-l}^* + g_k f_{j-k}\, \delta_l^m f_{j-l}^*\right)\right\}$$
$$= E\{-d_j f_{j-m}^* - d_j^* f_{j-m}\} + E\left\{\sum_{k=0}^{N} \left(f_{j-m}\, g_k^* f_{j-k}^* + g_k f_{j-k}\, f_{j-m}^*\right)\right\} = 0 \qquad (2.1.4)$$

and

$$\frac{1}{i}\, \frac{\partial E\{\epsilon_j \epsilon_j^*\}}{\partial\, \mathrm{Im}\, g_m} = E\left\{d_j f_{j-m}^* - d_j^* f_{j-m} + \sum_{k=0}^{N} \left(f_{j-m}\, g_k^* f_{j-k}^* - g_k f_{j-k}\, f_{j-m}^*\right)\right\} = 0 \qquad (2.1.5)$$

for all $m$, $m = 0, 1, \ldots, N$. Adding these two equations, we get

$$E\left\{-2 d_j^* f_{j-m} + 2 f_{j-m} \sum_{k=0}^{N} g_k^* f_{j-k}^*\right\} = 0 \qquad (2.1.6)$$

or

$$\sum_{k=0}^{N} g_k^*\, E\{f_{j-m}\, f_{j-k}^*\} = E\{d_j^*\, f_{j-m}\}, \qquad m = 0, 1, \ldots, N. \qquad (2.1.7)$$

On taking complex conjugates, the system of conditional equations becomes

$$\sum_{k=0}^{N} g_k\, E\{f_{j-k}\, f_{j-m}^*\} = E\{d_j\, f_{j-m}^*\}, \qquad (2.1.8)$$

for $m = 0, 1, \ldots, N$. Now $E\{f_{j-k} f_{j-m}^*\} = \phi_{ff}(m - k)$ and $E\{d_j f_{j-m}^*\} = \phi_{df}(m)$ by definition. Therefore, the equations for the optimum Wiener filter read

$$\sum_{k=0}^{N} g_k\, \phi_{ff}(m - k) = \phi_{df}(m), \qquad m = 0, 1, \ldots, N. \qquad (2.1.9)$$

In matrix form, these equations are

$$\begin{pmatrix} \phi_{ff}(0) & \phi_{ff}(-1) & \cdots & \phi_{ff}(-N) \\ \phi_{ff}(1) & \phi_{ff}(0) & \cdots & \phi_{ff}(-N+1) \\ \vdots & & & \vdots \\ \phi_{ff}(N) & \phi_{ff}(N-1) & \cdots & \phi_{ff}(0) \end{pmatrix} \begin{pmatrix} g_0 \\ g_1 \\ \vdots \\ g_N \end{pmatrix} = \begin{pmatrix} \phi_{df}(0) \\ \phi_{df}(1) \\ \vdots \\ \phi_{df}(N) \end{pmatrix}. \qquad (2.1.10)$$

Notice that we would get the same result if $f_j$ and $d_j$ were energy signals. Let us now consider some specific examples.

2.2 Prediction and Prediction Error Filters

Suppose we wanted to predict a time sequence one unit ahead from the previous $N$ values. That is, we want to know $f_j$ from $f_{j-1}, f_{j-2}, \ldots, f_{j-N}$. The prediction is then

$$h_j = \sum_{k=1}^{N} g_k\, f_{j-k}. \qquad (2.2.1)$$

By our previous reasoning, the equations which minimize the error power are

$$\sum_{k=1}^{N} g_k\, \phi_{ff}(m - k) = \phi_{df}(m), \qquad m = 1, \ldots, N. \qquad (2.2.2)$$

Notice that the summation starts at $k = 1$ and that there is one less equation, since the point to be predicted cannot be included in the calculation of the prediction. The conditional equations have the matrix form

$$\begin{pmatrix} \phi_{ff}(0) & \cdots & \phi_{ff}(-N+1) \\ \phi_{ff}(1) & \cdots & \phi_{ff}(-N+2) \\ \vdots & & \vdots \\ \phi_{ff}(N-1) & \cdots & \phi_{ff}(0) \end{pmatrix} \begin{pmatrix} g_1 \\ \vdots \\ g_N \end{pmatrix} = \begin{pmatrix} \phi_{df}(1) \\ \vdots \\ \phi_{df}(N) \end{pmatrix}. \qquad (2.2.3)$$

We can say something about the crosscorrelation $\phi_{df}(j)$ between the desired output and the input, because we know $d_j$ to be simply $f_j$, the perfect prediction of $f_j$ from past values. Thus

$$\phi_{df}(j) = E\{d_k\, f_{k-j}^*\} = E\{f_k\, f_{k-j}^*\} = \phi_{ff}(j). \qquad (2.2.4)$$

Hence, the unit prediction equations become

$$\begin{pmatrix} \phi_{ff}(0) & \cdots & \phi_{ff}(-N+1) \\ \vdots & & \vdots \\ \phi_{ff}(N-1) & \cdots & \phi_{ff}(0) \end{pmatrix} \begin{pmatrix} g_1 \\ \vdots \\ g_N \end{pmatrix} = \begin{pmatrix} \phi_{ff}(1) \\ \vdots \\ \phi_{ff}(N) \end{pmatrix}. \qquad (2.2.5)$$
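The Toeplitz system (2.2.5) can be solved directly once the autocorrelations are known. A sketch assuming numpy, using the autocorrelation $\phi_{ff}(k) = a^{|k|}$ of a first-order autoregressive process (an assumed illustration), for which the optimum one-step predictor should weight only the most recent value:

```python
import numpy as np

a = 0.9
N = 2
phi = a ** np.abs(np.arange(N + 1))          # phi_ff(0..N) = 1, a, a^2

# Toeplitz system (2.2.5): rows m = 1..N, columns k = 1..N
A = np.array([[phi[abs(m - k)] for k in range(1, N + 1)]
              for m in range(1, N + 1)])
g = np.linalg.solve(A, phi[1:])
print(g)            # approximately [0.9, 0]: only the last value matters

# Prediction error power, cf. eq. (2.2.9): P = phi(0) - sum_k g_k phi(k)
P = phi[0] - g @ phi[1:]
print(P)            # approximately 0.19
```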

Now consider the error sequence of the prediction, or the prediction error,

$$\epsilon_j = d_j - h_j = f_j - \sum_{k=1}^{N} g_k\, f_{j-k} = \sum_{k=0}^{N} \gamma_k\, f_{j-k}, \qquad (2.2.6)$$

where $\gamma_0 = 1, \gamma_1 = -g_1, \ldots, \gamma_N = -g_N$. Therefore, the filter $(1, -g_1, \ldots, -g_N)$ convolved with the sequence $f_j$ produces the prediction error sequence directly. It is called the prediction error filter. The prediction error filter is obtained from the unit prediction filter by taking the first element as unity and switching the signs of the subsequent elements.

What is the prediction error power of an $(N+1)$-length prediction error filter $(1, \gamma_1, \ldots, \gamma_N)$? It is

$$P_{N+1} = E\{\epsilon_j \epsilon_j^*\} = E\left\{\left(f_j - \sum_{k=1}^{N} g_k f_{j-k}\right)\left(f_j^* - \sum_{l=1}^{N} g_l^* f_{j-l}^*\right)\right\}$$
$$= E\{f_j f_j^*\} - E\left\{f_j \sum_{l=1}^{N} g_l^* f_{j-l}^*\right\} - E\left\{f_j^* \sum_{k=1}^{N} g_k f_{j-k}\right\} + E\left\{\sum_{k=1}^{N} g_k f_{j-k} \sum_{l=1}^{N} g_l^* f_{j-l}^*\right\}$$
$$= \phi_{ff}(0) - \sum_{l=1}^{N} g_l^*\, E\{f_j f_{j-l}^*\} - \sum_{k=1}^{N} g_k\, E\{f_j^* f_{j-k}\} + \sum_{k=1}^{N} \sum_{l=1}^{N} g_k g_l^*\, E\{f_{j-k} f_{j-l}^*\}. \qquad (2.2.7)$$

Finally,

$$P_{N+1} = \phi_{ff}(0) - \sum_{l=1}^{N} g_l^*\, \phi_{ff}(l) - \sum_{k=1}^{N} g_k\, \phi_{ff}^*(k) + \sum_{k=1}^{N} \sum_{l=1}^{N} g_k g_l^*\, \phi_{ff}(l - k). \qquad (2.2.8)$$

Making use of the equations for the filter coefficients $g_1, \ldots, g_N$, which give $\sum_{k=1}^{N} g_k \phi_{ff}(l - k) = \phi_{ff}(l)$, the expression for the prediction error power $P_{N+1}$ can be reduced to

$$P_{N+1} = E\{\epsilon_j \epsilon_j^*\} = \phi_{ff}(0) - \sum_{l=1}^{N} g_l^*\, \phi_{ff}(l) - \sum_{k=1}^{N} g_k\, \phi_{ff}^*(k) + \sum_{l=1}^{N} g_l^*\, \phi_{ff}(l)$$
$$= \phi_{ff}(0) - \sum_{k=1}^{N} g_k\, \phi_{ff}^*(k) = \phi_{ff}(0) + \sum_{k=1}^{N} \gamma_k\, \phi_{ff}^*(k).$$

Thus,

$$P_{N+1} = \sum_{k=0}^{N} \gamma_k\, \phi_{ff}^*(k) = \sum_{k=0}^{N} \gamma_k\, \phi_{ff}(-k). \qquad (2.2.9)$$

Writing the unit prediction equations in terms of the prediction error coefficients, we have

$$\sum_{k=0}^{N} \gamma_k\, \phi_{ff}(m - k) = 0, \qquad m = 1, \ldots, N. \qquad (2.2.10)$$

Augmenting these equations with the expression for the prediction error, we finally have the prediction error equations

$$\begin{pmatrix} \phi_{ff}(0) & \cdots & \phi_{ff}(-N) \\ \vdots & & \vdots \\ \phi_{ff}(N) & \cdots & \phi_{ff}(0) \end{pmatrix} \begin{pmatrix} 1 \\ \gamma_1 \\ \vdots \\ \gamma_N \end{pmatrix} = \begin{pmatrix} P_{N+1} \\ 0 \\ \vdots \\ 0 \end{pmatrix}. \qquad (2.2.11)$$

2.3 Removal of Ghosts and Reverberations

Up to now, we have been building up a body of basic theory in the treatment of equispaced, digital time sequences. We wish now to proceed to an extensive discussion of applications. Although this is not intended to be a course in seismology, the examples will often be from geophysics, my own specialty. Little imagination should be required to see applications to other fields.

In exploration seismology, explosive charges are set off and the seismic waves so generated are reflected from sedimentary, layered structures beneath the Earth's surface, and above the non-sedimentary bedrock or basement. Nearly always, there is a surface layer of unconsolidated, weathered material, sometimes several hundred metres thick. In order to get strong signals in the consolidated sediments, it is usual to drill below the surface layer and set off the charge right in the consolidated material. Reflections from the bottom of the unconsolidated, weathered layer create ghosts which mask the primary reflections which one wishes to see.

It is clear that the ghost wavelet will have a similar shape to the primary wavelet, but will be attenuated because it has travelled further and because it has suffered imperfect reflection from the weathered layer. It will also be delayed in time by

twice the time taken to travel from the shot to the weathered layer. The situation is illustrated in Figure 2.1.

Figure 2.1: Multiple reflections and ghosts.

Thus, if we represent the primary by a spike of unit magnitude at time index zero, the ghost will be a spike of amplitude $k$ at time index $\tau$. Here $\tau$ is twice the travel time from the shot to the weathered layer, and $k < 1$ because the waves suffer attenuation and imperfect reflection. The primary and ghost can then be represented by the minimum delay wavelet $(1, k)$, with the unit element at time index zero and the element $k$ at time index $\tau$. In general, $\tau$ will not be an integral number of the time units being used. Thus, the z-transform of the wavelet will be $1 + kz^\tau$.

If we pass the received signals through a filter with z-transform $F(z)$, we will eliminate ghosts if $F(z)$ is such that

$$(1 + kz^\tau)\, F(z) = 1. \qquad (2.3.1)$$

Remember that linear filtering is a convolution process. Our filter then must turn the primary signal and ghost into a unit spike with z-transform unity.

The z-transform of the required filter is therefore

$$F(z) = \frac{1}{1 + kz^\tau}. \qquad (2.3.2)$$

Since the filter is linear, we can superimpose effects, and this filter will not only remove the first ghost but all others as well. A block diagram of the deghosting filter as a negative feedback system is shown in Figure 2.2.

Figure 2.2: Block diagram of the deghosting filter as a negative feedback system.

By the binomial theorem,

$$F(z) = (1 + kz^\tau)^{-1} = 1 - kz^\tau + k^2 z^{2\tau} - k^3 z^{3\tau} + \cdots. \qquad (2.3.3)$$

The impulse response of the deghosting filter, for time units $\tau$ long, has the form

$$(\underset{\uparrow}{1}, -k, k^2, -k^3, k^4, \ldots). \qquad (2.3.4)$$

In this simple example, we might take $\tau$ to be an integral number of the time units being used, and the filter design is quite simple. In practice, deghosting is a little more complicated and we will return to it later.
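The negative feedback form suggests a direct recursive implementation, $y_t = x_t - k\, y_{t-\tau}$. A sketch assuming numpy and an integral $\tau$ (the values of $k$ and $\tau$ are arbitrary illustrations), applied to a primary-plus-ghost input:

```python
import numpy as np

def deghost(x, k, tau):
    """Recursive deghosting: y_t = x_t - k * y_{t - tau}."""
    y = np.zeros(len(x))
    for t in range(len(x)):
        y[t] = x[t] - (k * y[t - tau] if t >= tau else 0.0)
    return y

k, tau = 0.5, 3
x = np.zeros(12)
x[0], x[tau] = 1.0, k        # primary spike plus its ghost
print(deghost(x, k, tau))    # a unit spike: the ghost is removed
```

By superposition, the same recursion removes the ghosts of an arbitrary train of primaries.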

Let us now turn to another problem of exploration seismology. In marine exploration, charges are set off just under the sea surface. The sea serves as a wave guide, which results in reverberations. The situation is illustrated in Figure 2.3.

Figure 2.3: Illustration of the problem of reverberations in seismic exploration at sea.

The air-water surface is a near perfect reflector with reflection coefficient $-1$, as is the water-bottom interface (reflection coefficient $0 < k < 1$). The reverberations interfere with reflections from deep horizons. Suppose the receiving transducer measures only downgoing motions. The wavelet then appears as

$$(\underset{\uparrow}{1}, -k, k^2, -k^3, \ldots),$$

where the time interval between elements of the wavelet is twice the travel time through the sea. $k$ is positive, since the sea bottom appears quite rigid compared to the seawater, but it is less than unity since some energy penetrates the deep layers of the sea bottom.

The z-transform of the received wavelet is then

$$1 - kz^\tau + k^2 z^{2\tau} - k^3 z^{3\tau} + \cdots = \frac{1}{1 + kz^\tau},$$

using the rule for summing an infinite geometric progression ($S = \frac{a}{1 - r}$, $|r| < 1$). To eliminate reverberations, we would need to pass the signal through a filter with z-transform

$$F(z) = 1 + kz^\tau. \qquad (2.3.5)$$

Hence, we see that the problem of the removal of reverberations is just the opposite of the deghosting problem. As a check on the filter, the received signal is the wavelet $(\underset{\uparrow}{1}, -k, k^2, -k^3, \ldots)$. Convolving it with the impulse response of the filter, $(\underset{\uparrow}{1}, k)$, we have

$$(\underset{\uparrow}{1}, -k, k^2, -k^3, \ldots) * (\underset{\uparrow}{1}, k) = (\underset{\uparrow}{1}, -k + k, k^2 - k^2, -k^3 + k^3, \ldots) = (\underset{\uparrow}{1}, 0, 0, \ldots). \qquad (2.3.6)$$

Hence, the filter for removal of reverberations turns the incoming signal into a spike, the primary signal. Since the incoming signal can be regarded as the convolution of the primary signal with the impulse response of the sea and sediments, $(\underset{\uparrow}{1}, -k, k^2, -k^3, \ldots)$, we have produced a very simple deconvolution filter to recover the primary signal.

Notice that, in this example, we took account only of the downgoing effects of the water layer. In fact, the received signal has to come up through the same system, so that the z-transform of the received signal is actually

$$\left(1 - kz^\tau + k^2 z^{2\tau} - k^3 z^{3\tau} + \cdots\right)^2 = \frac{1}{(1 + kz^\tau)^2}. \qquad (2.3.7)$$
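The check (2.3.6) is readily reproduced in code by truncating the received wavelet after a few terms. A sketch assuming numpy, with the wavelet sampled at intervals of $\tau$ and an arbitrary illustrative $k$:

```python
import numpy as np

k = 0.4
received = np.array([(-k) ** n for n in range(8)])   # 1, -k, k^2, ...
dereverb = np.convolve(received, [1.0, k])           # filter (1, k)

print(dereverb)
# a spike at index 0; all interior terms cancel, and only the
# truncation tail -k**8 survives at the far end
```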

In the time domain, the received signal is

$$(1, -k, k^2, -k^3, \ldots) * (1, -k, k^2, -k^3, \ldots) = (1, -k - k, k^2 + k^2 + k^2, -k^3 - k^3 - k^3 - k^3, \ldots) = (1, -2k, 3k^2, -4k^3, \ldots). \qquad (2.3.8)$$

The z-transform of the required filter is

$$(1 + kz^{\tau})^2 = 1 + 2kz^{\tau} + k^2 z^{2\tau}. \qquad (2.3.9)$$

Thus, the filtering operation is

$$(1, -2k, 3k^2, -4k^3, \ldots) * (1, 2k, k^2) = (1, -2k + 2k, 3k^2 - 4k^2 + k^2, -4k^3 + 6k^3 - 2k^3, \ldots) = (1, 0, 0, \ldots). \qquad (2.3.10)$$

2.4 Delay and Filter Stability

We saw that the deghosting filter represented a negative feedback system. Now let us consider the connection between the stability of feedback systems and the concepts of minimum, maximum and mixed delay. Consider the feedback system shown in Figure 2.4. What is the impulse response of this system? That of the constant box is $(k, 0, 0, \ldots)$, with z-transform $k$. That of the unit time delay is $(0, 1, 0, \ldots)$, with z-transform $z$. Together they have z-transform $kz$, or impulse response $(0, k, 0, \ldots)$. Thus, the output of the constant box is $ky_t$ since its input is

Figure 2.4: Block diagram of a feedback system.

$y_t$. The input of the unit time delay is $ky_t$, and its output is

$$(0, 1, 0, \ldots) * (ky_0, ky_1, \ldots) = (0, ky_0, ky_1, \ldots) = ky_{t-1}. \qquad (2.4.1)$$

Therefore, $x_t - ky_{t-1} = y_t$ in the time domain. In terms of z-transforms, $Y(z)$ is the output, $X(z)$ is the input, and the output of the feedback branch is $kzY(z)$. Thus,

$$Y(z) = X(z) - kzY(z) \qquad (2.4.2)$$

or

$$X(z) = (1 + kz)\,Y(z). \qquad (2.4.3)$$

The transfer function of the feedback system is defined as the ratio of the z-transform of the output to the z-transform of the input. This is

$$F(z) = \frac{Y(z)}{X(z)} = \frac{1}{1 + kz}. \qquad (2.4.4)$$

Thus $Y(z) = F(z)X(z)$

with $F(z) = 1/(1 + kz)$. The impulse response of the closed feedback system is the inverse of the dipole wavelet $(1, k)$. It is minimum delay provided $|k| < 1$ and maximum delay provided $|k| > 1$. We know that by binomial expansion

$$F(z) = \frac{1}{1 + kz} = 1 - kz + k^2 z^2 - k^3 z^3 + \cdots,$$

provided $|kz| < 1$. Thus, the impulse response is

$$\left(1, -k, k^2, -k^3, k^4, -k^5, \ldots\right).$$

If $|k| < 1$, the impulse response clearly damps out with time. If $|k| > 1$, the impulse response clearly grows indefinitely with time. The feedback loop is therefore stable if $|k| < 1$, unstable if $|k| > 1$. How can we treat unstable systems? This is an important question, since we will find it necessary to do so in future problems. Recall the equation of the closed feedback system in the time domain,

$$x_t - ky_{t-1} = y_t. \qquad (2.4.5)$$

Rearrange it to give

$$y_{t-1} = \frac{x_t - y_t}{k} = \frac{1}{k}x_t - \frac{1}{k}y_t. \qquad (2.4.6)$$

What is the output at time index $t - 1 = 3$? It is

$$y_3 = \frac{1}{k}x_4 - \frac{1}{k}y_4, \qquad (2.4.7)$$

but

$$y_4 = \frac{1}{k}x_5 - \frac{1}{k}y_5 \qquad (2.4.8)$$

or

$$y_3 = \frac{1}{k}x_4 - \frac{1}{k^2}x_5 + \frac{1}{k^2}y_5; \qquad (2.4.9)$$

but

$$y_5 = \frac{1}{k}x_6 - \frac{1}{k}y_6 \qquad (2.4.10)$$

or

$$y_3 = \frac{1}{k}x_4 - \frac{1}{k^2}x_5 + \frac{1}{k^3}x_6 - \frac{1}{k^3}y_6. \qquad (2.4.11)$$

Clearly, we can continue this sequence indefinitely and get

$$y_3 = \frac{1}{k}x_4 - \frac{1}{k^2}x_5 + \frac{1}{k^3}x_6 - \frac{1}{k^4}x_7 + \cdots. \qquad (2.4.12)$$

Now, if $|k| > 1$, this series obviously converges. At what price do we obtain stability in the output? The price is delay, because we need to wait for future input values to determine the output. To determine the output exactly, we would need to have all future input (an impossibility). In practice, the delay need only be finite to obtain an acceptable approximation to the output. This leads to a slight generalization of a linear filter. In general, we will represent a linear filtering operation as the convolution of a set of coefficients $(\ldots, a_{-1}, a_0, a_1, \ldots)$ with the signal $x_j$, so that the output is

$$y_j = \sum_{k=-\infty}^{\infty} a_k x_{j-k}. \qquad (2.4.13)$$

So far, we have considered only one-sided or causal filters, i.e.

$$y_j = \sum_{k=0}^{\infty} a_k x_{j-k}, \qquad (2.4.14)$$

the wavelet $(a_0, a_1, a_2, \ldots)$ being the impulse response of the filter. Note that it is the impulse response, since

$$(a_0, a_1, a_2, \ldots) * (1, 0, 0, \ldots) = (a_0, a_1, a_2, \ldots).$$

Such a filter uses only present and past values of the input, and so is called a memory function. We can consider a second possibility, an entirely acausal filter, i.e.

$$y_j = \sum_{k=-\infty}^{-1} a_k x_{j-k}, \qquad (2.4.15)$$

the antiwavelet $(\ldots, a_{-2}, a_{-1}, 0, 0, \ldots)$ being the impulse response. Such a filter uses only future values of the input, and so is called an anticipation function. In principle, an anticipation filter cannot be used, since all future data are needed for its operation. It is neither physically nor mathematically realizable. In practice, all we have to do is wait a finite length of time for a finite amount of future data to appear. We can then process it with a finite length anticipation filter to get an approximately correct result. Often we are dealing with prerecorded data, and we already have future values.

2.5 Exact or Ideal Inverses and Deconvolution

We now turn to the problem of deconvolution. Let us begin with the simplest case of all, that in which the output is the result of convolution with a two-length dipole filter.

Wavelets may be arbitrarily normalized by dividing by a complex constant, so that if the dipole filter has impulse response $(b_0, b_1)$, we can divide by $b_0$ to obtain the equivalent wavelet $(1, k)$ with $k = b_1/b_0$, where $k$ is complex, in general. Thus, if the input sequence is $x$ giving an output sequence $y$, we have

$$y = b * x = (b_0, b_1) * (\ldots, x_{-1}, x_0, x_1, \ldots),$$

or

$$y_j = b_0 x_j + b_1 x_{j-1}.$$

Normalizing the output with $b_0$ gives

$$y_j = x_j + k x_{j-1}.$$

In the deconvolution problem, we are given the output signal $y_j$ and seek the input signal $x_j$. That is, we wish to find a filter $a_j$ such that

$$a * y = a * (b * x) = (a * b) * x = x. \qquad (2.5.1)$$

Now the unit spike or impulse convolved with any time sequence gives the identical time sequence, or

$$(1, 0, 0, 0, \ldots) * (\ldots, x_{-2}, x_{-1}, x_0, x_1, x_2, \ldots) = (\ldots, x_{-2}, x_{-1}, x_0, x_1, x_2, \ldots), \qquad (2.5.2)$$

so that if we can find $a$ such that $a * b = \delta$, $\delta = (1, 0, 0, 0, \ldots)$, we have solved the deconvolution problem. $a$ will be called the inverse to $b$, or $b^{-1}$. In the present case, in terms of z-transforms, we wish to find a sequence $a$ such that

$$\left(a_0 + a_1 z + a_2 z^2 + \cdots\right)(1 + kz) = 1 \qquad (2.5.3)$$

or

$$a_0 + a_1 z + a_2 z^2 + \cdots = \frac{1}{1 + kz} = 1 - kz + k^2 z^2 - k^3 z^3 + \cdots. \qquad (2.5.4)$$

Thus, $a$ is $(1, -k, k^2, -k^3, \ldots)$. This is a stable inverse to $b$ only for $|k| < 1$. In other words, we can only solve the deconvolution problem in this manner if the output is the product of convolution with a minimum delay filter. If $|k| > 1$, we seek an antiwavelet $(\ldots, a_{-3}, a_{-2}, a_{-1})$ as an inverse to $b$. Using z-transforms,

$$\left(\cdots + a_{-3} z^{-3} + a_{-2} z^{-2} + a_{-1} z^{-1}\right)(1 + kz) = 1 \qquad (2.5.5)$$

or

$$\cdots + a_{-3} z^{-3} + a_{-2} z^{-2} + a_{-1} z^{-1} = \frac{1}{1 + kz} = \frac{1}{kz\left(1 + \frac{1}{kz}\right)} = \frac{1}{kz}\left(1 - \frac{1}{kz} + \frac{1}{k^2 z^2} - \frac{1}{k^3 z^3} + \cdots\right) = \frac{1}{kz} - \frac{1}{k^2 z^2} + \frac{1}{k^3 z^3} - \frac{1}{k^4 z^4} + \cdots. \qquad (2.5.6)$$

Then the inverse to $b$ is the anticipation function, an antiwavelet

$$\left(\ldots, \frac{1}{k^3}, -\frac{1}{k^2}, \frac{1}{k}\right).$$

Let us check that both the memory function inverse to $b$ and the anticipation function inverse to $b$ satisfy the equation $a * b = \delta$.

(i) memory function:

$$\left(1, -k, k^2, -k^3, \ldots\right) * (1, k) = \left(1, -k + k, k^2 - k^2, \ldots\right) = (1, 0, 0, \ldots) = \delta, \quad |k| < 1; \qquad (2.5.7)$$

(ii) anticipation function:

$$\left(\ldots, \frac{1}{k^3}, -\frac{1}{k^2}, \frac{1}{k}, 0\right) * (1, k) = \left(\ldots, -\frac{1}{k^2} + \frac{1}{k^2}, \frac{1}{k} - \frac{1}{k}, k \cdot \frac{1}{k}, 0, \ldots\right) = (\ldots, 0, 0, 1, 0, 0, \ldots) = \delta. \qquad (2.5.8)$$

We have therefore solved the deconvolution problem when the output is the result of convolution with a two-length wavelet, either minimum or maximum delay. Let us now attack the general problem. In terms of z-transforms, we have the output

$$Y(z) = B(z)X(z), \qquad (2.5.9)$$

where $B(z)$ is the z-transform of an arbitrary filter through which the input has passed. We can solve the deconvolution problem if we can find a filter with z-transform of its impulse response $A(z)$ such that

$$A(z)B(z) = 1. \qquad (2.5.10)$$
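Both dipole checks can be reproduced numerically before attacking the general problem. In the sketch below the truncation length $n$ and the values of $k$ are arbitrary choices; it builds the causal inverse for $|k| < 1$ and the anticausal inverse for $|k| > 1$, and in the anticausal case the output spike appears only after $n$ samples, the delay that buys stability:

```python
import numpy as np

n = 60                                    # truncation length of either inverse

def dipole_inverse(k):
    """Truncated stable inverse of (1, k) and the index of its output spike."""
    if abs(k) < 1:                        # causal: (1, -k, k^2, ...)
        return (-k) ** np.arange(n), 0
    j = np.arange(n, 0, -1)               # anticausal: (..., 1/k^3, -1/k^2, 1/k)
    return (-1.0) ** (j + 1) / k ** j, n  # spike delayed by n samples

for k in (0.5, 3.0):
    a, delay = dipole_inverse(k)
    spike = np.convolve(a, [1.0, k])
    print(k, delay, spike[delay])         # spike value ~ 1.0 in both cases
```

The interior terms of both convolutions cancel to machine precision; only truncation tails of magnitude $|k|^{\mp n}$ survive at the ends.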

Let the impulse response of the filter through which $x$ has passed be $(b_0, b_1, \ldots, b_n)$. Then

$$B(z) = b_0 + b_1 z + \cdots + b_n z^n.$$

We can factor $B(z)$ to get

$$B(z) = b_0 (1 + k_1 z)(1 + k_2 z) \cdots (1 + k_n z) \qquad (2.5.11)$$

or

$$b = b_0 (1, k_1) * (1, k_2) * \cdots * (1, k_n). \qquad (2.5.12)$$

The required inverse filter has z-transform

$$A(z) = \frac{1}{b_0 (1 + k_1 z)(1 + k_2 z) \cdots (1 + k_n z)} = \frac{1}{b_0} \cdot \frac{1}{1 + k_1 z} \cdot \frac{1}{1 + k_2 z} \cdots \frac{1}{1 + k_n z}. \qquad (2.5.13)$$

It is therefore the product of the z-transforms of the stable inverses to the dipoles $(1, k_1), (1, k_2), \ldots, (1, k_n)$. If $b$ is minimum delay, then $|k_1|, |k_2|, \ldots, |k_n|$ are all $< 1$. We then have

$$A(z) = \frac{1}{b_0 + b_1 z + \cdots + b_n z^n} = a_0 + a_1 z + \cdots + a_n z^n + \cdots \qquad (2.5.14)$$

by polynomial division. If $b$ is maximum delay, then $|k_1|, |k_2|, \ldots, |k_n|$ are all $> 1$. Then we have

$$A(z) = \frac{1}{b_0 + b_1 z + \cdots + b_n z^n} = \cdots + a_{-3} z^{-3} + a_{-2} z^{-2} + a_{-1} z^{-1}. \qquad (2.5.15)$$

How do we know the inverses in each case to be stable? Or, in other words, how do we know, in each case, that the approximate polynomial division will be convergent?
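Numerically, the $k_j$ can be found as the negative reciprocals of the roots of $B(z)$. A minimal numpy sketch, using the wavelet $b = (6, 11, 4)$ of the worked example below:

```python
import numpy as np

b = np.array([6.0, 11.0, 4.0])       # B(z) = 6 + 11z + 4z^2 = (2 + z)(3 + 4z)

# numpy expects polynomial coefficients with the highest power first
roots = np.roots(b[::-1])            # zeros of B(z): z = -2 and z = -3/4
k = -1.0 / roots                     # B(z) = b0 * prod_j (1 + k_j z)

print(sorted(abs(k)))                # [0.5, 1.333...]: one inside, one outside
```

One $|k_j|$ is below unity and one above, so this $b$ is mixed delay, and its stable exact inverse must be two-sided.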

The answer depends on the Laurent expansion of

$$A(z) = \frac{1}{b_0 + b_1 z + \cdots + b_n z^n}.$$

The Laurent expansion theorem tells us that we can write a series expansion of the form

$$\cdots + c_{-m}(z - z_0)^{-m} + c_{-m+1}(z - z_0)^{-m+1} + \cdots + c_0 + c_1 (z - z_0) + c_2 (z - z_0)^2 + \cdots + c_n (z - z_0)^n + \cdots,$$

where the $c$'s are constants, valid in a circular annulus about $z = z_0$, provided no singularities lie within the annulus. Singularities lying outside the annulus are represented by the coefficients $c_1, c_2, \ldots, c_n, \ldots$, while singularities lying inside the annulus are represented by the coefficients $\ldots, c_{-m}, \ldots, c_{-1}$. Now, consider the Laurent expansion of

$$A(z) = \frac{1}{b_0 (1 + k_1 z)(1 + k_2 z) \cdots (1 + k_n z)} = \frac{1}{b_0} \cdot \frac{1}{1 + k_1 z} \cdot \frac{1}{1 + k_2 z} \cdots \frac{1}{1 + k_n z}$$

around $z_0 = 0$. Suppose $|k_j| < 1$. Then the factor $1/(1 + k_j z)$ represents a singularity, a simple pole, at $z = -1/k_j$, and since $|k_j| < 1$, it lies outside the unit circle. If $|k_l| > 1$, the factor will represent a simple pole at $z = -1/k_l$, inside the unit circle. It will, therefore, always be possible to perform a Laurent expansion of $A(z)$ about the origin of the form

$$A(z) = \cdots + a_{-m} z^{-m} + \cdots + a_0 + a_1 z + \cdots + a_n z^n + \cdots, \qquad (2.5.16)$$

valid in the circular annulus containing the unit circle. If $B(z)$ is minimum delay, all the singularities of $A(z)$ are outside the unit circle; if it is maximum delay, all singularities are inside the unit circle. In general, for mixed delay,

$$a = (\ldots, a_{-m}, \ldots, a_0, a_1, \ldots, a_n, \ldots).$$

For minimum delay $b$,

$$a = (a_0, a_1, \ldots, a_n, \ldots),$$

a wavelet and pure memory function. For maximum delay $b$,

$$a = (\ldots, a_{-m}, \ldots, a_{-1}),$$

an antiwavelet and pure anticipation function. Suppose, for example, that $b = (6, 11, 4)$. Then,

$$B(z) = 6 + 11z + 4z^2 = (2 + z)(3 + 4z).$$

This gives

$$A(z) = \frac{1}{(2 + z)(3 + 4z)}.$$

The first dipole is minimum delay, the second maximum delay. The z-transform of the stable inverse is

$$A(z) = \frac{1}{2\left(1 + \frac{z}{2}\right)} \cdot \frac{1}{4z\left(1 + \frac{3}{4z}\right)} = \left(\frac{1}{2}\left\{1 - \frac{z}{2} + \left(\frac{z}{2}\right)^2 - \left(\frac{z}{2}\right)^3 + \cdots\right\}\right)\left(\frac{1}{4z}\left\{1 - \frac{3}{4z} + \left(\frac{3}{4z}\right)^2 - \left(\frac{3}{4z}\right)^3 + \cdots\right\}\right)$$

$$= \left(\frac{1}{2} - \frac{1}{4}z + \frac{1}{8}z^2 - \frac{1}{16}z^3 + \cdots\right)\left(\cdots - \frac{27}{256 z^4} + \frac{9}{64 z^3} - \frac{3}{16 z^2} + \frac{1}{4z}\right).$$

Thus,

$$a = \left(\frac{1}{2}, -\frac{1}{4}, \frac{1}{8}, -\frac{1}{16}, \ldots\right) * \left(\ldots, -\frac{27}{256}, \frac{9}{64}, -\frac{3}{16}, \frac{1}{4}, 0\right).$$
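Multiplying the two truncated expansions and convolving with $b$ confirms that this two-sided $a$ reproduces the unit spike to within truncation error (the length $N$ is an arbitrary choice):

```python
import numpy as np

N = 40
b = np.array([6.0, 11.0, 4.0])                 # (2 + z)(3 + 4z)

# Causal inverse of (2 + z): (1/2)(1 - z/2 + z^2/4 - ...)
causal = 0.5 * (-0.5) ** np.arange(N)          # coefficients of z^0 .. z^(N-1)

# Anticausal inverse of (3 + 4z): 1/(4z) - 3/(16 z^2) + 9/(64 z^3) - ...
j = np.arange(N, 0, -1)                        # exponents of z^-j, j = N .. 1
anticausal = (-1.0) ** (j + 1) * 3.0 ** (j - 1) / 4.0 ** j

a = np.convolve(causal, anticausal)            # mixed-delay inverse; index N is time 0
spike = np.convolve(a, b)
print(spike[N])                                # ~ 1, up to a (3/4)^N truncation tail
```

The residual is dominated by the slower $(3/4)^N$ tail of the anticausal factor, which is exactly the convergence question addressed by the Laurent expansion above.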

Notice the practical problem we get into in trying to construct an exact deconvolution filter. In general, the filter has to be of infinite length, even when $b$ is either minimum or maximum delay. Let us now consider approximate deconvolution filters.

2.6 Approximate Inverses and Deconvolution

We begin with the simplest case of all, that where $b = (1, k)$, $|k| < 1$. We found the exact inverse to be the wavelet

$$a = \left(1, -k, k^2, -k^3, \ldots\right). \qquad (2.6.1)$$

How do we construct an approximate inverse? Suppose we truncate $a$ to the unit length wavelet $a = (1)$. Then,

$$a * b = (1) * (1, k) = (1, k),$$

but it should be $(1, 0)$. The error wavelet is therefore

$$\epsilon = (0, -k) = (1, 0) - (1, k).$$

The latter form shows the error wavelet as the desired unit impulse less the actual wavelet produced by the convolution of $a$ with $b$. The error energy is $|-k|^2 = |k|^2$. Since $k$ is predetermined, there is no way of adjusting the error energy. Suppose we truncate $a$ to the two length wavelet $a = (1, -k)$. Then

$$a * b = (1, -k) * (1, k) = \left(1, 0, -k^2\right)$$

and

$$\epsilon = (1, 0, 0) - \left(1, 0, -k^2\right) = \left(0, 0, k^2\right),$$

and the error energy is $|k^2|^2 = |k|^4$. Since $|k| < 1$, the error energy is reduced from the unit length case. In general, we will find that the $(n + 1)$ length wavelet $a = \left(1, -k, k^2, -k^3, \ldots, (-k)^n\right)$ gives error energy $|k|^{2(n+1)}$. We now ask, is there a better way of constructing approximate inverses than simple truncation?
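The geometric decrease of the truncation error energy can be tabulated directly ($k = 0.6$ is an arbitrary illustrative value):

```python
import numpy as np

k = 0.6
for n in range(5):
    a = (-k) ** np.arange(n + 1)        # truncated inverse (1, -k, ..., (-k)^n)
    c = np.convolve(a, [1.0, k])        # approximation to the unit spike
    delta = np.zeros(len(c))
    delta[0] = 1.0
    err = np.sum(np.abs(delta - c) ** 2)
    print(n, err, k ** (2 * (n + 1)))   # the last two columns agree
```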

Let us suppose that we try to use $a = (a_0)$ as the unit length inverse, where we can adjust $a_0$. Then

$$a * b = (a_0) * (1, k) = (a_0, a_0 k)$$

and

$$\epsilon = (1, 0) - (a_0, a_0 k) = (1 - a_0, -a_0 k).$$

The error energy is, for real quantities,

$$I = (1 - a_0)^2 + (-a_0 k)^2 = 1 + a_0^2 - 2a_0 + a_0^2 k^2. \qquad (2.6.2)$$

The autocorrelation of $b$ is

$$\phi_{bb} = (1, k) * (k^*, 1) = (k^*, 1 + kk^*, k) = (r_{-1}, r_0, r_1),$$

where $r_{-1} = r_1^*$. Thus, in the general complex case,

$$I = (1 - a_0)(1 - a_0^*) + (-a_0 k)(-a_0^* k^*) = 1 - a_0 - a_0^* + a_0 a_0^* (1 + kk^*) = 1 - a_0 - a_0^* + a_0 a_0^* r_0. \qquad (2.6.3)$$

Since $a_0$ is adjustable, and in general complex, we can minimize the error energy by satisfying

$$\frac{\partial I}{\partial\, \mathrm{Rl}\, a_0} = \frac{\partial I}{\partial\, \mathrm{Im}\, a_0} = 0.$$

Thus,

$$\frac{\partial I}{\partial\, \mathrm{Rl}\, a_0} = -2 + a_0^* r_0 + a_0 r_0, \qquad \frac{\partial I}{\partial\, \mathrm{Im}\, a_0} = -i + i + i a_0^* r_0 - i a_0 r_0,$$

and

$$a_0^* r_0 + a_0 r_0 = 2, \qquad a_0^* r_0 - a_0 r_0 = 0 \qquad (2.6.4)$$

minimize $I$ (there is no maximum). Adding, we get $2 a_0^* r_0 = 2$, or

$$a_0 = \frac{1}{r_0}.$$

With this value of $a_0$,

$$I = 1 - \frac{1}{r_0} - \frac{1}{r_0} + \frac{r_0}{r_0 r_0}$$

or

$$I = \frac{r_0 - 1}{r_0} = \frac{|k|^2}{1 + |k|^2},$$

since $r_0$ is real. The error energy for the unit length truncated inverse was

$$|k|^2 > \frac{|k|^2}{1 + |k|^2}.$$

Hence, we have improved our result over simple truncation. Now consider the two length approximate inverse. We write $a = (a_0, a_1)$. Then

$$a * b = (a_0, a_1) * (1, k) = (a_0, a_0 k + a_1, a_1 k)$$

and

$$\epsilon = (1, 0, 0) - (a_0, a_0 k + a_1, a_1 k) = (1 - a_0, -(a_0 k + a_1), -a_1 k).$$

The error energy is

$$I = (1 - a_0)(1 - a_0^*) + (a_0 k + a_1)(a_0^* k^* + a_1^*) + a_1 k a_1^* k^*.$$

Recall that

$$\phi_{bb} = (1, k) * (k^*, 1) = (k^*, 1 + kk^*, k) = (r_{-1}, r_0, r_1).$$

We can therefore write

$$I = 1 - a_0 - a_0^* + a_0 a_0^* r_0 + a_1 a_1^* r_0 + a_0^* a_1 r_{-1} + a_0 a_1^* r_1 = 1 - a_0 - a_0^* + a_0^* a_1 r_{-1} + (a_0 a_0^* + a_1 a_1^*) r_0 + a_0 a_1^* r_1. \qquad (2.6.5)$$

We are now able to adjust the real and imaginary parts of both $a_0$ and $a_1$. The error energy is a minimum when the four partial derivatives vanish:

$$\frac{\partial I}{\partial\, \mathrm{Rl}\, a_0} = -2 + a_1 r_{-1} + (a_0 + a_0^*) r_0 + a_1^* r_1 = 0,$$

$$\frac{\partial I}{\partial\, \mathrm{Im}\, a_0} = -i + i - i a_1 r_{-1} + i (a_0^* - a_0) r_0 + i a_1^* r_1 = 0,$$

$$\frac{\partial I}{\partial\, \mathrm{Rl}\, a_1} = a_0^* r_{-1} + (a_1 + a_1^*) r_0 + a_0 r_1 = 0,$$

$$\frac{\partial I}{\partial\, \mathrm{Im}\, a_1} = i a_0^* r_{-1} + i (a_1^* - a_1) r_0 - i a_0 r_1 = 0.$$

Then

$$\frac{1}{2}\left(\frac{\partial I}{\partial\, \mathrm{Rl}\, a_0} + i \frac{\partial I}{\partial\, \mathrm{Im}\, a_0}\right) = 0 \quad \text{gives} \quad a_1 r_{-1} + a_0 r_0 = 1,$$

$$\frac{1}{2}\left(\frac{\partial I}{\partial\, \mathrm{Rl}\, a_1} + i \frac{\partial I}{\partial\, \mathrm{Im}\, a_1}\right) = 0 \quad \text{gives} \quad a_1 r_0 + a_0 r_1 = 0.$$

Then

$$\begin{pmatrix} r_0 & r_{-1} \\ r_1 & r_0 \end{pmatrix} \begin{pmatrix} a_0 \\ a_1 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \end{pmatrix}. \qquad (2.6.6)$$

This gives two complex equations in two complex unknowns, equivalent to the four real conditions for the minimization of the error energy.
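For a real wavelet, the system (2.6.6), and its longer generalizations, can be assembled from the autocorrelation and solved directly. A minimal numpy sketch (the function name and the test values $b = (1, k)$, $k = 0.6$ are illustrative); it reproduces the closed forms (2.6.9) and (2.6.10) derived below:

```python
import numpy as np

def ls_inverse(b, N):
    """(N + 1)-length minimum error energy inverse of a real wavelet b."""
    b = np.asarray(b, dtype=float)
    n = len(b) - 1
    r = np.correlate(b, b, mode="full")      # autocorrelation at lags -n .. n
    def rr(lag):                             # r_lag, zero beyond the wavelet length
        return r[n + lag] if abs(lag) <= n else 0.0
    R = np.array([[rr(i - j) for j in range(N + 1)] for i in range(N + 1)])
    rhs = np.zeros(N + 1)
    rhs[0] = 1.0                             # right-hand side (1, 0, ..., 0)
    return np.linalg.solve(R, rhs)

k = 0.6
a0, a1 = ls_inverse([1.0, k], 1)
d = 1 + k**2 + k**4
print(a0, (1 + k**2) / d)                    # the two agree
print(a1, -k / d)                            # likewise
print(1 - a0)                                # minimum error energy k^4 / d
```

Since the coefficient matrix is Toeplitz, a production version would use the Levinson recursion of the next section rather than a general dense solve.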

In an obvious generalization, the $(N + 1)$ length minimum error energy inverse satisfies

$$\begin{pmatrix} r_0 & r_{-1} & \cdots & r_{-N} \\ r_1 & r_0 & \cdots & r_{-N+1} \\ \vdots & \vdots & & \vdots \\ r_N & r_{N-1} & \cdots & r_0 \end{pmatrix} \begin{pmatrix} a_0 \\ a_1 \\ \vdots \\ a_N \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix}. \qquad (2.6.7)$$

Notice that these equations are identical in form to the prediction error equations for unit prediction we derived previously. The error energy for the two length minimum error energy inverse is

$$I = 1 - a_0 - a_0^* + a_0 a_0^* r_0 + a_1 a_1^* r_0 + a_0^* a_1 r_{-1} + a_0 a_1^* r_1 = 1 - a_0 - a_0^* + a_0^* (a_0 r_0 + a_1 r_{-1}) + a_1^* (a_1 r_0 + a_0 r_1) = 1 - a_0, \qquad (2.6.8)$$

after using the equations for the minimum error energy values of $a_0$, $a_1$. Solving for $a_0$, $a_1$, we get

$$a_0 = \frac{\begin{vmatrix} 1 & r_{-1} \\ 0 & r_0 \end{vmatrix}}{\begin{vmatrix} r_0 & r_{-1} \\ r_1 & r_0 \end{vmatrix}} = \frac{r_0}{r_0^2 - r_1 r_{-1}} = \frac{1 + |k|^2}{\left(1 + |k|^2\right)^2 - |k|^2} = \frac{1 + |k|^2}{1 + |k|^2 + |k|^4}, \qquad (2.6.9)$$

and

$$a_1 = \frac{\begin{vmatrix} r_0 & 1 \\ r_1 & 0 \end{vmatrix}}{r_0^2 - r_1 r_{-1}} = \frac{-r_1}{r_0^2 - r_1 r_{-1}} = \frac{-k}{1 + |k|^2 + |k|^4}. \qquad (2.6.10)$$

Thus,

$$I_{\min} = 1 - a_0 = \frac{|k|^4}{1 + |k|^2 + |k|^4}.$$

The truncated two length inverse gave $I = |k|^4$. Hence, for $|k| < 1$, we are doing considerably better. Compare $a_0$ to $a_1$. We get

$$\left|\frac{a_0}{a_1}\right| = \frac{1 + |k|^2}{|k|} = \frac{1}{|k|} + |k| > 1$$

for either $|k| > 1$ or $|k| < 1$. Thus, $a = (a_0, a_1)$ is minimum delay whether or not $b = (1, k)$ is. In fact, our approximate deconvolution procedure can be extended to wavelets of arbitrary length. Let us suppose we have a signal which has passed through a filter with impulse response $b = (b_0, b_1, \ldots, b_n)$, and we wish to recover the original signal by passing it through the deconvolution filter $a = (a_0, a_1, \ldots, a_m)$. The output is then the approximation $c = (c_0, c_1, \ldots, c_{m+n})$ to the unit spike $\delta = (1, 0, 0, \ldots, 0)$. The error signal is then $\epsilon = (1 - c_0, -c_1, \ldots, -c_{m+n})$.


More information

Lecture 19 IIR Filters

Lecture 19 IIR Filters Lecture 19 IIR Filters Fundamentals of Digital Signal Processing Spring, 2012 Wei-Ta Chu 2012/5/10 1 General IIR Difference Equation IIR system: infinite-impulse response system The most general class

More information

Quantum Mechanics-I Prof. Dr. S. Lakshmi Bala Department of Physics Indian Institute of Technology, Madras. Lecture - 21 Square-Integrable Functions

Quantum Mechanics-I Prof. Dr. S. Lakshmi Bala Department of Physics Indian Institute of Technology, Madras. Lecture - 21 Square-Integrable Functions Quantum Mechanics-I Prof. Dr. S. Lakshmi Bala Department of Physics Indian Institute of Technology, Madras Lecture - 21 Square-Integrable Functions (Refer Slide Time: 00:06) (Refer Slide Time: 00:14) We

More information

Solutions Parabola Volume 49, Issue 2 (2013)

Solutions Parabola Volume 49, Issue 2 (2013) Parabola Volume 49, Issue (013) Solutions 1411 140 Q1411 How many three digit numbers are there which do not contain any digit more than once? What do you get if you add them all up? SOLUTION There are

More information

Analog LTI system Digital LTI system

Analog LTI system Digital LTI system Sampling Decimation Seismometer Amplifier AAA filter DAA filter Analog LTI system Digital LTI system Filtering (Digital Systems) input output filter xn [ ] X ~ [ k] Convolution of Sequences hn [ ] yn [

More information

Chapter 11 - Sequences and Series

Chapter 11 - Sequences and Series Calculus and Analytic Geometry II Chapter - Sequences and Series. Sequences Definition. A sequence is a list of numbers written in a definite order, We call a n the general term of the sequence. {a, a

More information

Synopsis of Complex Analysis. Ryan D. Reece

Synopsis of Complex Analysis. Ryan D. Reece Synopsis of Complex Analysis Ryan D. Reece December 7, 2006 Chapter Complex Numbers. The Parts of a Complex Number A complex number, z, is an ordered pair of real numbers similar to the points in the real

More information

Maximum Length Linear Feedback Shift Registers

Maximum Length Linear Feedback Shift Registers Maximum Length Linear Feedback Shift Registers (c) Peter Fischer Institute for Computer Engineering (ZITI) Heidelberg University, Germany email address: peterfischer@zitiuni-heidelbergde February 23, 2018

More information

MATH Mathematics for Agriculture II

MATH Mathematics for Agriculture II MATH 10240 Mathematics for Agriculture II Academic year 2018 2019 UCD School of Mathematics and Statistics Contents Chapter 1. Linear Algebra 1 1. Introduction to Matrices 1 2. Matrix Multiplication 3

More information

1 First-order di erence equation

1 First-order di erence equation References Hamilton, J. D., 1994. Time Series Analysis. Princeton University Press. (Chapter 1,2) The task facing the modern time-series econometrician is to develop reasonably simple and intuitive models

More information

4. The Green Kubo Relations

4. The Green Kubo Relations 4. The Green Kubo Relations 4.1 The Langevin Equation In 1828 the botanist Robert Brown observed the motion of pollen grains suspended in a fluid. Although the system was allowed to come to equilibrium,

More information

Prof. Dr.-Ing. Armin Dekorsy Department of Communications Engineering. Stochastic Processes and Linear Algebra Recap Slides

Prof. Dr.-Ing. Armin Dekorsy Department of Communications Engineering. Stochastic Processes and Linear Algebra Recap Slides Prof. Dr.-Ing. Armin Dekorsy Department of Communications Engineering Stochastic Processes and Linear Algebra Recap Slides Stochastic processes and variables XX tt 0 = XX xx nn (tt) xx 2 (tt) XX tt XX

More information

Very useful for designing and analyzing signal processing systems

Very useful for designing and analyzing signal processing systems z-transform z-transform The z-transform generalizes the Discrete-Time Fourier Transform (DTFT) for analyzing infinite-length signals and systems Very useful for designing and analyzing signal processing

More information

EXAMPLES OF PROOFS BY INDUCTION

EXAMPLES OF PROOFS BY INDUCTION EXAMPLES OF PROOFS BY INDUCTION KEITH CONRAD 1. Introduction In this handout we illustrate proofs by induction from several areas of mathematics: linear algebra, polynomial algebra, and calculus. Becoming

More information

SAMPLE OF THE STUDY MATERIAL PART OF CHAPTER 1 Introduction to Linear Algebra

SAMPLE OF THE STUDY MATERIAL PART OF CHAPTER 1 Introduction to Linear Algebra 1.1. Introduction SAMPLE OF THE STUDY MATERIAL PART OF CHAPTER 1 Introduction to Linear algebra is a specific branch of mathematics dealing with the study of vectors, vector spaces with functions that

More information

2. SPECTRAL ANALYSIS APPLIED TO STOCHASTIC PROCESSES

2. SPECTRAL ANALYSIS APPLIED TO STOCHASTIC PROCESSES 2. SPECTRAL ANALYSIS APPLIED TO STOCHASTIC PROCESSES 2.0 THEOREM OF WIENER- KHINTCHINE An important technique in the study of deterministic signals consists in using harmonic functions to gain the spectral

More information

5 Transfer function modelling

5 Transfer function modelling MSc Further Time Series Analysis 5 Transfer function modelling 5.1 The model Consider the construction of a model for a time series (Y t ) whose values are influenced by the earlier values of a series

More information

(Refer Slide Time: 01:28 03:51 min)

(Refer Slide Time: 01:28 03:51 min) Digital Signal Processing Prof. S. C. Dutta Roy Department of Electrical Engineering Indian Institute of Technology, Delhi Lecture 40 FIR Design by Windowing This is the 40 th lecture and our topic for

More information

CHAPTER 2 RANDOM PROCESSES IN DISCRETE TIME

CHAPTER 2 RANDOM PROCESSES IN DISCRETE TIME CHAPTER 2 RANDOM PROCESSES IN DISCRETE TIME Shri Mata Vaishno Devi University, (SMVDU), 2013 Page 13 CHAPTER 2 RANDOM PROCESSES IN DISCRETE TIME When characterizing or modeling a random variable, estimates

More information

RANDOM WALKS IN ONE DIMENSION

RANDOM WALKS IN ONE DIMENSION RANDOM WALKS IN ONE DIMENSION STEVEN P. LALLEY 1. THE GAMBLER S RUIN PROBLEM 1.1. Statement of the problem. I have A dollars; my colleague Xinyi has B dollars. A cup of coffee at the Sacred Grounds in

More information

7. MULTIVARATE STATIONARY PROCESSES

7. MULTIVARATE STATIONARY PROCESSES 7. MULTIVARATE STATIONARY PROCESSES 1 1 Some Preliminary Definitions and Concepts Random Vector: A vector X = (X 1,..., X n ) whose components are scalar-valued random variables on the same probability

More information

(x 1, y 1 ) = (x 2, y 2 ) if and only if x 1 = x 2 and y 1 = y 2.

(x 1, y 1 ) = (x 2, y 2 ) if and only if x 1 = x 2 and y 1 = y 2. 1. Complex numbers A complex number z is defined as an ordered pair z = (x, y), where x and y are a pair of real numbers. In usual notation, we write z = x + iy, where i is a symbol. The operations of

More information

A Brief Outline of Math 355

A Brief Outline of Math 355 A Brief Outline of Math 355 Lecture 1 The geometry of linear equations; elimination with matrices A system of m linear equations with n unknowns can be thought of geometrically as m hyperplanes intersecting

More information

Review: Continuous Fourier Transform

Review: Continuous Fourier Transform Review: Continuous Fourier Transform Review: convolution x t h t = x τ h(t τ)dτ Convolution in time domain Derivation Convolution Property Interchange the order of integrals Let Convolution Property By

More information

Z-Transform. x (n) Sampler

Z-Transform. x (n) Sampler Chapter Two A- Discrete Time Signals: The discrete time signal x(n) is obtained by taking samples of the analog signal xa (t) every Ts seconds as shown in Figure below. Analog signal Discrete time signal

More information

Some of the different forms of a signal, obtained by transformations, are shown in the figure. jwt e z. jwt z e

Some of the different forms of a signal, obtained by transformations, are shown in the figure. jwt e z. jwt z e Transform methods Some of the different forms of a signal, obtained by transformations, are shown in the figure. X(s) X(t) L - L F - F jw s s jw X(jw) X*(t) F - F X*(jw) jwt e z jwt z e X(nT) Z - Z X(z)

More information

Lecture - 30 Stationary Processes

Lecture - 30 Stationary Processes Probability and Random Variables Prof. M. Chakraborty Department of Electronics and Electrical Communication Engineering Indian Institute of Technology, Kharagpur Lecture - 30 Stationary Processes So,

More information

1 Introduction. or equivalently f(z) =

1 Introduction. or equivalently f(z) = Introduction In this unit on elliptic functions, we ll see how two very natural lines of questions interact. The first, as we have met several times in Berndt s book, involves elliptic integrals. In particular,

More information

Tutorial 9 The Discrete Fourier Transform (DFT) SIPC , Spring 2017 Technion, CS Department

Tutorial 9 The Discrete Fourier Transform (DFT) SIPC , Spring 2017 Technion, CS Department Tutorial 9 The Discrete Fourier Transform (DFT) SIPC 236327, Spring 2017 Technion, CS Department The DFT Matrix The DFT matrix of size M M is defined as DFT = 1 M W 0 0 W 0 W 0 W where W = e i2π M i =

More information

SAMPLE OF THE STUDY MATERIAL PART OF CHAPTER 1 Introduction to Linear Algebra

SAMPLE OF THE STUDY MATERIAL PART OF CHAPTER 1 Introduction to Linear Algebra SAMPLE OF THE STUDY MATERIAL PART OF CHAPTER 1 Introduction to 1.1. Introduction Linear algebra is a specific branch of mathematics dealing with the study of vectors, vector spaces with functions that

More information

6.12 System Identification The Easy Case

6.12 System Identification The Easy Case 252 SYSTEMS 612 System Identification The Easy Case Assume that someone brings you a signal processing system enclosed in a black box The box has two connectors, one marked input and the other output Other

More information

Invitation to Futterman inversion

Invitation to Futterman inversion Invitation to Futterman inversion Jon Claerbout ABSTRACT A constant Q earth model attenuates amplitude inversely with the number of wavelengths propagated, so the attenuation factor is e ω (z/v)/q. We

More information

Use: Analysis of systems, simple convolution, shorthand for e jw, stability. Motivation easier to write. Or X(z) = Z {x(n)}

Use: Analysis of systems, simple convolution, shorthand for e jw, stability. Motivation easier to write. Or X(z) = Z {x(n)} 1 VI. Z Transform Ch 24 Use: Analysis of systems, simple convolution, shorthand for e jw, stability. A. Definition: X(z) = x(n) z z - transforms Motivation easier to write Or Note if X(z) = Z {x(n)} z

More information

1 GSW Sets of Systems

1 GSW Sets of Systems 1 Often, we have to solve a whole series of sets of simultaneous equations of the form y Ax, all of which have the same matrix A, but each of which has a different known vector y, and a different unknown

More information

Correlator I. Basics. Chapter Introduction. 8.2 Digitization Sampling. D. Anish Roshi

Correlator I. Basics. Chapter Introduction. 8.2 Digitization Sampling. D. Anish Roshi Chapter 8 Correlator I. Basics D. Anish Roshi 8.1 Introduction A radio interferometer measures the mutual coherence function of the electric field due to a given source brightness distribution in the sky.

More information

Two-Dimensional Systems and Z-Transforms

Two-Dimensional Systems and Z-Transforms CHAPTER 3 Two-Dimensional Systems and Z-Transforms In this chapter we look at the -D Z-transform. It is a generalization of the -D Z-transform used in the analysis and synthesis of -D linear constant coefficient

More information

Chemical Process Dynamics and Control. Aisha Osman Mohamed Ahmed Department of Chemical Engineering Faculty of Engineering, Red Sea University

Chemical Process Dynamics and Control. Aisha Osman Mohamed Ahmed Department of Chemical Engineering Faculty of Engineering, Red Sea University Chemical Process Dynamics and Control Aisha Osman Mohamed Ahmed Department of Chemical Engineering Faculty of Engineering, Red Sea University 1 Chapter 4 System Stability 2 Chapter Objectives End of this

More information

Module 1: Signals & System

Module 1: Signals & System Module 1: Signals & System Lecture 6: Basic Signals in Detail Basic Signals in detail We now introduce formally some of the basic signals namely 1) The Unit Impulse function. 2) The Unit Step function

More information

Notes: Pythagorean Triples

Notes: Pythagorean Triples Math 5330 Spring 2018 Notes: Pythagorean Triples Many people know that 3 2 + 4 2 = 5 2. Less commonly known are 5 2 + 12 2 = 13 2 and 7 2 + 24 2 = 25 2. Such a set of integers is called a Pythagorean Triple.

More information

The goal of the Wiener filter is to filter out noise that has corrupted a signal. It is based on a statistical approach.

The goal of the Wiener filter is to filter out noise that has corrupted a signal. It is based on a statistical approach. Wiener filter From Wikipedia, the free encyclopedia In signal processing, the Wiener filter is a filter proposed by Norbert Wiener during the 1940s and published in 1949. [1] Its purpose is to reduce the

More information

Introduction to Decision Sciences Lecture 6

Introduction to Decision Sciences Lecture 6 Introduction to Decision Sciences Lecture 6 Andrew Nobel September 21, 2017 Functions Functions Given: Sets A and B, possibly different Definition: A function f : A B is a rule that assigns every element

More information

UNIT 1. SIGNALS AND SYSTEM

UNIT 1. SIGNALS AND SYSTEM Page no: 1 UNIT 1. SIGNALS AND SYSTEM INTRODUCTION A SIGNAL is defined as any physical quantity that changes with time, distance, speed, position, pressure, temperature or some other quantity. A SIGNAL

More information

Introduction to Signal Processing

Introduction to Signal Processing to Signal Processing Davide Bacciu Dipartimento di Informatica Università di Pisa bacciu@di.unipi.it Intelligent Systems for Pattern Recognition Signals = Time series Definitions Motivations A sequence

More information

On non-stationary convolution and inverse convolution

On non-stationary convolution and inverse convolution Stanford Exploration Project, Report 102, October 25, 1999, pages 1 137 On non-stationary convolution and inverse convolution James Rickett 1 keywords: helix, linear filtering, non-stationary deconvolution

More information

ENT 315 Medical Signal Processing CHAPTER 2 DISCRETE FOURIER TRANSFORM. Dr. Lim Chee Chin

ENT 315 Medical Signal Processing CHAPTER 2 DISCRETE FOURIER TRANSFORM. Dr. Lim Chee Chin ENT 315 Medical Signal Processing CHAPTER 2 DISCRETE FOURIER TRANSFORM Dr. Lim Chee Chin Outline Introduction Discrete Fourier Series Properties of Discrete Fourier Series Time domain aliasing due to frequency

More information

Linear Algebra: Lecture Notes. Dr Rachel Quinlan School of Mathematics, Statistics and Applied Mathematics NUI Galway

Linear Algebra: Lecture Notes. Dr Rachel Quinlan School of Mathematics, Statistics and Applied Mathematics NUI Galway Linear Algebra: Lecture Notes Dr Rachel Quinlan School of Mathematics, Statistics and Applied Mathematics NUI Galway November 6, 23 Contents Systems of Linear Equations 2 Introduction 2 2 Elementary Row

More information

Stochastic Processes. A stochastic process is a function of two variables:

Stochastic Processes. A stochastic process is a function of two variables: Stochastic Processes Stochastic: from Greek stochastikos, proceeding by guesswork, literally, skillful in aiming. A stochastic process is simply a collection of random variables labelled by some parameter:

More information

Multidimensional digital signal processing

Multidimensional digital signal processing PSfrag replacements Two-dimensional discrete signals N 1 A 2-D discrete signal (also N called a sequence or array) is a function 2 defined over thex(n set 1 of, n 2 ordered ) pairs of integers: y(nx 1,

More information

If we want to analyze experimental or simulated data we might encounter the following tasks:

If we want to analyze experimental or simulated data we might encounter the following tasks: Chapter 1 Introduction If we want to analyze experimental or simulated data we might encounter the following tasks: Characterization of the source of the signal and diagnosis Studying dependencies Prediction

More information

Least Squares with Examples in Signal Processing 1. 2 Overdetermined equations. 1 Notation. The sum of squares of x is denoted by x 2 2, i.e.

Least Squares with Examples in Signal Processing 1. 2 Overdetermined equations. 1 Notation. The sum of squares of x is denoted by x 2 2, i.e. Least Squares with Eamples in Signal Processing Ivan Selesnick March 7, 3 NYU-Poly These notes address (approimate) solutions to linear equations by least squares We deal with the easy case wherein the

More information

Linear Convolution Using FFT

Linear Convolution Using FFT Linear Convolution Using FFT Another useful property is that we can perform circular convolution and see how many points remain the same as those of linear convolution. When P < L and an L-point circular

More information

13.1 Convolution and Deconvolution Using the FFT

13.1 Convolution and Deconvolution Using the FFT 538 Chapter 13. Fourier and Spectral Applications 13.1 Convolution and Deconvolution Using the FFT We have defined the convolution of two functions for the continuous case in equation (12..8), and have

More information