Direct multichannel predictive deconvolution

Size: px
Start display at page:

Download "Direct multichannel predictive deconvolution"

Transcription

1 GEOPHYSICS, VOL. 7, O. MARCH-APRIL 7; P. H H7, 8 FIGS., TABLES..9/. Direct multichannel predictive deconvolution Milton J. Porsani and Børn Ursin ABSTRACT The Levinson principle generally can be used to compute recursively the solution of linear equations. It can also be used to update the error terms directly. This is used to do single-channel deconvolution directly on seismic data without computing or applying a digital filter. Multichannel predictive deconvolution is used for seismic multiple attenuation. In a standard procedure, the prediction-error filter matrices are computed with a Levinson recursive algorithm, using a covariance matrix of the input data. The filtered output is the prediction errors or the nonpredictable part of the data. Starting with the classical Levinson recursion,we have derived new algorithms for direct recursive calculationof the prediction errors without computing the data covariance-matrix or computing the prediction-error filters. One algorithm generates recursively the one-step forward and backward predic-tion errors and the L-step forward prediction error, computing only the filter matrices with the highest index. A numerically more stable algorithm uses reduced QR decomposition or singularvalue decomposition SVD in a direct recursive computation of the prediction errors without computing any filter matrix. The new, stable, predictive algorithms require more arithmetic operations in the computer, but the computer programs and data flow are much simpler than for standard predictive deconvolution. ITRODUCTIO Deconvolution is one of the most used techniques for processing seismic reflection data. It is applied to improve temporal resolution by wavelet shaping and removal of short-period reverberations Robinson and Treitel, 98; Yilmaz, 987; Leinbach, 99. Prediction-error filtering can be classified as either spiking deconvolution or as predictive deconvolution Robinson, 97; Treitel and Robinson, 9; Peacock and Treitel, 99. Spiking deconvolution works extremely well for minimum-phase wavelets. Otherwise, a wavelet-shaping filter may be applied using a known wavelet Berkhout, 977 or estimating the wavelet and its inverse filter from data numerous techniques can be found in Osman and Robinson, 99 and Robinson and Osman, 99; see also Porsani and Ursin, and Ursin and Porsani,. By using the convolutional model to represent the seismic trace Yilmaz, 987, and by considering the classical assumptions of the Wiener-Levinson deconvolution method the seismic wavelet is minimum phase the signal-to-noise ratio is high the reflectivity series is represented by an uncorrelated random process then the one-step forward prediction-error filter represents a causal least-squares estimate of the inverse of the minimum-phase wavelet, and the deconvolved trace approaches the reflectivity series Robinson, 97; Berkhout, 977; Leinbach, 99. Predictive deconvolution is commonly done by calculating the coefficients in the prediction-error filter, and then applying the filter to the data. The output, or filtered trace, is the prediction errors. The filter thus removes the predictable part of the input data. Leastsquares estimation of the filter coefficients results in a system of normal equations which, in the single-channel case, has a Toeplitz structure. This enabled Levinson 97 to obtain an efficient recursive algorithm for the computation of the filter coefficients. Further discussion of single-channel algorithms is given by Porsani 99. 
Levinson s principle Levinson, 97 consists of a recursive solution of an equation of order using a linear combination of the for- Peer-reviewed code related to this article can be found at Manuscript received by the Editor January, ; revised manuscript received September, ; published online March, 7. Universidade Federal da Bahia, Campus Universitário da Federação, Programa de Pesquisa e Pós-Graduaçao em Geofísica, Instituto de Geociências, Salvador, Bahia, Brazil. porsani@cpgg.ufba.br. The orwegian University of Science and Technology, Department of Petroleum Engineering and Applied Geophysics, Trondheim, orway. born.ursin@ntnu.no. 7 Society of Exploration Geophysicists. All rights reserved. H

2 H Porsani and Ursin ward and backward solutions of subsystems of lesser order. This basic principle has many useful applications even when the system of equations is not Toeplitz Morf et al., 977; Marple, 98; Porsani and Ulrych, 99. Symmetric or nonsymmetric Toeplitz systems, tridiagonal, Hessenberg, Vandermonde, and Hankel systems can be solved recursively with O multiply and divide operations. In the general case of a simple symmetric matrix with no special structure, the application of Levinson s principle gives us the solution with O operations. We explain the basic Levinson principle by considering the least-squares solution of a simple over-determined linear system of equations. The errors in the equations also can be computed recursively using the same type of algorithm. In standard prediction deconvolution, the autocorrelation function of the data is used to form the normal equations which we solved using Levinson recursion. For an L-step prediction-error filter, it is first necessary to compute the one-step forward and backward prediction-error filters. We review the equations for single-channel predictive deconvolution, then we show how Levinson recursion can be used to compute the prediction errors the output of the predictive deconvolution process directly from the data. This does not require the computation of the autocorrelation function, the filter coefficients, nor applying the filter to the data. Multichannel predictive deconvolution consists of the same steps as single-channel predictive deconvolution, but all scalars are replaced by matrices, see Treitel 97. In a standard procedure, the prediction error filter matrices are computed with a Levinson recursive algorithm solving a block-toeplitz system of normal equations Wiggins and Robinson, 9; Claerbout, 97; Robinson, 977. This system is formed with coefficients of autocorrelation and crosscorrelations of a set of seismic traces. The obtained filters can explore in a more effective way the in-time and spatial redundancy that exist in the seismograms. The filtered output is the prediction errors or the nonpredictable part of the data. Thus, multichannel predictive deconvolution may be applied to the prediction and subtraction of long-period multiple reflections Galbraith and Wiggins, 98; Cassano and Rocca, 97; Taner, 98; Koehler and Taner, 98; Taner et al., 99; Dragoset and Jericevic, 998; Lokshtanov, 998; Rosenberger et al., 999. One of the difficulties in using the classical multichannel method is the computational implementation, where three steps are involved: generation of the matrices coefficients of the block-toeplitz normal equations, solution of the linear system of equations, and application of the filter matrices to the data. A second problem exists with the numerical instability of the least-squares solution when input channels are similar. These two factors are probably the main reasons for the small number of applications of the multichannel predictive-deconvolution method in the past years. Instead of computing the filter coefficients and filtering the data, we can directly compute the prediction errors. In order to develop the new algorithm, we need several concepts from the classical theory. We therefore start by reviewing the multichannel Levinson algorithm for one-step forward and backward prediction-error filters, which are needed for the L-step forward prediction-error filter. 
In Appendix A, it is shown that the covariance matrix of the data, which is symmetric and block-toeplitz, can be represented by two blockmatrix decompositions in terms of the filter matrices. The Levinson recursion for the filter matrices can be used to derive a recursion for the one-step forward and backward prediction errors directly. Only the filter matrices with highest index are needed in each step. Both the new and the classical Levinson algorithm require the inversion of the covariance matrices of the one-step forward and backward prediction errors in each step. This matrix inversion and the computation of any filter coefficient can be avoided by using a reduced QR decomposition or a reduced singular-value decomposition SVD Golub and Van Loan, 99; Trefethen and Bau, 997, as explained in Appendix B, where some properties of linear least-squares estimation have been given. The same decomposed matrices can also be used in a direct recursion for the L-step prediction error. A synthetic-data example demonstrates the effectiveness of multichannel predictive deconvolution in suppressing multiple reflections. THE BASIC LEVISO PRICIPLE To illustrate the basic Levinson principle that is being used in the algorithms presented in this paper, we consider an M system of linear equations M, to be solved in the least-squares sense, x y h h = d. The error vector between the desired data d and the calculated values may be represented as e = d x y h h. By minimizing the quadratic form Qh,h = e T e, we obtain the corresponding system of normal equation xt x x T y y T x y y h T h = xt d y T d. By solving equation, we obtain the least-squares solution of equation. Evaluating the quadratic form, using the least-squares solution, we obtain the minimized sum of squared errors given by minqh,h = d T d d T x d T y h h = P. Equations and may be combined to represent the expanded form of the normal equations, dtd dtx dty x T d x T x x T y h P = y T d y T x y T y h. The main idea behind the Levinson-type algorithm is to compute the solution of order based on the linear combination of two independent solutions of order, associated with two subsystems of lesser order. For the sake of simplicity, let us rewrite equation as s t w u t v z a, u z a, P =. The coefficients a,,a, = h,h represent the least-squares so-

3 Multichannel predictive deconvolution H lution of equation. The solution of order may be obtained from the solutions a, and b,, of order, as follows. Let us obtain a, and b, associated with two independent forward and backward solutions, for the first equation to be solved in equation, t v z a, b, =. 7 Solving equation 7, we obtain a, = v t and b, = v z. Any linear combination of these two independent solutions satisfies the first equation, t v z a, b, = =. 8 We may set = and compute such that the last part of equation is satisfied, =. 9 u z w a, b, = Q Solving for results in = u z a, z w b, = Q. By using the two independent solutions and setting = a,,wemay complete the solution of equation as a, a, = a, b, a,. This is the Levinson recursion. The same procedure may be used to increase the solution from order to. The Levinson recursion was originally used to solve a symmetric Toeplitz system of normal equations Levinson, 97. By imposing t = z in our previous example, we may verify that a, = b,, and the Levinson recursion may be simplified. Levinson recursion applied to direct error computation Using the Levinson recursion in equation, we may obtain the prediction error equation, from the forward and backward prediction errors of order, y e a, = d x a, b, a, = e a, e b, a,. If the forward and backward errors are available, then only the coefficient a, needs to be calculated. The least-squares solution of equation is a, = e b, T e a, e T =, b, e b, Q the same result obtained using equation. Applying the Levinson recursion directly to the system of linear equations results in a method to build orthogonality between the prediction errors, using two independent and orthogonal vectors. ote that e a, and e b, are orthogonal to x, and e a, is orthogonal to x and y, simultaneously. This basic idea has been exploited intensively in the design of singlechannel and multichannel Levinson-type algorithms Burg, 97; Morf et al., 977; Marple, 98; Porsani, 99. In the next section, we present the classical Levinson algorithm used to solve the symmetric Toeplitz system of equations, associated with the one-step prediction error filter, which is used in the well known Wiener-Levinson minimum-phase deconvolution method Robinson, 97; Treitel and Robinson, 9; Treitel and Robinson, 9; Berkhout, 977; Lines and Ulrych, 977; Robinson and Treitel, 98; Leinbach, 99; Ulrych et al., 99. SIGLE-CHAEL PREDICTIO-ERROR FILTERS One-step forward and backward prediction-error filters The seismic trace, x t, may be represented by a stationary, autoregressive model, x t = x t a, e t, = where a, are the coefficients in the forward prediction-error filter, and e t are the forward prediction errors. The seismic traces are assumed to have zero-mean value. Minimizing the sum of the squared prediction errors results in an expanded system of normal equations, r r r r r r r r r r a, =P a,, where r = x t x t, =,...,, represent the coefficients of the autocorrelation function of the seismic trace; a, a, T is the prediction-error filter; and P is the minimized sum of squared errors. In order to solve the normal equations by Levinson recursion, we also need the backward prediction-error filter b,, which acts on the data x t = x t b, e t = to produce the backward prediction errors e t. Minimizing the sum of the squared prediction errors gives a new system of expanded normal equations:

4 H Porsani and Ursin = r r r r r r r b b,, r r r Q, where Q is the minimum of the squared backward prediction errors. The system of normal equations, for the order, may be represented in the expanded form as R b a = P Q, 7 where a = a, a, T and b = b, b, T, and P and Q are the forward and backward prediction-error variance; R is the symmetric Toeplitz matrix formed by the autocorrelation coefficients r i = r i : r r r r R =r. 8 r r r r As a consequence of the double symmetry, with respect to the main and to the secondary diagonal, with J R J = R, 9 J = being the reverse identity matrix of order and J = I. From equation 7, we obtain, J R J J b a = J P Q or R J a J b = Q P. By comparing this equation with equation 7, it is seen that P = Q and a = J b, b = J a, implying that a, = b,. Taking the symmetry of the filter coefficients into account, the Levinson recursion for the combined forward and backward onestep filters reduces to b a = a b. a, a, By considering the known two independent solutions of order, we may use the Levinson recursion to increase the order of the solution. We shall solve the extended normal equation 7 for the forward and backward one-step prediction filter simultaneously. Premultiplying equation with R,defined in equation 8, gives the combined normal equation 7 of order on the left side, and the same equations of order plus additional terms on the right side. The result is where P Q =P Q a,, a, = r r k a,k, k= = r r k b,k. k= By considering the symmetry of the Toeplitz matrix, it may be shown that =, so we denote =. Removing the zeros in equation then gives two identical scalar equations with the solution a, = P. This result is used in equation to increase the order of the solution. The total sum of the squared prediction errors must also be updated: P = P a, = P a,. 7 L-step forward prediction-error filter The L-step forward prediction error is defined as ẽ,tl = x tl x t c,, t =,...,M L, 8 = where c, are coefficients of the L-step forward prediction filter. Minimizing the sum of the squared prediction errors gives the extended normal equations r rl rl r L r r r r r r r L r r r c, c,=p, 9 where P is the minimized sum of the squared errors corresponding to the L-step predictive filter

5 Multichannel predictive deconvolution H P = r r L c, = minẽ T ẽ, = where ẽ = x x L ẽ,l ẽ ML T represent the deconvolved trace. Assume that we have computed the backward one-step predictive filter and the forward L-step predictive filter, both of order ; then the Levinson recursion is c = c b c,, where c = c,... c, T. Premultiply with the matrix in the extended normal equation 9 for in order to obtain the compact representation for the extended normal equation with P b, =P c,, P = r L r K c,k, k= b, = r L r Lk b,k. k= As a consequence of the symmetry of the expanded normal equations, it may be shown that b, =. We need only the solution of the last equation in, which gives c, = P. ote that P is not required to build the solution; however, it may be used to monitor the construction of the L-step prediction filter Porsani, 99. In order to solve the Levinson recursion equation, we must first solve for the one-step forward and backward prediction filters, as described in the previous section, to get b and P. Algorithm The classic single-channel deconvolution consists of three steps: Compute the autocorrelation function r, =,,..., of the data. Compute the single-channel, one-step, and L-step predictionerror filters as follows: Initialization: P = r and c = Q r L For =,,, compute = using equation solve equation for a, update the total sum of the squared errors using equation 7 use Levinson recursion equation to increase the order of the one-step filters compute using equation compute c, using equation use the Levinson recursion equation to increase the order of the L-step filter Apply the appropriate filter to the data to obtain the desired prediction errors. Input seismic traces are used to estimate the coefficients of the autocorrelation function. The one-step or L-step predictive filters are calculated by solving the system of normal equations using the Levinson recursion. The causal forward one-step prediction-error filter may be convolved with a set of seismic traces to get the results of the Wiener-Levinson minimum-phase wavelet-deconvolution method. Also, the L-step prediction-error filter may be calculated and applied to attenuate multiple reflections. SIGLE-CHAEL, DIRECT PREDICTIO-ERROR COMPUTATIO One-step prediction-error computation For a single channel, the forward and backward prediction errors for the one-step prediction-error filter of order can be represented in terms of matrix notation as e, e, e,m e,m or e, e, e,m e,m = x x x x M x M x M e e = X b, a, b, a, b, a. When the Levinson recursion equation is multiplied by the data matrix, X, a new Levinson recursion for the one-step prediction errors is obtained: e e = e e a, a,. 7 The coefficient a, may be obtained in the least-squares sense by minimizing the quadratic form associated with the forward or backward errors, e and e. This again gives two identical equations for a, as given in equation, but now = e T e. 8 By using this expression in equation 7 and rearranging, we also obtain e = I q q T e,

6 H Porsani and Ursin e = I p p T e, 9 where p and q are the normalized forward and backward one-step prediction errors p T = e e T, q T = e e T. L-step prediction-error computation The L-step prediction errors of order may be represented as ẽ,l ẽ,l ẽ,l = ẽ ẽ,ml xl x L x x L x x c, x M x M c,. x M ẽ,t represents the deconvolved trace using a prediction filter of coefficients and prediction distance L. We note that the prediction-error filter has L null coefficients between the coefficients c, = and c,, omitted in equation, and implying that the first L samples of the deconvolved trace should be preserved, ẽ,t = x t for tl. In terms of matrix-vector notation, we write this equation as ẽ = x L, X c. Multiplying the modified data matrix with the Levinson recursion equation results in a new recursion c,. ẽ = ẽ e The least-squares solution of this equation again gives c,,as in equation, now with T = e ẽ. When this expression is used in the recursion in equation, a direct expression for the L-step prediction error is obtained ẽ = I q q T ẽ, here, q is given in equation. Algorithm An algorithm for the direct computation of the prediction errors is as follows: Initialization: e = e = x x M T ẽ = x L x M T P = e T e compute using equation solve equation for c, compute ẽ from equation For =,..., compute using equation 8 solve equation for a, update the total sum of the squared errors using equation 7 update the forward and backward one-step prediction errors using equation 7 compute using equation solve equation for c, update the forward L-step prediction error using equation The algorithm allows the direct computation of the forward L-step and the forward and backward one-step prediction error without the knowledge of the predictive filter. The autocorrelation coefficients and the filters are not explicitly required. Burg 97 used a similar algorithm to estimate the autocorrelation function using the coefficients a, and the expressions of the Levinson algorithm. The vector ẽ corresponds to the output of the predictive deconvolution using the Wiener-Levinson filter of coefficients. The vector e corresponds to the output of the minimum-phase inverse filter. The input seismic data are used to estimate only the last coefficients of the filters. The deconvolved trace is recursively updated, iterating from =to =. All of the intermediated deconvolved results may be available and may be used for monitoring the performance of the filtering process. In the classical approach, the filter may be designed from a subset of traces and applied in all panels or sections. Similarly, in the direct deconvolution approach, a subset of traces, extended with samples of zero values, may be placed forming a long vector, and used as a long trace in the direct deconvolution method. If the one-step and/or the L-step filters are necessary, they may be generated by means of the Levinson recursion over the coefficients a,,b,, and c,. MULTICHAEL, OE-STEP PREDICTIO-ERROR FILTERS One-step forward prediction-error filter We consider M time samples of K data channels, with zero mean, collected in the M K data matrix x x K X = x M x KM x = M, x where the elements x t are K row vectors. The one-step forward prediction of the data using KK filter blocks A, is denoted by the K row vector

7 Multichannel predictive deconvolution H7 xˆ t = x t A,. = The forward prediction error is the K row vector e,t 7 = x t xˆ t. 8 This can be collected in the matrix expression e, x x E e,m x M = = M I A, x M x A e,m x, or X =X X I E = X A, A, A, 9 where the extended data matrix X consists of data blocks X, defined in equation, each block being shifted one row below the one to the left. The filter blocks A, are computed minimizing the sum of the squared prediction errors. This corresponds to minimizing TrE T E = TrA T X T X A, where tr denotes the trace of a matrix the sum of the diagonal elements. This gives the extended normal equation for the forward prediction filter I A, R =P A,, where R is the covariance matrix, R R R = X T R R X =R, R R R R where the KK blocks in the covariance matrix are M R = x T t x t. t= We note that R is symmetric and block Toeplitz, but not block symmetric, because R T = R. P is the covariance matrix of the minimized forward-prediction error P = R R i A,i = mintre T E. i= One-step backward prediction-error filter ext, we consider the one-step backward-prediction filter with K K filter blocks B,. The backward prediction error is the K row vector e,t which can be collected in the matrix E = e, e,m e,m = = x t x t B,, = x x x M x M x B, I x MB, = XB, 7 where the extended data matrix X is the same as in equations 9 and. Minimizing the sum of the squared backward-prediction errors results in the extended normal equation R B, B, I = Q, 8 where Q is the covariance matrix of the minimized backward-prediction errors Q = R R i B,i = mintre T E. i= Levinson recursion 9 The two sets of extended normal equations and 8 can be combined into I B, A, =P R B, A, I Q. Analogously to the single-channel case, by considering the two known independent solutions of order, we may use the Levinson recursion to increase the order of the solution. We shall solve the extended normal equation simultaneously,

8 H8 Porsani and Ursin = I B, I A, A, B, I B,. A, I B, A, B, A, I I Premultiplying with R gives the combined normal equation of order on the left side, and the same equation of order plus additional terms on the right side. The result is where P Q =P Q I B,, A, I = R R k A,k, k= = R R k B,k. k= It will be shown later that = T, so we denote =. Removing the zeros in equation then gives a compact representation of the expanded normal equation for order P Q = P T Q I B,. A, I The compact form needs to be solved as A, = Q, B, = P T. These results are used in equation to increase the order of the solution. The covariance matrices for the forward and backward prediction error may be updated as P = P T T A, = P Q = P I B, A,, Q = Q B, = Q P T = Q I A, B,. THE MULTICHAEL L-STEP PREDICTIO-ERROR FILTER The L-step forward prediction error is defined as ẽ,tl = x tl x t C, t =,...,M L, 7 = where C, are KK K K matrix coefficients of the L-step forward prediction-error filter for K channels. This can be collected in the matrix expression Ẽ = ẽ,l X =XL ẽ,ml X Ĩ where Ĩ is the identity matrix of order K and x L X L =, x M C, C,, 8 is of dimension M L K. Minimizing the sum of the squared prediction errors gives the extended normal equations for the L-step forward prediction-error filter R R L R L Ĩ R L R R R R R R R L R R R C, C,=P, 9 where R is of dimension K K, R Li,i =,..., is of dimension T KK, R Li = R Li. P is the covariance matrix of the minimized forward L-step prediction errors, P = R R L C, = mintrẽ T Ẽ. = Levinson recursion 7 Assume that we have computed the backward one-step prediction-error filter and the forward L-step prediction-error filter, both of order, then the Levinson recursion is C, Ĩ Ĩ C, B, C,= C, B, I Ĩ C,. 7 Premultiply with the matrix in the extended normal equation 9 for order to obtain the compact representation for the extended normal equation 9 P T Ĩ =P Q 7 C,,

9 Multichannel predictive deconvolution H9 with = R L R K C,k, k= T = R L R Lk B,k. k= 7 We need only the solution of the last set of equations in equation 7, which gives C, = Q. 7 In order to solve the extended normal equation 9, we must first solve for the one-step forward and backward prediction-error filters, as described in the previous section, to get B and Q. Then the Levinson recursion in equation 7, combined with equations 7 and 7, gives the L-step forward prediction-error filter for =,,...,. Classical multichannel predictive deconvolution The classical algorithm for multichannel predictive deconvolution is illustrated in Figure. It consists of the following steps: Compute the autocorrelation matrices R in equation 9 using the input data Compute the one-step and L-step prediction-error filters as summarized below Initialization: Q = P = R compute C = Q R L For =,...,, compute = using equation solve equation for A, and B, update the covariance error matrices using equation use the Levinson recursion equation to update the one-step prediction-error filters compute using equation 7 solve equation 7 for C, use the Levinson recursion equation 7 to update the L-step prediction-error filter For K =, equation 9 is solved for a single column of C,k prediction of a single channel using K input channels. Then, is a column vector. Apply the one-step and/or L-step prediction-error filter to the data. DIRECT, MULTICHAEL, PREDICTIO-ERRORS COMPUTATIO Forward and backward one-step prediction errors As for the single-channel case, it is possible to compute the multichannel forward and backward prediction errors recursively. Premultiplying equation with the expanded data matrix X,defined in equation, gives E E = E E The set of equations related with A, is E = E E I B, A, I. 7 I A,. 7 The coefficients of the matrix A, may be obtained by minimizing the sum of the squared prediction errors, which is equivalent to minimizing tre T E. This gives the system of normal equations E T E A, = E which we recognize as in equation with = A, = Q, E T T E, E. 79 In the same way, the equations related with B, in equation 7 are or E T E B, = E T E 8 B, = P T. 8 By comparing equation with equations 78 and 8, it is seen that = and = T, as used in equation. The algorithm for direct computation of the multichannel onestep forward and backward prediction errors is given below. L-step forward prediction errors For direct computation of the L-step prediction errors, we multiply the Levinson relationship equation 7 with the modified ex- Data Compute ACF ACF Apply filter A Loop over =,..., Compute: A i, B i, C i, i =,... C Apply filter Figure. Classical predictive deconvolution. One-step forwardprediction error L-step forwardprediction error

10 H Porsani and Ursin tended data matrix in equation 8. This gives, directly, = Ĩ Ẽ C,. Ẽ E 8 The least-squares solution may be obtained by solving the equation E T E C, = E T Ẽ, 8 which is equivalent to equation 7. By comparing equation 8 with equation 7, it is seen that = E T Ẽ. 8 Algorithm The algorithm for direct computation of the multichannel L-step prediction errors is given below, and is also illustrated in Figure. Initialization: E = E = X, Ẽ = X L compute Q = P = E T E compute using equation 8 solve equation 7 for C, = Q compute Ẽ from equation 8 For =,..., compute using equation 79 solve equations 78 and 8 for A, and B, update the covariance error matrices, P and Q, using equation update the forward and backward prediction-error matrices using equation 7 compute using equation 8 solve equation 7 for C, use equation 8 to update the L-step prediction errors STABLE DIRECT COMPUTATIO OF PREDICTIO ERRORS Instead of solving for the matrices A, and B, by solving the normal equations 77 and 8, we may compute the prediction errors directly by using a reduced QR decomposition, as explained in Data Loop over =,..., Compute: A,, B,, E, E-, ~ C, and E E Figure. Direct predictive deconvolution. ~ E One-step forwardprediction error L-step forwardprediction error Appendix B. We compute the QR decomposition Golub and Van Loan, 99; Trefethen and Bau, 997 and E = Q R, 8 E = Q R, 8 where Q and Q are M K orthonormal matrices, and R and R are KK upper triangular matrices. These matrices should not be mistaken for the covariance matrices in the rest of the paper. By using the properties of these matrices, the one-step forward and backward error matrices may be updated directly by and E E = I Q Q T E 87 = I Q Q T E. 88 These are the least-squares error solutions of equation 7 as explained in Appendix A. An alternative to QR decomposition is to use a reduced SVD technique, as discussed in Appendix B. In this case, we compute the SVD and E = U V T 89 E = U V T, 9 where U and U are M K orthonormal matrices, and are KK diagonal matrices with nonnegative values, and V and V are KK orthonormal matrices. The least-squares errors of equation 7 can be computed from and E E = I U U T E 9 = I U U T E. 9 Equation 8 for the L-step prediction error also can be solved directly using reduced QR decomposition or SVD. This gives the recursions

11 Multichannel predictive deconvolution H or Ẽ = I Q Q T Ẽ 9 Ẽ = I U U T Ẽ, 9 where Q and U are computed from equations 8 and 89, respectively. The steps of the algorithm for direct computation of the multichannel prediction errors are summarized below and is illustrated in Figure. Initialization: E = E = X, and Ẽ = X L For =,..., solve equations 8 and 8 for Q and Q or equations 89 and 9 for U and U update the forward and backward prediction errors E and E using equations 87 and 88 or equations 9 and 9 If needed, update the L-step prediction error using equation 9 or 9 For K =, corresponding to the prediction of a single channel, then Ẽ and Ẽ in equations 9 and 9 are column vectors. UMERICAL STABILITY AD COMPUTATIOAL COST The numerical stability of the QR and SVD algorithm is governed by the condition numbers see Appendix B and = E = E = max k k min k k = max k k min k k, 9 9 where k are the singular values in in equation 9, and k are the singular values in in equation 89. Both QR decomposition and SVD give stable solutions if the matrices are full-rank, and the condition numbers are finite. In the case that the matrices are rank-deficient, this can be detected by the SVD method. This indicates an unstable situation, and it should be considered to stop the recursions. An alternative is to use SVD to compute a minimum-norm solution for the prediction errors, as discussed in Appendix B. The method for directly computing the prediction errors using the matrices A, and B, and the Levinson algorithm both need to invert the matrices P and Q. These have the condition numbers and P = 97 This follows from equations 77-8, where it is seen that solving for P and Q correspond to solving two sets of normal equations. Solving a system of normal equations may result in an unstable solution Treitel and Wang, 97; Trefethen and Bau, 997. The classical procedure using Levinson algorithm also requires the computation of the autocorrelation matrices, the filter coefficient matrices and the multichannel prediction error filtering. All these steps introduce additional numerical errors. The determinant of a matrix is equal to the product of its singular values Golub and Van Loan, 99. Thus det P = k k, 99 det Q = k k. For the autocorrelation matrix R defined in equation, it is shown in Appendix A that det R = det P = det Q. From equations 99-, it follows that the condition number is R = max,k k min,k k = max,k k min,k. k This is larger than any of the condition numbers P and Q,so solving equation or 8 directly is the least stable procedure. From this discussion, we conclude that the direct computation of the prediction errors, as discussed in the previous section, is stable, provided that the matrices are full-rank. Treitel and Wang 97 showed that the normal equations for single-channel predictive deconvolution may be ill-conditioned and numerically unstable. They recommend to stabilize the equations by adding a positive constant, corresponding to %, to the diagonal of the normal equations, or to solve the normal equations using a conugate-gradient method. The proposed new algorithms have condition numbers that are much smaller than the normal equations, and they are therefore more numerically stable. In the single-channel case, all algorithms fail if P =. Then, both the forward and backward prediction errors are zero, which is unlikely to occur in practice. 
The classical multichannel predictive algorithm and the direct prediction-error computation algorithm require the least-squares solution of normal equations of order KK see equations, 7, 77, and 8, which may be done using the Cholesky or the conugate- Data Loop over =,... Compute: - ~ E, E, and E by SVD or QR decomposition E ~ E One-step forwardprediction error L-step forwardprediction error Q =. 98 Figure. Direct predictive deconvolution using QR or SVD.

12 H Porsani and Ursin Table. umber of arithmetic operations for one-step predictive deconvolution. Single channel Levinson deconvolution OE-STEP M Single channel Direct deconvolution M Multichannel Levinson deconvolution MKK K 8K Multichannel Direct deconvolution K M K 8 K Cholesky Multichannel Direct deconvolution QR or SVD K M K K the one-step single-channel and multichannel algorithms. Table shows the additional arithmetic operations for the L-step single-channel and multichannel algorithms. The cost of input-output operations is not included. For practical purposes, instability of the multichannel algorithms happens if identical channels being used exist. Our experience with the multichannel classical and the direct computation algorithms is that the classical algorithm fails for more than five channels. To get stability using the classical algorithm, we must increase the value of the diagonal elements of the R matrix, as in the single-channel case. Table. Additional arithmetic operations for L-step predictive deconvolution. UMERICAL EXAMPLE Single channel Levinson deconvolution L-STEP M Single channel Direct deconvolution M Multichannel Levinson deconvolution K K MK KK K Multichannel Multichannel Depth (m) Direct deconvolution Cholesky Direct deconvolution QR or SVD Offset (m) 8, V = m/s V = 7 m/s V = m/s K KM KK K K K M K Sea level Interface Interface Figure. Synthetic model with two interfaces used in the numerical example. gradient method. The classical approach is more efficient, but it is more complicated to implement as a computer code. The direct prediction-error computation algorithm via QR or SVD decomposition of the forward and backward one-step prediction-error matrices see equations 8, 8, 89, and 9 does not solve the normal equations and is more numerically stable. The computational implementation is more simple, but it is computationally more expensive. Table shows the computational cost in terms of arithmetic operations for Predictive deconvolution may be used to suppress multiple reflections, as long as they are periodic. We consider a multiple reflection with period T. A least-squares prediction filter with prediction distance L, LT, may be calculated. The number of coefficients of the L-step prediction filter,, needs to satisfy the relations LT, to ensure that the prediction and removal of the multiple are effective. These conditions are important to ensure that the memory of the prediction filter, during its convolution, will be spread along the primary reflection while the corresponding output reaches the multiple event that we want to attenuate. For practical purposes, we may use =.T and L =.9T, such that L =.T. Lima and Porsani demonstrate this using synthetic data from the D geological model shown in Figure. Only two flat interfaces are present. Interface represents the ocean floor with a small dip of %. Interface is horizontal. A seismic section formed by common midpoints CMP of traces was generated. The sample interval is ms. The frequency of the wavelet is in the interval Hz. The synthetic seismic data were generated using the software SU-CWP Cohen and Stockwell, 997. We have corrected the data for spherical divergence, before applying predictive deconvolution, using the single-channel and multichannel with three and five channels. Figure shows the results of the predictive deconvolution applied directly to a CMP gather. The prediction distance was 8 ms L =, and the length of the filter was 8 ms =88. 
For short offset, the multichannel deconvolution is very effective, showing the importance of the periodicity of the multiple to the method. As illustrated by the diagram in Figure, Figures 7 and 8 show results of the predictive deconvolution method applied in the common-offset domain. Figure 7a shows the input data, corresponding to a common-offset section. The prediction distance was 78 ms L = 88, and the length of the filter was 8 ms =. The result of single-channel deconvolution is shown in Figure 7b, and the results of the multichannel deconvolution using three and five channels are shown in Figure 7c and d, respectively. The multichannel

13 Multichannel predictive deconvolution H a) b) a) 8 b) 8 c) d) c) 8 d) 8 Figure. a Original CMP is presented. b Results of the singlechannel deconvolution. Results of the multichannel deconvolution using three and five channels in c and d. MO correction (multiple velocity) Sort to common offsets Multiple attenuation (L-step predictive deconvolution) Back to CMPs Inverse MO CMP stacking Figure. Flowchart for multiple attenuation in the common-offset domain. Figure 7. a Panel of common-offset traces, with normal moveout MO correction, without deconvolution. b Result of singlechannel deconvolution is shown. Results of multichannel deconvolution, using three and five channels, c and d, respectively. deconvolution is more effective along all of the panels. Figure 8 shows the stacked section after applying the deconvolution in the common-offset domain. The effectiveness of the multichannel deconvolution is evident. COCLUSIOS The multichannel one-step forward and backward prediction-error filters can be computed by Levinson recursion, and then the L-step prediction-error filter can be computed in a similar recursion. This filter must be applied to the seismic data to produce the prediction error, which should have less multiples. We described a similar recursion for the direct computation of the prediction errors by computing only the filter matrices with the highest indices. A more stable recursion for the prediction errors was proposed using reduced QR decomposition or reduced SVD and not computing any filter matrix. These algorithms are more stable than the first ones, but they also require more computer resources. The computer-program implementation of the new algorithms is much simpler than the implementation of the standard procedure. When the filter matrices are needed for application to a second data set, they can also be obtained without computing the covariance matrices, by using a second Levinson recursion.

14 H Porsani and Ursin a) b) 8 c) d) APPEDIX A MATRIX DECOMPOSITIOS The forward-prediction errors are computed from the equations = x t x ti A,i, t =,...,M. A- e,t i= For =,...,, these equations can be collected in or X X I X A, I =E E A, A, I E X A = E, A- A- Figure 8. Stacked section results, obtained after the single-channel and multichannel deconvolution being applied in the common-offset domain. Stacked results without deconvolution in a. Singlechannel deconvolution is shown in b. Results of multichannel deconvolution, using three and five channels, are shown in c and d. For single-channel predictive deconvolution, there is no need to do QR decomposition or SVD. The new method with direct recursive computation of the prediction errors requires about the same computational effort as the standard approach, but its program implementation is much simpler. The synthetic-data example showed the effectiveness of multichannel predictive deconvolution in attenuating multiple reflections, which occur periodically in the data. ACKOWLEDGMETS The authors wish to express their gratitude to Petroleum Geo-Services, Statoil, orway; the Conselho acional de Desenvolvimento Científico e Tecnológico, CPq, FIEP, PETROBRAS, and FAPESB, Brazil; and the orwegian Research Council through the ROSE proect for financial support. The authors also are thankful to Sven Treitel, Tamas emeth, Paul R. Gutowski, Michelângelo Gomes da Silva, and the reviewers for their criticism and suggestions that helped to clarify the main ideas of the paper. where X,defined in equation, has shifted data blocks. We note that A is a unit lower triangular matrix, and we shall prove that E is block orthogonal. This follows directly from E T E = A T R A = P = diagp,...,p,p. A- From the extended normal equation it follows that R A is upper block diagonal with P on the diagonal. Premultiplying with A T, which is unit upper triangular, results in a matrix which is upper block diagonal and symmetric. Therefore, it must be block diagonal with the symmetric elements P on the diagonal. The backward prediction errors are computed from the equations e,t = x t x ti B,i, t =,,...,M. A- i= For =,,..., M these equations can be collected in or X B, B, X I B, XI I =E E E X B = E. A- A-7

15 Multichannel predictive deconvolution H Here, the matrix B is unit upper triangular, and E is block orthogonal. This follows from E T E = B T R B = Q = diagq,q,..., Q. A-8 From the extended normal equation 8, the matrix R B is lower block triangular with Q on the diagonal. Premultiplying this with B T, which is unit lower triangular results in a matrix which is lower block triangular and symmetric. Hence, it must be block diagonal with the symmetric matrices Q on the diagonal. We have ust shown that block-shifted data matrices can be expressed by X = E A = E B, A-9 where A is unit lower diagonal and B is unit upper diagonal. The matrices E and E are block orthogonal. From equations A- and A-8 also follows the decompositions and R = A T P A = B T Q B, R = A P A T = B Q B T, det R = det Q = det P. = = A- A- The covariance matrix R is positive definite if the matrices Q, =,..., or the matrices P, =,..., all are positive definite. We also have det Q = det P. A- The last expression in the middle line in equation B- is zero verify by using equation B-. This means that the estimate Axˆ is orthogonal to the error e = b Axˆ = I AA T A A T b. B- This expression may be simplified by using a reduced QR decomposition of the data matrix A = QR, B-7 where Q T Q = I, and R is upper triangular. Q is an M matrix, R is, and R exists since A is full rank. Using the orthogonality of Q results in e = I QQ T b. Alternatively, we may express A by a reduced SVD A = UV T, B-8 B-9 where U is M, V is, and both are orthonormal, V T V = I, U T U = I. The diagonal matrix contains the singular values in descending magnitude = diag,...,, B- where. With the SVD in equation B-9, the error in equation B- can be written e = I UU T b. B- The numerical stability of the least-squares problem is determined by the condition number APPEDIX B A = max k k min k k. B- PROPERTIES OF LIEAR LEAST-SQUARES ESTIMATIO We want to estimate the vector x in the linear overdetermined problem by minimizing the function, Ax = b, = Ax b. B- B- Here, A is an M matrix of full rank, with M. Then the least-squares solution is xˆ = A T A A T b. The minimum value of the quadratic function is min = b T xˆ TA T b Axˆ Using equation B- in B- gives = b T b Axˆ xˆ TA T b Axˆ B- = b T b Axˆ. B- It follows that A T A = A. B- Thus, the solution of the normal equations is much less stable than using SVD or QR decomposition. In fact, the solution of the normal equations is unstable, while both QR decomposition via Householder triangularization and SVD provide stable solutions Trefethen and Bau, 997. SVD has the advantage that near singular cases can be detected. When the problem is rank-deficient, so that ranka = P,P, we can still use the reduced SVD in equation B-9, but now = diag,..., P,,...,, B- where P. In this case there is no unique solution to the least-squares problem, and equation B- could not be defined. Using SVD, we may choose the minimum-norm solution where xˆ = V U T b, = diag,, P,,,. B- B- min = b T I AA T A A T b. B- In this case, the error is

16 H Porsani and Ursin e = I U U T b, B-7 with U consisting of the P first columns of U. A detailed discussion on the least-squares problem can be found in Börck 99 and also in Trefethen and Bau 997. a = b = c = e e a, ẽ a, b, b, c, c, LIST OF SYMBOLS single-channel one-step forward prediction-error M column vector single-channel one-step backward prediction-error M column vector single-channel L-step forward prediction-error M column vector single-channel forward prediction-error filter of order single-channel backward prediction-error filter of order single-channel L-step forward prediction-error filter of order K number of input channels K number of channels being predicted number of filter-coefficient matrices L prediction distance K data-row vector x t = x tk k =,..., K channel number t =,..., M time-sample number X = x x x M x L X L = x M X M K data matrix M L K reduced data matrix for L-step forward prediction-error computation K K X X X M K = extended data matrix KK matrix element of the one-step A, forward prediction filter of length I A, KK matrix representing the A = forward prediction-error filter of length A, E = e xˆ t e,t, e, e,m one-step forward prediction K row vector one-step forward prediction K row vector one-step forward prediction-error M K matrix Tr trace of a matrix the sum of the diagonal elements P = min TrE T E KK covariance matrix of the minimized forward-prediction error B = E = e B, B, B, I e,t, e, e,m KK matrix element of the one-step backward prediction filter of length KK matrix representing the backward prediction-error filter of length one-step backward prediction-error K row vector one-step backward prediction-error M K matrix Q = min TrE T E KK covariance matrix of the minimized backward prediction errors Ẽ = ẽ,tl ẽ,l ẽ,m C, Ĩ C, C, L-step forward-prediction error K row vector L-step forward-prediction error M L K matrix KK matrix element of the L-step prediction-error filter K K K matrixrepresenting the L-step prediction-error filter of length forpredictionofk channelsĩ isthe identitymatrixoforderk R L KK elements of the covariance matrix for the L-step prediction R KK elements of the covariance matrix R K K = covariance matrix P mintrẽ T Ẽ K covariance matrix of the K minimized L-step forward prediction error

17 Multichannel predictive deconvolution H7 p single-channel normalized one-step forward-prediction error M column vector q single-channel normalized one-step backward prediction-error M column vector Q,R orthogonal and upper triangular matrices associated with the QR decomposition of the one-step forward errors Q,R orthogonal and upper triangular matrices associated with the QR decomposition of the one-step backward errors U,,V orthogonal matrices associated with the SVD of the one-step forward errors U,,V orthogonal matrices associated with the SVD of the one-step backward errors J reverse identity matrix of order X data matrix for the single-channel one-step prediction-error filter residual of the normal equation single-channel one-step prediction error residual of the normal equation single-channel L-step prediction error residual of the normal equation multichannel one-step prediction error residual of the normal equation multichannel L-step prediction error REFERECES Berkhout, A. J., 977, Least-squares inverse filtering and wavelet deconvolution: Geophysics,, 9 8. Börck, A., 99, umerical methods for least square problems: Society for Industrial and Applied Mathematics. Burg, J., 97, Maximum entropy spectral analysis: Ph.D. thesis, Stanford University. Cassano, E., and F. Rocca, 97, Multichannel linear filters for optimal reection of multiple reflections: Geophysics, 8,. Claerbout, J. F., 97, Fundamentals of geophysical data processing: McGraw-Hill Book Co. Cohen, J., and J. J. W. Stockwell, 997, Cwp-su: Seismic unix release : Center of Wave Phenomena, Colorado School of Mines. Dragoset, W. H., and Z. Jericevic, 998, Some remarks on surface multiple attenuation: Geophysics,, Galbraith, J.., and R. A. Wiggins, 98, Characteristics of optimum multichannel stacking filters: Geophysics,, 8. Golub, G. H., and C. F. Van Loan, 99, Matrix computations, rd ed.: Johns Hopkins University Press. Koehler, F., and M. T. Taner, 98, The use of the conugate-gradient algorithm in the computation of predictive deconvolution operators: Geophysics,, Leinbach, J., 99, Wiener spiking deconvolution and minimum-phase wavelets: A tutorial: The Leading Edge,, Levinson,., 97, The wiener rms root mean square error criterion in filter design and prediction: Journal of Mathematical Physics,, 78. Lima, A., and M. J. Porsani,, Predictive deconvolution of peg-leg multiple reflections using multichannel wiener-levinson filtering: Brazilian Journal Geophysics, 9,. Lines, L. R., and T. J. Ulrych, 977, The old and the new in seismic deconvolution and wavelet estimation: Geophysical Prospecting,,. Lokshtanov, D., 998, Multiple suppression by data-consistent deconvolution: 8th Annual International Meeting, SEG, Expanded Abstracts, 8. Marple, L., 98, A new autoregressive spectrum analysis algorithm: Institute of Electrical and Electronics Engineers, Inc. Transactions on Acoustics, Speech, and Signal Processing, ASSP-8,. Morf, M., B. Dickinson, T. Kailath, and A. Vieira, 977, Recursive solution of covariance equations for linear prediction: Institute of Electrical and Electronics Engineers, Inc. Transactions on Acoustics, Speech, and Signal Processing, ASSP-, 9. Osman, O. M., and E. A. Robinson, eds., 99, Seismic source signature estimation and measurement: SEG. Peacock, K. L., and S. Treitel, 99, Predictive deconvolution Theory and practice: Geophysics,, 9. Porsani, M. J., 99, Fast algorithms to design discrete Wiener filters in lag and length coordinates: Geophysics,, Porsani, M. J., and T. J. 
Ulrych, 99, Levinson-type extensions for nontoeplitz systems: Institute of Electrical and Electronics Engineers, Inc. Transactions on Signal Processing, 9, 7. Porsani, M. J., and B. Ursin,, Mixed-phase deconvolution and wavelet estimation: The Leading Edge, 9, Robinson, E. A., 97, Predictive decomposition of seismic traces: Geophysics,, , 977, Multichannel time series analysis with digital computer programs: Holden-Day. Robinson, E. A., and O. M. Osman, eds., 99, Deconvolution : SEG. Robinson, E. A., and S. Treitel, 98, Geophysical signal analysis: Prentice- Hall, Inc. Rosenberger, A., H. Meyer, and B. Buttkus, 999, A multichannel approach to long-period multiple prediction and attenuation: Geophysical Prospecting, 7, 9 9. Taner, M. T., 98, Long-period sea-floor multiples and their suppression: Geophysical Prospecting, 8, 8. Taner, M. T., R. F. O Doherty, and F. Koehler, 99, Long-period multiples suppression by predictive deconvolution in the x-t domain: Geophysical Prospecting,, 8. Trefethen, L.., and D. Bau, 997, umerical linear algebra: Society for Industrial and Applied Mathematics. Treitel, S., 97, Principles of digital multichannel filtering: Geophysics,, Treitel, S., and E. A. Robinson, 9, The stability of digital filters: Institute of Electrical and Electronics Engineers, Inc. Transactions on Geoscience Electronics, GE-, 8., 9, The design of high-resolution digital filters: Institute of Electrical and Electronics Engineers, Inc. Transactions on Geoscience Electronics, GE-, 8. Treitel, S., and R. J. Wang, 97, The determination of digital wiener filters from an ill-conditioned system of normal equations: Geophysical Prospecting,, 7 7. Ulrych, T. J., D. R. Velis, and M. D. Sacchi, 99, Wavelet estimation revisited: The Leading Edge,, 9. Ursin, B., and M. J. Porsani,, Estimation of an optimal mixed-phase inverse filter: Geophysical Prospecting, 8, 7. Wiggins, R. A., and E. A. Robinson, 9, Recursive solution to the multichannel filtering problem: Journal of Geophysical Research, 7, Yilmaz, Ö., 987, Seismic data processing: SEG.
