Optimum Sampling Vectors for Wiener Filter Noise Reduction


IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 50, NO. 1, JANUARY 2002

Optimum Sampling Vectors for Wiener Filter Noise Reduction

Yukihiko Yamashita, Member, IEEE

Abstract: Sampling is a very important and basic technique for signal processing. When noise is added to a signal in the sampling process, we may use a reconstruction and noise reduction filter such as the Wiener filter. The Wiener filter provides a restored signal whose mean square error is minimized. However, the mean square error achieved by the Wiener filter depends on the sampling vectors, and we may have freedom to construct them. In this paper, we provide optimum sampling vectors under the condition that the Wiener filter is used for noise reduction, for two cases wherein the noise is added before/after sampling. The sampling vectors provided in this paper may not be practical since they are very complicated. However, the minimum mean square error, which we derive theoretically, can be used for evaluating other sampling vectors. We provide all proofs of the theorems and lemmas. Furthermore, experimental results demonstrate their advantages.

Index Terms: Relative Karhunen-Loève transform, sampling, Wiener filter.

I. INTRODUCTION

Sampling is a very important and basic technique for inputting a signal into a computer for signal processing [1]-[7]. Furthermore, sampling theories can be applied to data compression and pattern recognition by using them for feature extraction. In this paper, we consider the case wherein the sampled data are obtained as inner products between a signal and the sampling vectors. Many sampling processes can be expressed by this model. When the dimension of the original signal space exceeds the number of sampling points, we may not be able to reconstruct the original signal from the sampled data. Furthermore, the sampled data may be degraded with noise. In those cases, we often use a reconstruction and noise reduction filter such as the Wiener filter (WF) [11], [17].
The WF provides a restored signal whose mean square error is minimized. However, the mean square error achieved by the WF depends on the sampling vectors, and we may have freedom to construct them. This raises the problem of which sampling vectors are optimum. When the signal contains no noise, it is clear that the optimum sampling vectors are given by the Karhunen-Loève transform (KLT) [9]-[12]. When the signal contains noise, we previously provided sampling vectors that minimize the mean square error between the original signal and the signal restored by the WF under the condition that the noise is uniform and uncorrelated [8].

Manuscript received November 30, 1999; revised September 28. The associate editor coordinating the review of this paper and approving it for publication was Prof. Gregori Vazquez. The author is with the Department of International Development Engineering, Tokyo Institute of Technology, Tokyo, Japan (e-mail: yamasita@ide.titech.ac.jp). Publisher Item Identifier S X(02).

In this paper, we provide optimum sampling vectors for the following two cases without such a restriction. One case is that the noise is added before sampling; here, the optimum sampling vectors are given by using the relative KLT (RKLT) [13]. The RKLT is an extension of the KLT to the case that noise is added to the signal. [Hua and Liu extended the RKLT to the general KLT (GKLT) [14].] The other case is that the noise is added after sampling. The results in [8] are obtained as corollaries of the theorems in this paper. The sampling vectors provided in this paper may not be practical since they are very complicated. However, the minimum mean square error, which we derive theoretically, can be used for evaluating other sampling vectors. By comparing with the theoretical minimum value, we can know whether the sampling vectors can be improved in the sense of mean square error. Usually, the meaning of sampling is to obtain discrete data from a continuous signal.
Although our theorems are for discrete-discrete sampling, they can be used in the following cases. When the support of a signal is bounded, or we can assume so approximately, we can form the signal vector from the coefficients of the Fourier expansion of the signal with a sufficiently large finite dimension. This sufficiently approximates the continuous case since, usually, the high-frequency components of the signal are small, and the results of the theorems in this paper mainly depend on the large eigenvalues of the correlation matrix of signals. Furthermore, the combination of simple sampling at a high sampling rate with complicated subsampling by digital signal processing can provide better sampled data than direct sampling; the subsampling is discrete-discrete sampling. In the field of principal component analysis (PCA) neural networks, Diamantaras [15], [16] has provided a theorem equivalent to Theorem 2 in this paper. However, the proof is not complete, since the method of Lagrange multipliers was used with equations only, not with inequalities, for the inequality conditions. In order to prove the theorem, the convexity of the evaluation function should be considered in detail. Therefore, we provide a strict proof of the theorem in this paper. Since [8] provided neither experiments for the case that the noise is added before sampling nor the proofs of the theorems and lemmas, we demonstrate the advantages of the optimum sampling vectors for both cases against periodic sampling by experimental results, and we show all proofs in this paper.
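The sampling model described in the introduction, where each sampled value is an inner product between the signal and a sampling vector, can be sketched numerically. This is an illustrative sketch, not the paper's own code; the dimensions, the noise level, and the use of NumPy are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

N, M = 8, 4                       # dimension of the signal space and number of sampling vectors
S = rng.standard_normal((M, N))   # rows of S are the sampling vectors

x = rng.standard_normal(N)        # an original signal
n = 0.1 * rng.standard_normal(M)  # additive noise (here: added after sampling)

# Each sampled value is the inner product between the signal and a sampling vector.
y = S @ x + n

assert y.shape == (M,)
```

With noise added before sampling, the model would instead be `y = S @ (x + n)` with `n` living in the N-dimensional signal space; the paper treats the two cases separately.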

A. Mathematical Preliminaries

The following notations and terminologies are used in this paper. Let be an -dimensional Euclidean space. Let and be the inner product and the norm in , respectively. Let be the ensemble average of a stochastic signal in . Let and be the range and the null space of a matrix, respectively. Let , rank , and be the transpose, the rank, and the trace of a matrix, respectively. For any matrix, there exists a unique matrix [18], [19] such that , , , and . The matrix is called the Moore-Penrose generalized inverse of . An -matrix is said to be non-negative definite if and only if for every . An -matrix is said to be positive definite if and only if for every . They are denoted by and , respectively. For any symmetric non-negative definite matrix, there exists a unique symmetric non-negative definite matrix such that .

We now explain the RKLT. The KLT provides the best approximation of a stochastic signal under the condition that its rank is fixed. However, when noise is added to the signal, it is not optimum in general. Let be a transform matrix. Let be a noise. Since an approximation of is given as with , the RKLT of rank (or of rank not greater than ) is defined as a matrix that minimizes under the condition that rank (or rank ). We assume that and are uncorrelated. We provide a solution of the RKLT of rank not greater than in the following Proposition 1, whose form is slightly different from the original one in [13]; it is simpler than the original form for numerical calculation. We show the proof of Proposition 1 in the Appendix. Let and be correlation matrices with respect to the signal and noise ensembles, defined as (1).

II. OPTIMUM SAMPLING

Let be a set of sampling vectors. We assume that a sampled value is given as the inner product between a signal and a sampling vector. Let be the th element of a vector . Let be the sampled data. Let be a restoration matrix.
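The Moore-Penrose generalized inverse defined in the preliminaries is characterized by four conditions, which can be checked numerically. A sketch assuming NumPy, whose `np.linalg.pinv` computes this inverse:

```python
import numpy as np

rng = np.random.default_rng(1)
# A rank-deficient 5x4 matrix (rank at most 3).
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 4))

A_pinv = np.linalg.pinv(A)  # Moore-Penrose generalized inverse

# The four Penrose conditions that uniquely determine the generalized inverse:
assert np.allclose(A @ A_pinv @ A, A)            # A A+ A = A
assert np.allclose(A_pinv @ A @ A_pinv, A_pinv)  # A+ A A+ = A+
assert np.allclose((A @ A_pinv).T, A @ A_pinv)   # A A+ is symmetric
assert np.allclose((A_pinv @ A).T, A_pinv @ A)   # A+ A is symmetric
```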
In this paper, we consider the criterion of minimizing (6). It is an optimum restoration problem to minimize (6) with respect to , and an optimum sampling problem to minimize (6) with respect to . Let be a natural basis in . It follows that (7). Then, the sampling matrix is given as (8). When we neglect the effect of noise, the sampled vector is given as (9). However, it often happens that the sampled vector is degraded with noise. We discuss two cases for the noise: one is that the noise is added before sampling, and the other after. When , it is called subsampling. When , it is called oversampling. They are different problems. When noise is added after sampling, we provide unified theorems for both cases. When noise is added before sampling, it is no use to oversample a signal.

A. Optimum Sampling when Noise Is Added Before Sampling

In the case that noise is added before sampling, the sampled data are given as (10). We define a matrix as in (2)-(4). This is equivalent to (11). In this case, (6) is given by (12). Let and be the eigenvalues and a set of corresponding eigenvectors of , respectively. We choose as an orthonormal basis.

Proposition 1: An RKLT of rank not greater than is given as (5).

We assume that and are uncorrelated. When we fix and minimize (12) with respect to , we obtain a WF from this criterion; a WF is given as (13). However, we do not use (13) to solve this problem. For the proof, we solve this problem by using Proposition 1, with the same notation as in Proposition 1.
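The WF in (13) is stated with its symbols elided in this copy. For the noise-before-sampling model, a standard closed form can be sketched as follows, assuming zero-mean, uncorrelated signal and noise with correlation matrices R and Q (all names here are illustrative, not the paper's notation):

```python
import numpy as np

rng = np.random.default_rng(2)
N, M = 6, 3

# Assumed correlation matrices (made symmetric positive definite here):
# R = E[x x^T] for the signal, Q = E[n n^T] for the noise.
B = rng.standard_normal((N, N)); R = B @ B.T + np.eye(N)
C = rng.standard_normal((N, N)); Q = 0.1 * (C @ C.T) + 0.01 * np.eye(N)

S = rng.standard_normal((M, N))   # sampling matrix, rows = sampling vectors

# Noise added before sampling: y = S (x + n).  A standard Wiener restoration is
#   X = R S^T (S (R + Q) S^T)^+,
# which minimizes E || x - X y ||^2 over all linear restorations X.
X = R @ S.T @ np.linalg.pinv(S @ (R + Q) @ S.T)

# Normal equation of the minimization: E[(x - X y) y^T] = 0.
assert np.allclose(X @ (S @ (R + Q) @ S.T), R @ S.T)
```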

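Proposition 1 computes the RKLT from a singular value decomposition. Since the paper's own equations are elided in this copy, the sketch below assumes the closed form known from the RKLT literature: truncate the SVD of R(R+Q)^(-1/2) to rank K, then multiply by (R+Q)^(-1/2). All variable names are illustrative:

```python
import numpy as np

def rklt(R: np.ndarray, Q: np.ndarray, K: int) -> np.ndarray:
    """Rank-K RKLT sketch: minimize E||x - T(x + n)||^2 over rank(T) <= K,
    assuming R = E[x x^T], Q = E[n n^T], x and n uncorrelated, R + Q invertible."""
    lam, V = np.linalg.eigh(R + Q)
    G_inv_half = V @ np.diag(1.0 / np.sqrt(lam)) @ V.T  # (R+Q)^(-1/2)
    U, s, Vt = np.linalg.svd(R @ G_inv_half)
    s[K:] = 0.0                                          # keep the K largest singular values
    return U @ np.diag(s) @ Vt @ G_inv_half

rng = np.random.default_rng(4)
N = 5
B = rng.standard_normal((N, N)); R = B @ B.T
C = rng.standard_normal((N, N)); Q = 0.2 * (C @ C.T) + 0.1 * np.eye(N)

# With K = N the rank restriction is inactive and the RKLT reduces to the
# Wiener filter R (R + Q)^(-1), which is a useful sanity check.
T_full = rklt(R, Q, N)
WF = R @ np.linalg.inv(R + Q)
assert np.allclose(T_full, WF)
```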
Theorem 1: When noise is added before sampling, optimum sampling vectors are given as (14). A restoration matrix is given as (15). The minimum value of is given as (16).

From this criterion, we can see that in (15) is a WF when is given by (14). Note that (14) is a sufficient condition; for example, with constants are also optimum sampling vectors. It is no use to set since rank . This means that it is impossible to reduce noise by oversampling when noise is added before sampling.

B. Optimum Sampling when Noise Is Added After Sampling

When noise is added after sampling, the sampled data are given as (17). This is equivalent to (18). Criterion (6) is given as (19). In this model, since the larger the norms of are, the smaller the effect of the noise is, we have to normalize them. Here, we normalize the total power of the sampling vectors, that is, (20) with . In this subsection, we assume that and . When we fix , the solution minimizing (19) with respect to is given by a WF [11], [17]; in this case, it is described as (21). Then, is minimized for subject to (8) and (20). Let and be the eigenvalues and a set of corresponding eigenvectors of . Let and be those of . We choose and as orthonormal bases. Let be the smaller of and . In this case, we have the following theorem for optimum sampling vectors.

Theorem 2: When noise is added after sampling, optimum sampling vectors are given as (22), where (23) and (24), with the maximum integer subject to (25) and (26). The minimum value of is given as (27).

In order to determine , we have to scan and find the maximum subject to (25) and (26), since also depends on . From (22), we have (28). Then, since (24) yields that corresponds to , we can consider that, for optimum sampling, a component whose signal variance is large has to be transformed onto a component whose noise variance is small.
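The pairing idea behind Theorem 2, that large-variance signal components should be routed to small-variance noise components, can be illustrated with eigen-decompositions. A sketch with assumed correlation matrices (the paper's symbols are elided in this copy):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 5

# Assumed correlation matrices R (signal) and Q (noise).
B = rng.standard_normal((N, N)); R = B @ B.T
C = rng.standard_normal((N, N)); Q = C @ C.T

# Eigen-decompositions of R and Q (np.linalg.eigh returns ascending eigenvalues).
mu, U = np.linalg.eigh(R)
nu, V = np.linalg.eigh(Q)

# Pairing suggested by (28): a component with LARGE signal variance is
# transformed onto a component with SMALL noise variance.
signal_order = np.argsort(mu)[::-1]  # largest signal eigenvalue first
noise_order = np.argsort(nu)         # smallest noise eigenvalue first

# pairs[k] = (index of k-th largest signal eigenvalue,
#             index of k-th smallest noise eigenvalue)
pairs = list(zip(signal_order, noise_order))
```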
In the field of PCA neural networks, Diamantaras [15], [16] has provided a theorem equivalent to Theorem 2. However, the proof is not complete, since the method of Lagrange multipliers was used with equations only, not with inequalities, for the inequality conditions. In order to prove the theorem, the convexity of the evaluation function should be considered in detail. Furthermore, the proof of PCA introduced in [15] is not complete, as Ogawa pointed out [12]: it uses the greedy method, which does not guarantee global minimization.

Let be a unit matrix. When the noise is uniform and uncorrelated, that is, , then, since for all , every orthonormal basis in is a set of eigenvectors of , and we have the following corollary.

Corollary 1: When , optimum sampling vectors are given as (29).

Fig. 1. Mean square error when noise is added before sampling.

Theorem 3: When , , optimum sampling vectors are given as (30), with any orthonormal system and the maximum integer under the condition that the content of the square root in (30) is not negative. The minimum value of is given as (31).

We consider the case wherein and . We can provide the solution when the restriction is not (20) but that all are the same. Let be the Walsh basis in . We have (32)-(34), and the th element of is 1 or , with an integer. and the minimum value of are the same as those in Corollary 1.

III. EXPERIMENTAL RESULTS

We compare the mean square errors between the original and restored signals for the optimum and the periodic samplings. We set the dimension of the original space to and the number of sampling vectors to , 2, 4, 8, 16, 32, 64, 128, and 256. This experiment includes both subsampling and oversampling. When noise is added before sampling, we are restricted to for the optimum sampling. In the case of periodic sampling with , the data are sampled several times at the same point. However, the sampled values at the same point are different in general when noise is added after sampling. We assume that, for a real number , the correlation matrix of the signal is given as (35).
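The Walsh basis invoked for the equal-norm restriction has every element equal to +1 or -1 up to normalization. A sketch of the Sylvester construction of such a basis (an assumption about the construction used; the paper's formulas are elided in this copy):

```python
import numpy as np

def walsh_hadamard(n: int) -> np.ndarray:
    """Sylvester construction of an n x n Hadamard matrix, n a power of two.
    Every entry is +1 or -1, and the rows are mutually orthogonal."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

n = 8
H = walsh_hadamard(n)
W = H / np.sqrt(n)  # orthonormal rows; every element is +-1/sqrt(n)

assert np.allclose(W @ W.T, np.eye(n))            # orthonormal system
assert np.allclose(np.abs(W), 1.0 / np.sqrt(n))   # all vectors have identical element magnitudes
```

Because every row has the same norm and the same element magnitude, all sampling vectors share the total power equally, which is the restriction discussed above.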

Fig. 2. Optimum sampling and restoration vectors when noise is added before sampling ( = 0.95, = 0.2). Fig. 3. Mean square error when noise is added after sampling.

where is the -element of . For a real number , is the largest integer not less than . The periodic sampling vectors are given as (36) when , or (37) when . The optimum sampling vectors are provided by Theorem 1 when noise is added before sampling and by Theorem 3 when noise is added after sampling. In Theorem 3, we set as ; then, the norm of the sampling vectors is fixed to one.

Fig. 1 illustrates the mean square error versus the number of sampling vectors when noise is added before sampling. From Fig. 1, we can see the advantage of the optimum sampling when . When the correlations of the signals are large, the advantage is also large. Fig. 2 illustrates the sampling vectors and the corresponding restoration vectors when noise is added before sampling. We can see that the vectors look somewhat like sinusoidal functions.
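The experimental setup might be reproduced along the following lines. The formulas for the correlation matrix (35) and the periodic sampling vectors (36)/(37) are elided in this copy, so the exponential correlation model and the unit-impulse periodic vectors below are assumptions; the value r = 0.95 follows the parameter shown in the Fig. 2 caption:

```python
import numpy as np

N, M = 16, 4
r = 0.95  # correlation parameter (assumed meaning of the parameter in Fig. 2)

# Assumed exponential model for the signal correlation matrix: R_ij = r^|i - j|.
i, j = np.indices((N, N))
R = r ** np.abs(i - j)

# Assumed periodic sampling vectors for M <= N: each vector picks every
# (N // M)-th coordinate of the signal.
step = N // M
S = np.zeros((M, N))
for k in range(M):
    S[k, k * step] = 1.0

assert np.isclose(R[0, 0], 1.0)      # unit variance on the diagonal
assert np.allclose(S.sum(axis=1), 1.0)
```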

Fig. 4. Dimension K of the subspace spanned by the sampling vectors when noise is added after sampling. Fig. 5. Optimum sampling and restoration vectors when noise is added after sampling ( = 0.95, = 0.2).

Fig. 3 illustrates the mean square error versus the number of sampling vectors when noise is added after sampling. From Fig. 3, we can also see the advantage of the optimum sampling. Fig. 4 illustrates the dimension of the subspace spanned by versus when noise is added after sampling. For the optimum sampling, this value coincides with in Theorems 2 and 3. For the periodic sampling, this value coincides with . The maximum dimension of the subspace spanned by vectors is . From Fig. 4, we can see that there exist cases where . The reason is that, in such cases, we can reduce the mean square error caused by noise by decreasing the dimension. When noise is added before sampling, this dimension is equal to in order to minimize the mean square error. Fig. 5 illustrates the sampling vectors and the corresponding restoration vectors when noise is added after sampling. We can see that the vectors are very complicated.

IV. CONCLUSION

We have explained the RKLT. By using the RKLT, we provided optimum sampling vectors that minimize the mean square error between the original signal and the restored signal

when noise is added before sampling. We also provided optimum sampling vectors when noise is added after sampling. By experimental results, we showed their advantages. Since the theorems we provided are for discrete-discrete sampling, they should be extended to continuous-discrete sampling.

APPENDIX
PROOFS

A. Proof of Proposition 1

The original form of the RKLT is given as follows. We define a matrix as (A.1). Let be a set of vectors of a singular value decomposition (SVD) of such that (A.2), where , and both and are orthonormal bases in . An RKLT of rank not greater than is given as (A.3). Since is a set of vectors of an SVD of , we have (A.4) and (A.5). Equations (A.4) and (A.5) yield (A.6). Equation (A.6) yields that is the eigenvector corresponding to the eigenvalue of . Equations (A.3) and (A.5) yield (A.7).

Lemma 1: For in (8), we have (A.8) and (A.9).

Lemma 2: When is a Wiener filter, in (19) is given as (A.10).

Lemma 3: Let be an matrix such that . Under this condition, if minimizes in (A.10), we have (A.11) with a Lagrange multiplier .

Lemma 4: For symmetric matrices and , if and is a symmetric matrix, we have (A.12) and (A.13).

From Lemma 1, we consider minimizing among matrices such that . From Lemmas 2 and 3, its solution has to satisfy (A.11). By multiplying (A.11) by from the right-hand side, since is a symmetric matrix, Lemma 4 yields that and are commutative. Then, we can assume that the eigenvectors of are given by without loss of generality. Since is a basis, we can expand as (A.14) with a set of vectors . Since is a set of eigenvectors of and an orthonormal basis, we have (A.15) when . From (A.15), we can assume (A.16) with an orthonormal basis and a set of real numbers .

B. Proof of Theorem 1

Since rank , Proposition 1 yields that the criterion is minimum if and only if is an RKLT of rank not greater than . The minimum value of is easily obtained from [13]. This completes the proof.

C.
Proof of Theorem 2

For the proof of Theorem 2, we provide the following lemmas; their proofs are also given in this Appendix. We define real numbers as (A.17) and (A.18). Since is a positive definite matrix, . Equations (A.10), (A.14), and (A.16)-(A.18) yield (A.19).

The problem that we have to solve is minimizing in (A.19) under conditions (A.20) and (A.21) with respect to and , where are given in (A.18). First, we fix and minimize in (A.19) with respect to . We define a functional (A.22) with a Lagrange multiplier . If minimizes , we have (A.23). Equation (A.23) yields (A.24). We then return to the problem where are not fixed and have the following lemmas.

Lemma 5: If and minimize , we can assume without loss of generality that there exists an integer such that (A.25)-(A.27) hold.

Lemma 6: If and minimize , we have (A.28).

Lemma 7: Let (, ) be integers such that if , and (A.29). Then, minimizes under the conditions (A.18) and (A.28) if (A.30).

From Lemma 6, (A.28) holds. This yields that the numerator of the first term in (A.27) is minimum if and only if (A.30) holds. On the other hand, since , (A.18) and (A.28) yield that the denominator of the first term in (A.27) is maximum if (A.31) holds. From these two results and (A.18), when is minimum, we have (A.32). Then, (A.16), (A.25), and (A.26) yield (A.33), and (A.34) and (A.35) hold for every . Let be the maximum integer under (25) and (26). Let be an integer such that . Since are monotone increasing, the minimum for coincides with the minimum for under the condition that for . Then, the minimum for is not less than the minimum for . Since , we have . This completes the proof.

D. Proof of Theorem 3

Briefly, we let (A.36), and we use the notations in the proof of Theorem 2. Since , (34) and (A.20) yield that has to satisfy for every .

Equation (34) yields (A.37). Let . It follows that (A.38), so that is an orthonormal system. Equations (A.36) and (A.37) and Corollary 1 yield that are the optimum sampling vectors.

E. Proof of Lemma 1

Equation (8) yields (A.8) and (A.9). This completes the proof.

F. Proof of Lemma 2

Since we assume that and are uncorrelated, (2), (3), and (19) yield (A.39). By substituting (21) into (A.39), the Sherman-Morrison-Woodbury formula (the matrix inversion lemma) [20] yields (A.40). Since , we have (A.10). This completes the proof.

G. Proof of Lemma 3

When minimizes under the condition that , the variation of (A.41) with respect to has to be 0 for any small , with a Lagrange multiplier [17]. Neglecting terms of second and higher degree in , (A.39) yields (A.11). This completes the proof.

H. Proof of Lemma 4

Since , , and are symmetric matrices, (A.42) holds. By multiplying by from both sides, we have (A.43). It follows that (A.44), so that we have (A.45). Equation (A.45) yields (A.46).
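The Sherman-Morrison-Woodbury formula invoked in the proof of Lemma 2 can be verified numerically. A sketch with assumed matrix sizes, checking the identity (A + U C V)^(-1) = A^(-1) - A^(-1) U (C^(-1) + V A^(-1) U)^(-1) V A^(-1):

```python
import numpy as np

rng = np.random.default_rng(5)
N, M = 6, 3
A = rng.standard_normal((N, N)); A = A @ A.T + np.eye(N)  # invertible N x N
U = rng.standard_normal((N, M))
C = rng.standard_normal((M, M)); C = C @ C.T + np.eye(M)  # invertible M x M
V = rng.standard_normal((M, N))

# Sherman-Morrison-Woodbury formula (the matrix inversion lemma):
Ai = np.linalg.inv(A)
lhs = np.linalg.inv(A + U @ C @ V)
rhs = Ai - Ai @ U @ np.linalg.inv(np.linalg.inv(C) + V @ Ai @ U) @ V @ Ai

assert np.allclose(lhs, rhs)
```

This is the step that lets the proof replace an inverse of a large (signal-space) matrix by an inverse of a small (sample-space) matrix.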

I. Proof of Lemma 5

We assume that and are the solutions of this problem. Suppose that and for an integer . We define real numbers and vectors as (A.47) and (A.48), and let . Since and satisfy the conditions of the problem, and also satisfy them. When , the value of obtained by using and is less than that obtained by using and . From this contradiction, we can assume that there exists an integer such that if and only if . (When , we can also assume this without loss of generality.) Then, (A.19) and (A.21) yield (A.25)-(A.27). The condition for is clear from (A.26) and rank .

J. Proof of Lemma 6

We assume that and are the solutions of this problem. Let ; is an orthogonal projection matrix. Suppose that . Since is a symmetric matrix, we have (A.49), so that there exists an integer such that . For a real number , we define vectors as (A.50). Let . From (A.19) and (A.21), it is clear that, by using , can be made smaller than by using . This contradiction yields that for every . Since if , the lemma is proved.

K. Proof of Lemma 7

We assume that minimize under (A.28). For an integer , we assume that for every , for mathematical induction. Suppose that . From (A.28) and this assumption, we can consider that are included in the subspace spanned by . It is clear that is not an eigenvector of . However, if for every , then is an eigenvector of . Therefore, there exists an integer such that . For a real number , we define as (A.53). Let . We have (A.54). It is clear that (A.55) and (A.56) hold for all , so that satisfy (A.28). Since , there exists such that (A.51). Let . Since is a non-negative definite matrix, yields that and . Then, there exists such that and for all . From (A.19) and (A.21), , by using , can be made smaller than by using . This contradiction yields that . Therefore, we assume that the range of coincides with the subspace spanned by eigenvectors of , where if .
Let be real numbers such that (A.51). Since , and both and are orthonormal bases of the range of , we have (A.52). Since is upwards convex, (A.28) yields (A.57) and (A.58). Equation (A.58) yields that . We can prove this lemma by mathematical induction.

REFERENCES

[1] C. E. Shannon, "Communication in the presence of noise," Proc. IRE, vol. 37, Jan.
[2] H. P. Kramer, "A generalized sampling theorem," J. Math. Phys., vol. 38, no. 1, Apr.
[3] A. Papoulis, "Error analysis in sampling theory," Proc. IEEE, vol. 54, no. 7, July.
[4] A. J. Jerri, "The Shannon sampling theorem: Its various extensions and applications: A tutorial review," Proc. IEEE, vol. 65, Nov.
[5] H. Ogawa, "A generalized sampling theorem," Trans. IEICE, Part A, vol. J71-A, no. 2, Feb.

[6] A. I. Zayed, Advances in Shannon's Sampling Theory. New York: CRC.
[7] J. R. Higgins, Sampling Theory in Fourier and Signal Analysis: Foundations. Oxford, U.K.: Clarendon.
[8] Y. Yamashita and H. Ogawa, "Relative Karhunen-Loève transform and optimum sampling," in Proc. Fourth Int. Conf. Optim.: Techn. Appl., vol. 2, July 1-3, 1998.
[9] P. A. Devijver and J. Kittler, Pattern Recognition: A Statistical Approach. Englewood Cliffs, NJ: Prentice-Hall.
[10] E. Oja, Subspace Methods of Pattern Recognition. Letchworth, Hertfordshire, U.K.: Research Studies.
[11] H. Ogawa and E. Oja, "Projection filter, Wiener filter, and Karhunen-Loève subspaces in digital image restoration," J. Math. Anal. Appl., vol. 114, no. 1, Feb.
[12] H. Ogawa, "Karhunen-Loève subspace," in Proc. 11th IAPR Int. Conf. Pattern Recognition, vol. 2, The Hague, The Netherlands, Aug.-Sept. 1992.
[13] Y. Yamashita and H. Ogawa, "Relative Karhunen-Loève transform," IEEE Trans. Signal Processing, vol. 44, Feb.
[14] Y. Hua and W. Q. Liu, "Generalized Karhunen-Loève transform," IEEE Signal Processing Lett., vol. 5, June.
[15] K. I. Diamantaras and S. Y. Kung, Principal Component Neural Networks: Theory and Applications. New York: Wiley.
[16] K. I. Diamantaras, K. Hornik, and M. G. Strintzis, "Optimal linear compression under unreliable representation and robust PCA neural models," IEEE Trans. Neural Networks, vol. 10, Sept.
[17] D. G. Luenberger, Optimization by Vector Space Methods. New York: Wiley.
[18] A. Albert, Regression and the Moore-Penrose Pseudoinverse. London, U.K.: Academic.
[19] A. Ben-Israel and T. N. E. Greville, Generalized Inverses: Theory and Applications. New York: Wiley.
[20] W. W. Hager, "Updating the inverse of a matrix," SIAM Rev., vol. 31, no. 2, June.

Yukihiko Yamashita (M'94) was born in 1960 in Kanagawa, Japan. He received the B.E., M.E., and Dr. Eng.
degrees from the Tokyo Institute of Technology, Tokyo, Japan, in 1983, 1985, and 1993, respectively. From 1985 to 1988, he was with the Japan Atomic Energy Research Institute. From 1988 to 1989, he was with the ISAC Corporation. In 1989, he joined the faculty of the Tokyo Institute of Technology, where he is now an associate professor with the Graduate School of Science and Engineering. His research interests include pattern recognition and image processing. Dr. Yamashita received a Paper Award in 1993 from the Institute of Electronics, Information and Communication Engineers of Japan (IEICE). He is a member of the IEICE, the Information Processing Society of Japan, and the Audio Visual Information Research Group of Japan.


More information

Closed-Form Design of Maximally Flat IIR Half-Band Filters

Closed-Form Design of Maximally Flat IIR Half-Band Filters IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II: ANALOG AND DIGITAL SIGNAL PROCESSING, VOL. 49, NO. 6, JUNE 2002 409 Closed-Form Design of Maximally Flat IIR Half-B Filters Xi Zhang, Senior Member, IEEE,

More information

IN THIS PAPER, we consider a class of continuous-time recurrent

IN THIS PAPER, we consider a class of continuous-time recurrent IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II: EXPRESS BRIEFS, VOL. 51, NO. 4, APRIL 2004 161 Global Output Convergence of a Class of Continuous-Time Recurrent Neural Networks With Time-Varying Thresholds

More information

Weighted Least-Squares Method for Designing Variable Fractional Delay 2-D FIR Digital Filters

Weighted Least-Squares Method for Designing Variable Fractional Delay 2-D FIR Digital Filters 114 IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II: ANALOG AND DIGITAL SIGNAL PROCESSING, VOL 47, NO 2, FEBRUARY 2000 Weighted Least-Squares Method for Designing Variable Fractional Delay 2-D FIR Digital

More information

c Springer, Reprinted with permission.

c Springer, Reprinted with permission. Zhijian Yuan and Erkki Oja. A FastICA Algorithm for Non-negative Independent Component Analysis. In Puntonet, Carlos G.; Prieto, Alberto (Eds.), Proceedings of the Fifth International Symposium on Independent

More information

Signal Analysis. Principal Component Analysis

Signal Analysis. Principal Component Analysis Multi dimensional Signal Analysis Lecture 2E Principal Component Analysis Subspace representation Note! Given avector space V of dimension N a scalar product defined by G 0 a subspace U of dimension M

More information

REGULARIZATION PARAMETER SELECTION IN DISCRETE ILL POSED PROBLEMS THE USE OF THE U CURVE

REGULARIZATION PARAMETER SELECTION IN DISCRETE ILL POSED PROBLEMS THE USE OF THE U CURVE Int. J. Appl. Math. Comput. Sci., 007, Vol. 17, No., 157 164 DOI: 10.478/v10006-007-0014-3 REGULARIZATION PARAMETER SELECTION IN DISCRETE ILL POSED PROBLEMS THE USE OF THE U CURVE DOROTA KRAWCZYK-STAŃDO,

More information

Principal Component Analysis CS498

Principal Component Analysis CS498 Principal Component Analysis CS498 Today s lecture Adaptive Feature Extraction Principal Component Analysis How, why, when, which A dual goal Find a good representation The features part Reduce redundancy

More information

A PREDICTOR-CORRECTOR PATH-FOLLOWING ALGORITHM FOR SYMMETRIC OPTIMIZATION BASED ON DARVAY'S TECHNIQUE

A PREDICTOR-CORRECTOR PATH-FOLLOWING ALGORITHM FOR SYMMETRIC OPTIMIZATION BASED ON DARVAY'S TECHNIQUE Yugoslav Journal of Operations Research 24 (2014) Number 1, 35-51 DOI: 10.2298/YJOR120904016K A PREDICTOR-CORRECTOR PATH-FOLLOWING ALGORITHM FOR SYMMETRIC OPTIMIZATION BASED ON DARVAY'S TECHNIQUE BEHROUZ

More information

COMPLEX SIGNALS are used in various areas of signal

COMPLEX SIGNALS are used in various areas of signal IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 45, NO. 2, FEBRUARY 1997 411 Second-Order Statistics of Complex Signals Bernard Picinbono, Fellow, IEEE, and Pascal Bondon, Member, IEEE Abstract The second-order

More information

On Solving Large Algebraic. Riccati Matrix Equations

On Solving Large Algebraic. Riccati Matrix Equations International Mathematical Forum, 5, 2010, no. 33, 1637-1644 On Solving Large Algebraic Riccati Matrix Equations Amer Kaabi Department of Basic Science Khoramshahr Marine Science and Technology University

More information

MOMENT functions are used in several computer vision

MOMENT functions are used in several computer vision IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 13, NO. 8, AUGUST 2004 1055 Some Computational Aspects of Discrete Orthonormal Moments R. Mukundan, Senior Member, IEEE Abstract Discrete orthogonal moments

More information

Face Recognition Using Multi-viewpoint Patterns for Robot Vision

Face Recognition Using Multi-viewpoint Patterns for Robot Vision 11th International Symposium of Robotics Research (ISRR2003), pp.192-201, 2003 Face Recognition Using Multi-viewpoint Patterns for Robot Vision Kazuhiro Fukui and Osamu Yamaguchi Corporate Research and

More information

Information-Preserving Transformations for Signal Parameter Estimation

Information-Preserving Transformations for Signal Parameter Estimation 866 IEEE SIGNAL PROCESSING LETTERS, VOL. 21, NO. 7, JULY 2014 Information-Preserving Transformations for Signal Parameter Estimation Manuel Stein, Mario Castañeda, Amine Mezghani, and Josef A. Nossek Abstract

More information

Comparative Performance Analysis of Three Algorithms for Principal Component Analysis

Comparative Performance Analysis of Three Algorithms for Principal Component Analysis 84 R. LANDQVIST, A. MOHAMMED, COMPARATIVE PERFORMANCE ANALYSIS OF THR ALGORITHMS Comparative Performance Analysis of Three Algorithms for Principal Component Analysis Ronnie LANDQVIST, Abbas MOHAMMED Dept.

More information

KLT for transient signal analysis

KLT for transient signal analysis The Labyrinth of the Unepected: unforeseen treasures in impossible regions of phase space Nicolò Antonietti 4/1/17 Kerastari, Greece May 9 th June 3 rd Qualitatively definition of transient signals Signals

More information

QR Factorization Based Blind Channel Identification and Equalization with Second-Order Statistics

QR Factorization Based Blind Channel Identification and Equalization with Second-Order Statistics 60 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL 48, NO 1, JANUARY 2000 QR Factorization Based Blind Channel Identification and Equalization with Second-Order Statistics Xiaohua Li and H (Howard) Fan, Senior

More information

On the Second-Order Statistics of the Weighted Sample Covariance Matrix

On the Second-Order Statistics of the Weighted Sample Covariance Matrix IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 51, NO. 2, FEBRUARY 2003 527 On the Second-Order Statistics of the Weighted Sample Covariance Maix Zhengyuan Xu, Senior Member, IEEE Absact The second-order

More information

5742 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 55, NO. 12, DECEMBER /$ IEEE

5742 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 55, NO. 12, DECEMBER /$ IEEE 5742 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 55, NO. 12, DECEMBER 2009 Uncertainty Relations for Shift-Invariant Analog Signals Yonina C. Eldar, Senior Member, IEEE Abstract The past several years

More information

A Generalized Subspace Approach for Enhancing Speech Corrupted by Colored Noise

A Generalized Subspace Approach for Enhancing Speech Corrupted by Colored Noise 334 IEEE TRANSACTIONS ON SPEECH AND AUDIO PROCESSING, VOL 11, NO 4, JULY 2003 A Generalized Subspace Approach for Enhancing Speech Corrupted by Colored Noise Yi Hu, Student Member, IEEE, and Philipos C

More information

Principal Component Analysis -- PCA (also called Karhunen-Loeve transformation)

Principal Component Analysis -- PCA (also called Karhunen-Loeve transformation) Principal Component Analysis -- PCA (also called Karhunen-Loeve transformation) PCA transforms the original input space into a lower dimensional space, by constructing dimensions that are linear combinations

More information

An Invariance Property of the Generalized Likelihood Ratio Test

An Invariance Property of the Generalized Likelihood Ratio Test 352 IEEE SIGNAL PROCESSING LETTERS, VOL. 10, NO. 12, DECEMBER 2003 An Invariance Property of the Generalized Likelihood Ratio Test Steven M. Kay, Fellow, IEEE, and Joseph R. Gabriel, Member, IEEE Abstract

More information

Chapter 3 Transformations

Chapter 3 Transformations Chapter 3 Transformations An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Linear Transformations A function is called a linear transformation if 1. for every and 2. for every If we fix the bases

More information

Contents. Acknowledgments

Contents. Acknowledgments Table of Preface Acknowledgments Notation page xii xx xxi 1 Signals and systems 1 1.1 Continuous and discrete signals 1 1.2 Unit step and nascent delta functions 4 1.3 Relationship between complex exponentials

More information

Kazuhiro Fukui, University of Tsukuba

Kazuhiro Fukui, University of Tsukuba Subspace Methods Kazuhiro Fukui, University of Tsukuba Synonyms Multiple similarity method Related Concepts Principal component analysis (PCA) Subspace analysis Dimensionality reduction Definition Subspace

More information

Fast adaptive ESPRIT algorithm

Fast adaptive ESPRIT algorithm Fast adaptive ESPRIT algorithm Roland Badeau, Gaël Richard, Bertrand David To cite this version: Roland Badeau, Gaël Richard, Bertrand David. Fast adaptive ESPRIT algorithm. Proc. of IEEE Workshop on Statistical

More information

SPARSE signal representations have gained popularity in recent

SPARSE signal representations have gained popularity in recent 6958 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 10, OCTOBER 2011 Blind Compressed Sensing Sivan Gleichman and Yonina C. Eldar, Senior Member, IEEE Abstract The fundamental principle underlying

More information

HOPFIELD neural networks (HNNs) are a class of nonlinear

HOPFIELD neural networks (HNNs) are a class of nonlinear IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II: EXPRESS BRIEFS, VOL. 52, NO. 4, APRIL 2005 213 Stochastic Noise Process Enhancement of Hopfield Neural Networks Vladimir Pavlović, Member, IEEE, Dan Schonfeld,

More information

of Orthogonal Matching Pursuit

of Orthogonal Matching Pursuit A Sharp Restricted Isometry Constant Bound of Orthogonal Matching Pursuit Qun Mo arxiv:50.0708v [cs.it] 8 Jan 205 Abstract We shall show that if the restricted isometry constant (RIC) δ s+ (A) of the measurement

More information

Karhunen Loéve Expansion of a Set of Rotated Templates

Karhunen Loéve Expansion of a Set of Rotated Templates IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 12, NO. 7, JULY 2003 817 Karhunen Loéve Expansion of a Set of Rotated Templates Matjaž Jogan, Student Member, IEEE, Emil Žagar, and Aleš Leonardis, Member, IEEE

More information

Final Exam, Linear Algebra, Fall, 2003, W. Stephen Wilson

Final Exam, Linear Algebra, Fall, 2003, W. Stephen Wilson Final Exam, Linear Algebra, Fall, 2003, W. Stephen Wilson Name: TA Name and section: NO CALCULATORS, SHOW ALL WORK, NO OTHER PAPERS ON DESK. There is very little actual work to be done on this exam if

More information

Acomplex-valued harmonic with a time-varying phase is a

Acomplex-valued harmonic with a time-varying phase is a IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 46, NO. 9, SEPTEMBER 1998 2315 Instantaneous Frequency Estimation Using the Wigner Distribution with Varying and Data-Driven Window Length Vladimir Katkovnik,

More information

Lapped Unimodular Transform and Its Factorization

Lapped Unimodular Transform and Its Factorization IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL 50, NO 11, NOVEMBER 2002 2695 Lapped Unimodular Transform and Its Factorization See-May Phoong, Member, IEEE, and Yuan-Pei Lin, Member, IEEE Abstract Two types

More information

MOORE-PENROSE INVERSE IN AN INDEFINITE INNER PRODUCT SPACE

MOORE-PENROSE INVERSE IN AN INDEFINITE INNER PRODUCT SPACE J. Appl. Math. & Computing Vol. 19(2005), No. 1-2, pp. 297-310 MOORE-PENROSE INVERSE IN AN INDEFINITE INNER PRODUCT SPACE K. KAMARAJ AND K. C. SIVAKUMAR Abstract. The concept of the Moore-Penrose inverse

More information

Empirical Gramians and Balanced Truncation for Model Reduction of Nonlinear Systems

Empirical Gramians and Balanced Truncation for Model Reduction of Nonlinear Systems Empirical Gramians and Balanced Truncation for Model Reduction of Nonlinear Systems Antoni Ras Departament de Matemàtica Aplicada 4 Universitat Politècnica de Catalunya Lecture goals To review the basic

More information

A Generalized Reverse Jacket Transform

A Generalized Reverse Jacket Transform 684 IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II: ANALOG AND DIGITAL SIGNAL PROCESSING, VOL. 48, NO. 7, JULY 2001 A Generalized Reverse Jacket Transform Moon Ho Lee, Senior Member, IEEE, B. Sundar Rajan,

More information

H State-Feedback Controller Design for Discrete-Time Fuzzy Systems Using Fuzzy Weighting-Dependent Lyapunov Functions

H State-Feedback Controller Design for Discrete-Time Fuzzy Systems Using Fuzzy Weighting-Dependent Lyapunov Functions IEEE TRANSACTIONS ON FUZZY SYSTEMS, VOL 11, NO 2, APRIL 2003 271 H State-Feedback Controller Design for Discrete-Time Fuzzy Systems Using Fuzzy Weighting-Dependent Lyapunov Functions Doo Jin Choi and PooGyeon

More information

798 IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II: ANALOG AND DIGITAL SIGNAL PROCESSING, VOL. 44, NO. 10, OCTOBER 1997

798 IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II: ANALOG AND DIGITAL SIGNAL PROCESSING, VOL. 44, NO. 10, OCTOBER 1997 798 IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II: ANALOG AND DIGITAL SIGNAL PROCESSING, VOL 44, NO 10, OCTOBER 1997 Stochastic Analysis of the Modulator Differential Pulse Code Modulator Rajesh Sharma,

More information

Article (peer-reviewed)

Article (peer-reviewed) Title Author(s) Influence of noise intensity on the spectrum of an oscillator Swain, Rabi Sankar; Gleeson, James P.; Kennedy, Michael Peter Publication date 2005-11 Original citation Type of publication

More information

Ole Christensen 3. October 20, Abstract. We point out some connections between the existing theories for

Ole Christensen 3. October 20, Abstract. We point out some connections between the existing theories for Frames and pseudo-inverses. Ole Christensen 3 October 20, 1994 Abstract We point out some connections between the existing theories for frames and pseudo-inverses. In particular, using the pseudo-inverse

More information

ADAPTIVE FILTER THEORY

ADAPTIVE FILTER THEORY ADAPTIVE FILTER THEORY Fourth Edition Simon Haykin Communications Research Laboratory McMaster University Hamilton, Ontario, Canada Front ice Hall PRENTICE HALL Upper Saddle River, New Jersey 07458 Preface

More information

EDDY-CURRENT nondestructive testing is commonly

EDDY-CURRENT nondestructive testing is commonly IEEE TRANSACTIONS ON MAGNETICS, VOL. 34, NO. 2, MARCH 1998 515 Evaluation of Probe Impedance Due to Thin-Skin Eddy-Current Interaction with Surface Cracks J. R. Bowler and N. Harfield Abstract Crack detection

More information

A Statistical Analysis of Fukunaga Koontz Transform

A Statistical Analysis of Fukunaga Koontz Transform 1 A Statistical Analysis of Fukunaga Koontz Transform Xiaoming Huo Dr. Xiaoming Huo is an assistant professor at the School of Industrial and System Engineering of the Georgia Institute of Technology,

More information

Maximally Flat Lowpass Digital Differentiators

Maximally Flat Lowpass Digital Differentiators Maximally Flat Lowpass Digital Differentiators Ivan W. Selesnick August 3, 00 Electrical Engineering, Polytechnic University 6 Metrotech Center, Brooklyn, NY 0 selesi@taco.poly.edu tel: 78 60-36 fax: 78

More information

Lecture 3: Review of Linear Algebra

Lecture 3: Review of Linear Algebra ECE 83 Fall 2 Statistical Signal Processing instructor: R Nowak, scribe: R Nowak Lecture 3: Review of Linear Algebra Very often in this course we will represent signals as vectors and operators (eg, filters,

More information

Moore-Penrose s inverse and solutions of linear systems

Moore-Penrose s inverse and solutions of linear systems Available online at www.worldscientificnews.com WSN 101 (2018) 246-252 EISSN 2392-2192 SHORT COMMUNICATION Moore-Penrose s inverse and solutions of linear systems J. López-Bonilla*, R. López-Vázquez, S.

More information

Estimation of the Optimum Rotational Parameter for the Fractional Fourier Transform Using Domain Decomposition

Estimation of the Optimum Rotational Parameter for the Fractional Fourier Transform Using Domain Decomposition Estimation of the Optimum Rotational Parameter for the Fractional Fourier Transform Using Domain Decomposition Seema Sud 1 1 The Aerospace Corporation, 4851 Stonecroft Blvd. Chantilly, VA 20151 Abstract

More information

Review of Some Concepts from Linear Algebra: Part 2

Review of Some Concepts from Linear Algebra: Part 2 Review of Some Concepts from Linear Algebra: Part 2 Department of Mathematics Boise State University January 16, 2019 Math 566 Linear Algebra Review: Part 2 January 16, 2019 1 / 22 Vector spaces A set

More information

SIO 211B, Rudnick, adapted from Davis 1

SIO 211B, Rudnick, adapted from Davis 1 SIO 211B, Rudnick, adapted from Davis 1 XVII.Empirical orthogonal functions Often in oceanography we collect large data sets that are time series at a group of locations. Moored current meter arrays do

More information

Application of Principal Component Analysis to TES data

Application of Principal Component Analysis to TES data Application of Principal Component Analysis to TES data Clive D Rodgers Clarendon Laboratory University of Oxford Madison, Wisconsin, 27th April 2006 1 My take on the PCA business 2/41 What is the best

More information

MULTICHANNEL SIGNAL PROCESSING USING SPATIAL RANK COVARIANCE MATRICES

MULTICHANNEL SIGNAL PROCESSING USING SPATIAL RANK COVARIANCE MATRICES MULTICHANNEL SIGNAL PROCESSING USING SPATIAL RANK COVARIANCE MATRICES S. Visuri 1 H. Oja V. Koivunen 1 1 Signal Processing Lab. Dept. of Statistics Tampere Univ. of Technology University of Jyväskylä P.O.

More information

ACCORDING to Shannon s sampling theorem, an analog

ACCORDING to Shannon s sampling theorem, an analog 554 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL 59, NO 2, FEBRUARY 2011 Segmented Compressed Sampling for Analog-to-Information Conversion: Method and Performance Analysis Omid Taheri, Student Member,

More information

THE least mean-squares (LMS) algorithm is a popular algorithm

THE least mean-squares (LMS) algorithm is a popular algorithm 2382 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL 53, NO 7, JULY 2005 Partial Update LMS Algorithms Mahesh Godavarti, Member, IEEE, Alfred O Hero, III, Fellow, IEEE Absact Partial updating of LMS filter

More information

Statistics for Social and Behavioral Sciences

Statistics for Social and Behavioral Sciences Statistics for Social and Behavioral Sciences Advisors: S.E. Fienberg W.J. van der Linden For other titles published in this series, go to http://www.springer.com/series/3463 Haruo Yanai Kei Takeuchi

More information

A Modified Baum Welch Algorithm for Hidden Markov Models with Multiple Observation Spaces

A Modified Baum Welch Algorithm for Hidden Markov Models with Multiple Observation Spaces IEEE TRANSACTIONS ON SPEECH AND AUDIO PROCESSING, VOL. 9, NO. 4, MAY 2001 411 A Modified Baum Welch Algorithm for Hidden Markov Models with Multiple Observation Spaces Paul M. Baggenstoss, Member, IEEE

More information

Principal Components Analysis (PCA)

Principal Components Analysis (PCA) Principal Components Analysis (PCA) Principal Components Analysis (PCA) a technique for finding patterns in data of high dimension Outline:. Eigenvectors and eigenvalues. PCA: a) Getting the data b) Centering

More information

Independent Component Analysis and Its Application on Accelerator Physics

Independent Component Analysis and Its Application on Accelerator Physics Independent Component Analysis and Its Application on Accelerator Physics Xiaoying Pang LA-UR-12-20069 ICA and PCA Similarities: Blind source separation method (BSS) no model Observed signals are linear

More information

Efficient and Accurate Rectangular Window Subspace Tracking

Efficient and Accurate Rectangular Window Subspace Tracking Efficient and Accurate Rectangular Window Subspace Tracking Timothy M. Toolan and Donald W. Tufts Dept. of Electrical Engineering, University of Rhode Island, Kingston, RI 88 USA toolan@ele.uri.edu, tufts@ele.uri.edu

More information

Linear Systems. Carlo Tomasi. June 12, r = rank(a) b range(a) n r solutions

Linear Systems. Carlo Tomasi. June 12, r = rank(a) b range(a) n r solutions Linear Systems Carlo Tomasi June, 08 Section characterizes the existence and multiplicity of the solutions of a linear system in terms of the four fundamental spaces associated with the system s matrix

More information

Introduction to Machine Learning

Introduction to Machine Learning 10-701 Introduction to Machine Learning PCA Slides based on 18-661 Fall 2018 PCA Raw data can be Complex, High-dimensional To understand a phenomenon we measure various related quantities If we knew what

More information

Performance of Reduced-Rank Linear Interference Suppression

Performance of Reduced-Rank Linear Interference Suppression 1928 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 47, NO. 5, JULY 2001 Performance of Reduced-Rank Linear Interference Suppression Michael L. Honig, Fellow, IEEE, Weimin Xiao, Member, IEEE Abstract The

More information

GAUSSIAN PROCESS TRANSFORMS

GAUSSIAN PROCESS TRANSFORMS GAUSSIAN PROCESS TRANSFORMS Philip A. Chou Ricardo L. de Queiroz Microsoft Research, Redmond, WA, USA pachou@microsoft.com) Computer Science Department, Universidade de Brasilia, Brasilia, Brazil queiroz@ieee.org)

More information

Fixed-Order Robust H Filter Design for Markovian Jump Systems With Uncertain Switching Probabilities

Fixed-Order Robust H Filter Design for Markovian Jump Systems With Uncertain Switching Probabilities IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 54, NO. 4, APRIL 2006 1421 Fixed-Order Robust H Filter Design for Markovian Jump Systems With Uncertain Switching Probabilities Junlin Xiong and James Lam,

More information

MATH 5720: Unconstrained Optimization Hung Phan, UMass Lowell September 13, 2018

MATH 5720: Unconstrained Optimization Hung Phan, UMass Lowell September 13, 2018 MATH 57: Unconstrained Optimization Hung Phan, UMass Lowell September 13, 18 1 Global and Local Optima Let a function f : S R be defined on a set S R n Definition 1 (minimizers and maximizers) (i) x S

More information

On the simplest expression of the perturbed Moore Penrose metric generalized inverse

On the simplest expression of the perturbed Moore Penrose metric generalized inverse Annals of the University of Bucharest (mathematical series) 4 (LXII) (2013), 433 446 On the simplest expression of the perturbed Moore Penrose metric generalized inverse Jianbing Cao and Yifeng Xue Communicated

More information

THE representation of a signal as a discrete set of

THE representation of a signal as a discrete set of IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 46, NO. 3, MARCH 1998 587 Signal Representation for Compression and Noise Reduction Through Frame-Based Wavelets Laura Rebollo-Neira, Anthony G. Constantinides,

More information

Title algorithm for active control of mul IEEE TRANSACTIONS ON AUDIO SPEECH A LANGUAGE PROCESSING (2006), 14(1):

Title algorithm for active control of mul IEEE TRANSACTIONS ON AUDIO SPEECH A LANGUAGE PROCESSING (2006), 14(1): Title Analysis of the filtered-x LMS algo algorithm for active control of mul Author(s) Hinamoto, Y; Sakai, H Citation IEEE TRANSACTIONS ON AUDIO SPEECH A LANGUAGE PROCESSING (2006), 14(1): Issue Date

More information

Notes on singular value decomposition for Math 54. Recall that if A is a symmetric n n matrix, then A has real eigenvalues A = P DP 1 A = P DP T.

Notes on singular value decomposition for Math 54. Recall that if A is a symmetric n n matrix, then A has real eigenvalues A = P DP 1 A = P DP T. Notes on singular value decomposition for Math 54 Recall that if A is a symmetric n n matrix, then A has real eigenvalues λ 1,, λ n (possibly repeated), and R n has an orthonormal basis v 1,, v n, where

More information

A Tutorial on Data Reduction. Principal Component Analysis Theoretical Discussion. By Shireen Elhabian and Aly Farag

A Tutorial on Data Reduction. Principal Component Analysis Theoretical Discussion. By Shireen Elhabian and Aly Farag A Tutorial on Data Reduction Principal Component Analysis Theoretical Discussion By Shireen Elhabian and Aly Farag University of Louisville, CVIP Lab November 2008 PCA PCA is A backbone of modern data

More information

Impulsive Stabilization for Control and Synchronization of Chaotic Systems: Theory and Application to Secure Communication

Impulsive Stabilization for Control and Synchronization of Chaotic Systems: Theory and Application to Secure Communication 976 IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS I: FUNDAMENTAL THEORY AND APPLICATIONS, VOL. 44, NO. 10, OCTOBER 1997 Impulsive Stabilization for Control and Synchronization of Chaotic Systems: Theory and

More information

Estimation Error Bounds for Frame Denoising

Estimation Error Bounds for Frame Denoising Estimation Error Bounds for Frame Denoising Alyson K. Fletcher and Kannan Ramchandran {alyson,kannanr}@eecs.berkeley.edu Berkeley Audio-Visual Signal Processing and Communication Systems group Department

More information

Applied Mathematics Letters. Comparison theorems for a subclass of proper splittings of matrices

Applied Mathematics Letters. Comparison theorems for a subclass of proper splittings of matrices Applied Mathematics Letters 25 (202) 2339 2343 Contents lists available at SciVerse ScienceDirect Applied Mathematics Letters journal homepage: www.elsevier.com/locate/aml Comparison theorems for a subclass

More information

Riccati difference equations to non linear extended Kalman filter constraints

Riccati difference equations to non linear extended Kalman filter constraints International Journal of Scientific & Engineering Research Volume 3, Issue 12, December-2012 1 Riccati difference equations to non linear extended Kalman filter constraints Abstract Elizabeth.S 1 & Jothilakshmi.R

More information

On the Use of A Priori Knowledge in Adaptive Inverse Control

On the Use of A Priori Knowledge in Adaptive Inverse Control 54 IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS PART I: FUNDAMENTAL THEORY AND APPLICATIONS, VOL 47, NO 1, JANUARY 2000 On the Use of A Priori Knowledge in Adaptive Inverse Control August Kaelin, Member,

More information

A Coupled Helmholtz Machine for PCA

A Coupled Helmholtz Machine for PCA A Coupled Helmholtz Machine for PCA Seungjin Choi Department of Computer Science Pohang University of Science and Technology San 3 Hyoja-dong, Nam-gu Pohang 79-784, Korea seungjin@postech.ac.kr August

More information

Asymptotic Achievability of the Cramér Rao Bound For Noisy Compressive Sampling

Asymptotic Achievability of the Cramér Rao Bound For Noisy Compressive Sampling Asymptotic Achievability of the Cramér Rao Bound For Noisy Compressive Sampling The Harvard community h made this article openly available. Plee share how this access benefits you. Your story matters Citation

More information

Multiplicative Perturbation Bounds of the Group Inverse and Oblique Projection

Multiplicative Perturbation Bounds of the Group Inverse and Oblique Projection Filomat 30: 06, 37 375 DOI 0.98/FIL67M Published by Faculty of Sciences Mathematics, University of Niš, Serbia Available at: http://www.pmf.ni.ac.rs/filomat Multiplicative Perturbation Bounds of the Group

More information

Mathematical foundations - linear algebra

Mathematical foundations - linear algebra Mathematical foundations - linear algebra Andrea Passerini passerini@disi.unitn.it Machine Learning Vector space Definition (over reals) A set X is called a vector space over IR if addition and scalar

More information