A Cross-Associative Neural Network for SVD of Nonsquared Data Matrix in Signal Processing

IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 12, NO. 5, SEPTEMBER 2001

Da-Zheng Feng, Zheng Bao, and Xian-Da Zhang

Abstract: This brief paper proposes a cross-associative neural network (CANN) for singular value decomposition (SVD) of a nonsquared data matrix in signal processing, in order to improve the convergence speed and avoid the potential instability of the deterministic networks associated with the cross-correlation neural-network models. We study the global asymptotic stability of the network for tracking all the singular components, and show that the selection of its learning rate in the iterative algorithm is independent of the singular value distribution of the nonsquared matrix. The performance of the CANN is shown via simulations.

Index Terms: Cross-associative neural network (CANN), global asymptotic stability, learning rate, signal processing, singular value decomposition (SVD).

I. INTRODUCTION

Signal processing approaches based on the singular value decomposition (SVD) of a data matrix or correlation matrix are usually robust [1], and many signal-processing tasks can be achieved efficiently by the SVD of a nonsquared matrix. Owing to the importance of the SVD in signal processing, a variety of iterative methods have been proposed by researchers who are experts in matrix algebra [2]-[5]. These algorithms for updating the SVD in subspace tracking obtain the exact or approximate SVD of a nonsquared data matrix at low complexity per update. On the other hand, neural networks have provided effective parallel processing methods for algebraic computations such as principal component analysis [6]-[9], [15], and they can also provide an alternative approach to the SVD of a nonsquared matrix [10]-[14]. By continuation of Oja's algorithm, Yuille et al. [10] and Samardzija et al. [11] developed several recurrent neural networks that extract the principal components of the autocorrelation matrix of random data streams. These neural networks can also obtain the SVD of a nonsquared matrix A, but only if their weight matrix is taken as AA^T or A^T A. However, if the data matrix A is ill conditioned, then forming AA^T or A^T A is usually numerically unstable and should be avoided [1]. The gradient flows based on the least squares measure of differential equations for SVD [13], [14], [16]-[19] are proved to be asymptotically convergent if all the singular values of A are distinct. It is worth mentioning that Diamantaras and Kung [20] proposed the cross-correlation neural-network models, which can be used directly for extracting the cross-correlation features between two high-dimensional data streams. These networks can efficiently extract the principal cross-correlation features between two multidimensional time series in real time, while their deterministic form can be used directly for performing the SVD of a nonsquared matrix. However, the cross-correlation neural-network models are sometimes divergent for some initial states [21].

Manuscript received February 8, 2000; revised September 27. This work was supported in part by the National Science Foundation of China. The authors are with the Key Laboratory of Radar Signal Processing, Xidian University, Xi'an, P.R. China (e-mail: dzfeng@rsp.xidian.edu.cn).
Moreover, both analytical and experimental studies show that the convergence of the above neural networks depends on an appropriate selection of the learning rate, which is difficult to determine in advance, since the learning rate is directly related to the underlying matrix. Hence, it is important to find a neural-network model whose fixed learning rate can be chosen in advance. In order to improve the convergence speed and eliminate the potential instability of the deterministic form of the cross-correlation neural-network models (DFCNN) [20], we propose a cross-associative neural network (CANN) in which the learning rate is independent of the singular value distribution of the nonsquared data or cross-correlation matrix. The performance of the CANN is evaluated via computer simulations. Compared with the DFCNN, the CANN has two remarkable advantages: 1) its learning rate can be a fixed constant independent of the singular value distribution of the underlying matrix, which evidently increases the convergence speed of the CANN; and 2) its state vectors have the unit-norm conservation property.

II. A NOVEL RECURRENT NEURAL NETWORK

Consider a multidimensional sequence with a sufficiently large sampling number. If the sequence is stationary, its subspace can be extracted from the sampling data, which form a nonsquared data matrix; when the sequence is nonstationary, a rank-2 modification of the data matrix is used. It is worth mentioning that when the sampling number is small, it is more suitable to extract the signal subspace directly from the data matrix, since the rank-2 update of the autocorrelation matrix often makes the smallest eigenvalue tend to zero or to a small negative value [22]. In this brief paper, we consider a neural network for performing the SVD of a data matrix. Given a nonsquared matrix A of size m x n, without loss of generality let m >= n. Let σ_i, u_i, and v_i be the i-th singular value, left singular vector, and right singular vector, respectively, for i = 1, 2, ..., n. The triplet (σ_i, u_i, v_i) is called the i-th singular component. The standard SVD of A is then A = U Σ V^T, with the equivalent component form A v_i = σ_i u_i and A^T u_i = σ_i v_i, in which all the singular values are positive, where both U and V are unitary and Σ denotes the diagonal matrix made up of all the singular values [1].
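As a concrete illustration of this factorization, the following NumPy sketch verifies both the standard form A = U Σ V^T and the component form A v_i = σ_i u_i, A^T u_i = σ_i v_i. The 11 x 9 shape echoes the simulation in Section IV, but the matrix itself is an arbitrary stand-in, not the paper's data.

    import numpy as np

    # Arbitrary nonsquared (tall) data matrix with m >= n, as assumed above.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((11, 9))

    # Thin SVD: U is 11 x 9 with orthonormal columns, s holds the nine
    # singular values in decreasing order, Vt is 9 x 9 orthogonal.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    assert np.allclose(A, U @ np.diag(s) @ Vt)

    # Component form: A v_i = sigma_i u_i and A^T u_i = sigma_i v_i.
    for i in range(len(s)):
        u_i, v_i = U[:, i], Vt[i, :]
        assert np.allclose(A @ v_i, s[i] * u_i)
        assert np.allclose(A.T @ u_i, s[i] * v_i)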

Noticeably, in algebra, the SVD of A also has nonstandard forms, described by the same factorization or its equivalent component form in which any nonzero singular value may be positive or negative. Our objective is to perform the (standard or nonstandard) SVD of A. The most direct methods are the matrix-algebra-based methods [2]-[5], while the SVD can also be obtained by a recurrent neural network such as the DFCNN [20]. We propose the recurrent network (1a) and (1b) for the SVD of A, for i = 1, 2, ..., n, where the superscript T denotes transposition and the deflated matrices are defined by (2a) and (2b). The design objective of (1) is to make the state vectors converge to the i-th singular component as t → ∞. The transformation in (2b) is called the deflation transformation [20]; the design objective of (2) is to remove the singular components already extracted, in order to let the i-th pair of state vectors converge to the i-th singular component as t → ∞. In fact, if the first i-1 pairs converge as t → ∞, we can directly verify that (2) achieves its design objective. Since the second terms on the right side of [20, eqs. (19) and (20)] are indefinite, the DFCNN has a potential instability. Fortunately, the second terms on the right side of (1) are higher-order decay terms compared with the first terms, and they govern the convergence of (1), so we expect the above neural network to be globally convergent. More importantly, the learning rate or time-step length of the iterative algorithm for solving (1) can be taken as a constant independent of the singular value distribution of A (see also Remark 1). That is to say, compared with the DFCNN in [20], the recurrent network speeds up convergence and avoids the potential instability of the DFCNN. The neural network for finding the first singular component is shown in Fig. 1; its adaptability is indicated by the change of the connection weights with the data matrix, so the entire neural network has a complex topology. The iterative algorithm corresponding to (1) is (3a) and (3b) for k = 0, 1, 2, ..., where η is called the time-step length or learning rate.

Fig. 1. Block diagram of the neural network for tracking the first singular component.

It is easily seen that the convergence of (3) relies greatly on the learning rate, and an important problem is how to choose a good one. It is worth noting that in the neural-network literature many algorithms can be extended to solve the SVD problem, but their learning rates are difficult to determine in advance. Hence, the main reason to adopt (1) is that the learning rate in (3) can be selected in advance. Moreover, in order to avoid dividing by zero, the initial state vectors should be selected appropriately so that the scalar denominator in (1) is nonzero. Once all the singular components associated with the nonzero singular values are obtained, those associated with the zero singular values can be obtained by Gram-Schmidt orthogonalization. At this point, we establish an essential result.

Lemma 1: Given (1) and arbitrary initial values, the squared norms of the state vectors converge exponentially to 1 as t → ∞, and their convergence is independent of the matrix A. The proof of Lemma 1 is given in Appendix A.

Remark 1: We refer to the property that a state vector starting with unit norm keeps unit norm as the unit-norm conservation. As seen from (A.2a) and (A.2b), if ||u(0)|| = 1, then ||u(t)|| = 1 for any finite positive t; similarly, if ||v(0)|| = 1, then ||v(t)|| = 1 for any finite positive t. Moreover, since the analytical solutions (A.2a) and (A.2b) are independent of the data matrix, the convergence of the norms is uniform with respect to the data matrix. Importantly, since the decay rate of the linear equations (A.1a) and (A.1b) is 2, a suitable learning rate in (3) can be taken as a fixed constant.
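The explicit right-hand sides of (1) and (3) did not survive transcription, but Lemma 1, Remark 1, and the divide-by-zero caveat above pin down their general shape: each update divides by a scalar such as u^T A v, and the squared norms of the state vectors then obey linear equations with decay rate 2 that do not involve A. The sketch below discretizes one dynamics of that shape, du/dt = A v / (u^T A v) - u and dv/dt = A^T u / (u^T A v) - v, with a fixed learning rate as in (3); it is a reconstruction under these assumptions (including the step length 0.3), not the paper's verbatim algorithm.

    import numpy as np

    def first_singular_component(A, eta=0.3, iters=2000, rng=None):
        """Track the first singular component of A by a cross-coupled
        iteration of the CANN type. Assumed dynamics (a reconstruction):
        du/dt = A v / (u'A v) - u,  dv/dt = A'u / (u'A v) - v,
        discretized by an Euler step with fixed learning rate eta."""
        rng = np.random.default_rng() if rng is None else rng
        m, n = A.shape
        # Unit-norm initial states, chosen so that u'A v != 0 as required.
        u = rng.standard_normal(m); u /= np.linalg.norm(u)
        v = rng.standard_normal(n); v /= np.linalg.norm(v)
        for _ in range(iters):
            c = u @ A @ v                        # running estimate of +/- sigma_1
            u, v = (u + eta * (A @ v / c - u),   # simultaneous Euler update
                    v + eta * (A.T @ u / c - v))
        return u @ A @ v, u, v

    A = np.random.default_rng(1).standard_normal((8, 5))
    sigma, u, v = first_singular_component(A)
    print(sigma, np.linalg.svd(A, compute_uv=False)[0])  # agree up to sign

Under these assumed dynamics, u^T (du/dt) = 1 - ||u||^2, so d||u||^2/dt = 2(1 - ||u||^2): the norm dynamics are linear with decay rate 2 and independent of A, exactly the behavior Lemma 1 and Remark 1 describe.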
In order to guarantee that the iterative algorithm (3) has reliable and fast global convergence, we take a fixed learning rate that has been confirmed to be suitable through many simulation tests. From Lemma 1, we directly deduce the following corollary.

Corollary 1 (Boundedness): For any bounded initial values, the state vectors of the nonlinear system (1) are bounded.
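A sketch of sequential extraction with deflation, reusing first_singular_component from the sketch above. The rank-one subtraction A_{i+1} = A_i - sigma_i u_i v_i^T is the usual deflation of [20] and is assumed here to be the effect of (2b).

    import numpy as np

    def cann_svd(A, k, eta=0.3, iters=3000, seed=0):
        """Extract the k leading singular components one by one,
        deflating each converged rank-one component before the next."""
        rng = np.random.default_rng(seed)
        A_i = np.array(A, dtype=float)
        components = []
        for _ in range(k):
            sigma, u, v = first_singular_component(A_i, eta, iters, rng)
            components.append((sigma, u, v))
            A_i = A_i - sigma * np.outer(u, v)   # deflation step
        return components

A component that converges with a negative value of u^T A v corresponds to a nonstandard SVD form and can be flipped in sign afterward.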

III. STABILITY THEORY

First, we consider the stability of the first component in (1) and assume that the data are stationary. We will prove that the stable equilibrium point set of the first-component case in (1) is given by (4a) or, equivalently, (4b), where A has distinct nonzero singular values with given multiplicities. It is easily shown that (4b) is equivalent to (4a). It is also easily shown that when the largest singular value is simple, this set consists of finitely many points and the direction of the first singular vectors can be determined uniquely, while when the largest singular value is multiple, the set is continuous and the choice of the first singular vectors is not unique. Any point in the set can be regarded as the first singular component; hence, once such a point is obtained, we get the first singular component. Two distinct subsets of the equilibrium set are defined by (5a) and (5b). Obviously, the first subset corresponds to the standard SVD of A.

Lemma 2 (Globally Asymptotic Stability of the First Component): Let the nonzero singular values of A be σ_1 ≥ σ_2 ≥ ... ≥ σ_n > 0, with corresponding normalized left singular vectors and right singular vectors, respectively. Furthermore, let A have distinct singular values with given multiplicities, and assume that the initial state vectors are not orthogonal to the first singular subspace. Then the first pair of state vectors in (1) globally asymptotically converges to a point in the equilibrium set as t → ∞.

Proof: See Appendix B.

Remark 2: From Lemma 2 and Result 3 in Appendix B, we conclude that when the initial projection is positive, the first pair of state vectors in (1) globally asymptotically converges to a point in the first subset, while when it is negative, the pair converges to a point in the second subset, as t → ∞.

Theorem 1 (Globally Asymptotic Stability): Let the nonzero singular values of A be σ_1, ..., σ_n, with corresponding normalized left singular vectors and right singular vectors, respectively, and let the assumption of Lemma 2 hold for each i. Then the state vectors in (1) globally asymptotically converge to the singular components as t → ∞.

Proof: From (1) and (2) it is known that the i-th pair of state vectors is governed by the preceding pairs through the deflation transformation, while the preceding pairs are not affected by the later ones. This feature provides convenience for the analysis. By Lemma 2, the first pair converges as t → ∞. Once the first pair approaches its limit, repeated use of Lemma 2 gives the convergence of the second pair as t → ∞, which establishes the stability of (1) for i = 2. Mimicking this process, we can prove the stability of (1) for every i. This completes the proof of Theorem 1.

Remark 3: We may expect (1) to perform the standard SVD, which is achieved by the following method: if a stable pair of state vectors of (1) yields a negative value of u^T A v, then u is replaced by -u.

IV. SIMULATIONS

Two simulations are presented. In Simulation 1, the SVD of an ill-conditioned matrix is used to evaluate the efficiency of this parallel neural network. In Simulation 2, the solution of a total least squares (TLS) problem is obtained by neural-network-based SVD of a nonsquared data matrix.

Simulation 1: Consider the matrix defined by (6), whose factors in (7a) and (7b) are built from the components of 11-dimensional and nine-dimensional orthogonal discrete cosine basis functions, respectively. The matrix given by (6) has nine nonzero singular values, among which the three distinct values (the largest being 10) each have multiplicity 3. The data matrix is ill conditioned, since its condition number is large [1].
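The entries of (6) and two of its three distinct singular values were lost in transcription. The following sketch builds a matrix of the kind the text describes, from 11-dimensional and nine-dimensional orthonormal discrete cosine basis vectors with three distinct singular values of multiplicity 3 each; the largest value 10 is from the text, while the values 1 and 0.1 are illustrative assumptions chosen to make the matrix ill conditioned.

    import numpy as np

    def dct_basis(n):
        """Columns form an orthonormal DCT-II basis of R^n."""
        j, k = np.arange(n)[:, None], np.arange(n)[None, :]
        B = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * j + 1) * k / (2 * n))
        B[:, 0] /= np.sqrt(2.0)
        return B                       # satisfies B.T @ B = I

    U = dct_basis(11)[:, :9]           # nine 11-dimensional basis vectors
    V = dct_basis(9)                   # nine 9-dimensional basis vectors
    sigmas = np.repeat([10.0, 1.0, 0.1], 3)   # 1 and 0.1 are assumed values
    A = U @ np.diag(sigmas) @ V.T      # 11 x 9, nine nonzero singular values
    print(np.linalg.cond(A))           # 100 under the assumed values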

We used three methods to compute the SVD of the matrix in (6). The estimation errors are defined as follows:
1) the CANN, with estimation error (8);
2) the DFCNN [20], with estimation error (9);
3) Moore's neural network [17], with estimation error (10).
In the DFCNN and Moore's neural networks a suitable fixed time-step length is taken, and the variables and parameters in (9) and (10) can be found in [20] and [17], respectively. Randomly generating the initial state vectors with unit norm, the simulation results were obtained with MATLAB. The evolution curves of all the errors against the iteration number are shown in Fig. 2. The CANN emerges as a useful neural-network-based technique when the underlying matrix is close to a singular matrix.

Fig. 2. The convergence curves of all the estimation errors against the iteration number, where the integers 1-9 in panels (a) and (b) indicate the estimation errors associated with singular components 1-9. (a) CANN. (b) DFCNN. (c) Moore's approach.

Simulation 2: Consider the following problem of identification of a multiple-input single-output static system, (11), where the unknown is the system parameter vector, the input is a five-dimensional white vector, and the output is a scalar. The input and output samples are corrupted by additive white noise. An unbiased estimate of the parameter vector can be obtained by the total least squares approach [1]. The estimation error is defined as the normalized distance between the parameter vector and its estimate. Define the augmented output vector; the data matrix consists of 30 samples of the augmented output vector, each row of the data matrix representing one sample. The evolution curves of the estimation error against the iteration number under the cases SNR = 0.1, 0.25, and 0.5 are shown in Fig. 3, from which we see that the proposed neural network can find the total least squares solution, while the DFCNN cannot.
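For reference, the total least squares solution that Simulation 2 targets is obtained from the right singular vector of the smallest singular value of the augmented data matrix [1]. A minimal sketch follows; the dimensions match the text (a five-dimensional white input, 30 samples), while the true parameter vector and the noise level are arbitrary assumptions.

    import numpy as np

    rng = np.random.default_rng(2)
    w = rng.standard_normal(5)                    # unknown system parameters

    X = rng.standard_normal((30, 5))              # 30 samples of white input
    y = X @ w                                     # noiseless scalar output
    Xn = X + 0.1 * rng.standard_normal(X.shape)   # additive white noise
    yn = y + 0.1 * rng.standard_normal(y.shape)   # on inputs and output

    # Each row of the augmented data matrix is one sample [x', y].
    B = np.column_stack([Xn, yn])
    v = np.linalg.svd(B)[2][-1]                   # smallest singular component
    w_tls = -v[:5] / v[5]                         # TLS estimate of w
    print(np.linalg.norm(w_tls - w) / np.linalg.norm(w))  # estimation error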
V. CONCLUSION

A new recurrent neural network for the SVD of a nonsquared data matrix or cross-correlation matrix has been proposed, and the global convergence of this nonlinear system has been studied. The norm of its state vectors is governed by stable ordinary differential equations that are independent of the data matrix, and it globally exponentially converges to 1 with the fixed decay rate 2. Both theoretical analysis and simulation results show that the time-step length, or learning rate, in the iterative algorithm associated with this neural network is independent of the data matrix. The CANN has also emerged as a useful neural-network-based technique when the data matrix is close to a singular matrix. Simulation results show that there is no significant difference in estimation error, computational complexity, or convergence speed between the neural-network-based method and the matrix-algebra-based method.
Fig. 3. The estimation errors against the iteration number.

APPENDIX A
THE PROOF OF LEMMA 1

Differentiating the squared norms of the state vectors with respect to time and considering (1), we obtain (A.1a) and (A.1b). The analytical solutions of (A.1a) and (A.1b), respectively, are given by (A.2a) and (A.2b). Clearly, both solutions converge to 1 as t → ∞. This completes the proof of Lemma 1.

APPENDIX B
THE PROOF OF LEMMA 2

For convenience of analysis, let the state vectors be expanded as in (B.1). Substituting (B.1) into (1), we get (B.2a) and (B.2b), shown at the bottom of the next page. In theory, (B.2a) and (B.2b) are equivalent to (1). For the sake of notational simplicity, a single index is used in all the equations below. For the first component, (B.2a) and (B.2b) can be rewritten as (B.3a) and (B.3b).

Result 1: The quantity in (B.3b) exponentially converges to zero at decay rate 1 as t → ∞.

The addition and the subtraction of the two equations in (B.3a) yield (B.4). From the above equation, we directly deduce the following equations:

(B.5a) and (B.5b). Without loss of generality, suitable conditions on the initial coefficients are assumed. From (B.5a) and (B.5b), we can obtain (B.6a) and (B.6b), whose analytical solutions are described by (B.7a) and (B.7b). From (B.7a) and (B.7b), we can directly deduce the two inequalities (B.8a) and (B.8b), namely (B.9a) and (B.9b), where the quantities on their right-hand sides are bounded by Corollary 1.

Result 2: All the terms in (B.9a) and (B.9b) exponentially converge to zero.

With Result 1 and Result 2, and without loss of generality, we assume that the decaying terms have vanished. Under this assumption, (B.3a) and (B.3b) can be rewritten as (B.10) and (B.11). Now, we only need to study further the globally asymptotic convergence of (B.11). Applying Lemma 1, or (A.2a) and (A.2b), to (B.11) immediately yields (B.12a).
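Though the explicit expressions of (A.1) and (A.2) did not survive, Lemma 1 and Remark 1 specify their defining properties: linear equations in the squared norms, decay rate 2, and no dependence on A. A short derivation under exactly those assumed properties:

    \frac{d}{dt}\,\|u(t)\|^2 = 2\left(1 - \|u(t)\|^2\right)
    \quad\Longrightarrow\quad
    \|u(t)\|^2 = 1 + \left(\|u(0)\|^2 - 1\right)e^{-2t}

and identically for v(t). Both squared norms therefore converge exponentially to 1 at decay rate 2 for any initial values and for any data matrix, and they remain exactly 1 whenever they start at 1, which is the unit-norm conservation of Remark 1.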
