Structured Matrices and Solving Multivariate Polynomial Equations


Philippe Dreesen, Kim Batselier, Bart De Moor
KU Leuven, ESAT/SCD, Kasteelpark Arenberg 10, B-3001 Leuven, Belgium

Structured Matrix Days, XLIM, Université de Limoges. May 10-11, 2012, Limoges, France.

Outline

1 Introduction & Motivation
2 Solving Polynomial Systems
3 Numerical Example
4 Conclusions & Future Work


Univariate Root-finding: Sylvester

Consider two univariate polynomials

  f_1(x) = a_r x^r + a_{r-1} x^{r-1} + ... + a_0
  f_2(x) = b_s x^s + b_{s-1} x^{s-1} + ... + b_0

Build the square (r+s) x (r+s) Sylvester matrix: s shifted rows carrying the coefficients a_0 a_1 ... a_r of f_1, followed by r shifted rows carrying the coefficients b_0 b_1 ... b_s of f_2. At a common root,

  S k = 0,  with the monomial vector  k = ( x^0  x^1  x^2  ...  x^{r+s-2}  x^{r+s-1} )^T

Outline of the univariate root-finding algorithm:

1) Build the Sylvester matrix: S k = 0
2) Shift structure in k: ( 1  x  x^2  ...  x^{r+s-2} )^T x = ( x  x^2  x^3  ...  x^{r+s-1} )^T, written as \underline{k} x = \overline{k}, where the underline and the overline denote omitting the last and the first row
3) Shift structure for all roots at once: \underline{K} D_x = \overline{K}
4) K cannot be computed directly; instead a nullspace basis Z is computed, with K = ZV, such that V D_x V^{-1} = \underline{Z}^† \overline{Z}, where X^† denotes the Moore-Penrose pseudoinverse of X

Example: consider the system

  f_1(x) = (x-1)(x-2) = x^2 - 3x + 2
  f_2(x) = (x-1)(x-2)(x+1) = x^3 - 2x^2 - x + 2

having two common roots, x = 1 and x = 2. The Sylvester matrix S, with columns indexed by 1, x, x^2, x^3, x^4, is

               1    x  x^2  x^3  x^4
  f_1      [   2   -3    1    0    0 ]
  x f_1    [   0    2   -3    1    0 ]
  x^2 f_1  [   0    0    2   -3    1 ]
  f_2      [   2   -1   -2    1    0 ]
  x f_2    [   0    2   -1   -2    1 ]

The nullspace of S has dimension two, and a numerical basis Z is computed using the SVD. The eigenvalues of \underline{Z}^† \overline{Z} recover the common roots of f_1(x) and f_2(x): x = 1 and x = 2.
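The procedure fits in a few lines of NumPy. The following is a minimal sketch (not the authors' implementation) that reproduces the example; the tolerance used to detect the near-zero singular values is an assumption:

```python
import numpy as np

# f_1 = x^2 - 3x + 2 and f_2 = x^3 - 2x^2 - x + 2, ascending coefficients.
f1 = np.array([2.0, -3.0, 1.0])
f2 = np.array([2.0, -1.0, -2.0, 1.0])
r, s = len(f1) - 1, len(f2) - 1        # degrees r = 2, s = 3

# Sylvester matrix: s shifted copies of f_1, then r shifted copies of f_2.
S = np.zeros((r + s, r + s))
for i in range(s):
    S[i, i:i + r + 1] = f1
for i in range(r):
    S[s + i, i:i + s + 1] = f2

# Numerical nullspace basis Z from the SVD (right singular vectors
# belonging to the near-zero singular values).
_, sv, Vt = np.linalg.svd(S)
nullity = int(np.sum(sv < 1e-10 * sv[0]))
Z = Vt[-nullity:, :].T                 # 5 x 2 here: two common roots

# Shift relation: the eigenvalues of pinv(Z with last row omitted) times
# (Z with first row omitted) are the common roots.
roots = np.linalg.eigvals(np.linalg.pinv(Z[:-1]) @ Z[1:])
print(np.sort(roots.real))             # approximately [1. 2.]
```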

There is a huge gap between the algebraic geometry literature and (numerical) linear algebra, yet linear algebra provides a suitable framework: polynomial structure induces matrix structure. Main ingredients:

1) Linearize the problem by separating coefficients and monomials
2) Solutions live in the nullspace of the coefficient matrix
3) Exploit structure in the monomial basis
4) Eigenvalue problems

Generalize to the multivariate case?


Algorithm and Building the Macaulay Matrix M

Algorithm ingredients:

1) Build a structured matrix M containing the coefficients
2) Solutions compose vectors in the nullspace of M
3) Compute the solutions from eigenvalue problems

Example: a simple polynomial system

  p(x,y) = x^2 + 3y^2 - 15 = 0               (d_1 = 2)
  q(x,y) = y - 3x^3 - 2x^2 + 13x - 2 = 0     (d_2 = 3)

Construct the Macaulay matrix, with columns indexed by the monomials 1, x, y, x^2, xy, y^2, x^3, x^2 y, xy^2, y^3:

           1    x    y  x^2   xy  y^2  x^3 x^2y xy^2  y^3
  p    [ -15    0    0    1    0    3    0    0    0    0 ]
  q    [  -2   13    1   -2    0    0   -3    0    0    0 ]
  x p  [   0  -15    0    0    0    0    1    0    3    0 ]
  y p  [   0    0  -15    0    0    0    0    1    0    3 ]

Extend the Macaulay matrix M to degree d = sum_i d_i - n + 1 = 2 + 3 - 2 + 1 = 4: the rows are now p, xp, yp, x^2 p, xy p, y^2 p, q, xq, yq, and the columns are indexed by all monomials of degree at most 4, from 1 up to x^4, x^3 y, x^2 y^2, x y^3, y^4. The dimensions of M are determined by the number of monomials

  N(n, d) = (n + d - 1)! / ( (n - 1)! d! )
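As an illustration, here is a small Python sketch (the helper names are ours, not the authors') that builds this Macaulay matrix; polynomials are stored as dictionaries mapping exponent tuples (i, j) of x^i y^j to coefficients. The later sketches below reuse M, col, and monomials from this one:

```python
import numpy as np

# p = x^2 + 3y^2 - 15 and q = y - 3x^3 - 2x^2 + 13x - 2.
p = {(2, 0): 1.0, (0, 2): 3.0, (0, 0): -15.0}
q = {(0, 1): 1.0, (3, 0): -3.0, (2, 0): -2.0, (1, 0): 13.0, (0, 0): -2.0}

def monomials(d):
    """Exponents (i, j) of all x^i y^j with i + j <= d, graded as on the slides."""
    return [(i, k - i) for k in range(d + 1) for i in range(k, -1, -1)]

def macaulay(polys, d):
    """Macaulay matrix: one row per shift x^a y^b * f keeping the degree <= d."""
    col = {m: c for c, m in enumerate(monomials(d))}
    rows = []
    for f in polys:
        deg = max(i + j for (i, j) in f)
        for (a, b) in monomials(d - deg):        # admissible shifts of f
            row = np.zeros(len(col))
            for (i, j), coef in f.items():
                row[col[(i + a, j + b)]] = coef
            rows.append(row)
    return np.array(rows), col

M, col = macaulay([p, q], d=4)
print(M.shape)    # (9, 15): rows p, xp, yp, x^2p, xyp, y^2p, q, xq, yq
```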

Solutions Live in the Nullspace of M

Solutions compose vectors in the nullspace of the Macaulay matrix M: M k = 0, where k = ( 1  x  y  x^2  xy  y^2  ... )^T is evaluated at the solutions. The number of solutions follows from the nullity of M. Stacking the vectors k forms the canonical nullspace K, which has a generalized Vandermonde structure: K = ( k^(1)  k^(2)  ...  k^(m_B) ). For 6 solutions (x_i, y_i), column i of the canonical nullspace reads

  k^(i) = ( 1  x_i  y_i  x_i^2  x_i y_i  y_i^2  x_i^3  x_i^2 y_i  x_i y_i^2  y_i^3  x_i^4  x_i^3 y_i  x_i^2 y_i^2  x_i y_i^3  y_i^4 )^T

Shift Structure

Shift structure in the monomial basis k: multiplying by x or y maps monomials to other monomials in the basis,

  ( 1  x  y  x^2  xy  y^2 ) * x = ( x  x^2  xy  x^3  x^2 y  x y^2 )
  ( 1  x  y  x^2  xy  y^2 ) * y = ( y  xy  y^2  x^2 y  x y^2  y^3 )

Let D = diag(x_1, x_2, ..., x_{m_B}) (for finding the x-roots); then S_1 K D = S_2 K, where S_1 and S_2 select rows from K according to the shift structure.

Two Root-finding Algorithms

Two flavours of root-finding algorithms:

1) Nullspace-based root-finding: compute a numerical basis of the nullspace and exploit its shift structure
2) Data-driven root-finding: perform a (Q)R decomposition of certain columns of M

Both algorithms lead to (generalized) eigenvalue problems!

Nullspace-based Root-finding

We have S_1 K D = S_2 K. However, the canonical nullspace K and the solutions D are not known. A basis Z (with MZ = 0) is computed, with K = ZV. This leads to the generalized eigenvalue problem

  S_1 Z V D = S_2 Z V,   or   S_1 Z ( V D V^{-1} ) = S_2 Z

Algorithm summary: nullspace-based root-finding

1) Construct the Macaulay matrix M for degree d
2) Compute a basis Z for the nullspace of M
3) Choose a shift function, e.g., x
4) Write down the shift relation using the row selection matrices S_1 and S_2
5) Solve the generalized eigenvalue problem S_1 Z ( V D V^{-1} ) = S_2 Z; the eigenvalues correspond to the, e.g., x-solutions
6) Reconstruct the canonical kernel K = ZV
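Continuing the Macaulay sketch above, here is a hedged NumPy/SciPy version of these steps for the example system. One subtlety: the selected rows must be linearly independent in the nullspace; the naive "first six monomials" choice 1, x, y, x^2, xy, y^2 fails here, because p itself relates the rows 1, x^2, y^2 at every root. The set B below, which swaps y^2 for x^2 y, is our own choice for this example, found by inspection:

```python
import numpy as np
from scipy.linalg import eig, null_space

Z = null_space(M)                          # 15 x 6 basis, so m_B = 6 solutions

# Selected monomials B; their x-shifts also stay inside the degree-4 basis.
B = [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (2, 1)]
S1 = [col[m] for m in B]                   # row indices selected by S_1
S2 = [col[(i + 1, j)] for (i, j) in B]     # row indices of the x-shifts (S_2)

# Generalized eigenvalue problem (S_2 Z) v = x (S_1 Z) v.
xvals, V = eig(Z[S2], Z[S1])
K = Z @ V                                  # reconstructed canonical kernel
K = K / K[0]                               # scale each column so its 1-entry is 1
print(xvals)                               # x-coordinates (complex pairs possible)
print(K[2])                                # y-coordinates: the row of monomial y
```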

Data-driven Root-finding

Partition M, and correspondingly K, such that

  ( M_1  M_2 ) ( K_1 )
               ( K_2 ) = 0,    or    M_1 K_1 + M_2 K_2 = 0,

where M_2 has full column rank and K_1 is square and of full rank. This leads to

  K_2 = -M_2^† M_1 K_1,

where X^† denotes the Moore-Penrose pseudoinverse of X.

The shift relation S_1 K D_σ = S_2 K is written as

  K_1 D_σ = Σ_1 K_1 + Σ_2 K_2 = ( Σ_1 - Σ_2 M_2^† M_1 ) K_1

... is it really necessary to compute the entire M_2^† M_1?!

We have K_1 D_σ = ( Σ_1 - Σ_2 M_2^† M_1 ) K_1. Identify two kinds of shifts:

1) Trivial shifts map monomials from K_1 to other monomials in K_1
2) Nontrivial shifts map monomials from K_1 to K_2 = -M_2^† M_1 K_1

We are interested in only a certain part of M_2^† M_1!

Reorder the columns of M once more in the expression M K = 0, splitting K_2 into the rows of interest Σ_2 K_2 (those hit by the shift) and the remaining rows Σ̄_2 K_2:

  ( M_22  M_21  M_1 ) ( Σ̄_2 K_2 )
                      ( Σ_2 K_2 ) = 0
                      (   K_1   )

A (Q)R decomposition helps us solve for the rows of interest Σ_2 K_2:

  [ R_11  R_12  R_13 ] ( Σ̄_2 K_2 )
  [   0   R_22  R_23 ] ( Σ_2 K_2 ) = 0
  [   0     0   R_33 ] (   K_1   )

so that Σ_2 K_2 = -R_22^{-1} R_23 K_1. The root-finding problem is then phrased as the eigenvalue problem

  K_1 D_σ = ( Σ_1 - R_22^{-1} R_23 ) K_1

Algorithm summary: data-driven root-finding

1) Construct the Macaulay matrix M for degree d
2) Partition M = ( M_1  M_2 ) and K = ( K_1^T  K_2^T )^T
3) Choose a shift function σ, e.g., σ = x
4) Identify the trivial and nontrivial shifts
5) Perform a (Q)R decomposition on the reordered M to obtain the rows of interest:

     [ R_11  R_12  R_13 ] ( Σ̄_2 K_2 )
     [   0   R_22  R_23 ] ( Σ_2 K_2 ) = 0
     [   0     0   R_33 ] (   K_1   )

6) Solve the eigenvalue problem K_1 D_σ = ( Σ_1 - R_22^{-1} R_23 ) K_1
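A hedged sketch of the data-driven variant for the same example, reusing M, col, and B from the sketches above. For brevity it obtains the rows of interest from a pseudoinverse solve with M_2 rather than the (Q)R factorization of the slides, which is an implementation shortcut, not the authors' algorithm; the partition relies on M_2 having full column rank for this choice of B:

```python
import numpy as np

# Partition: K_1 holds the rows indexed by B, K_2 the remaining rows.
idx1 = [col[m] for m in B]
idx2 = [c for c in range(M.shape[1]) if c not in idx1]
M1, M2 = M[:, idx1], M[:, idx2]

# K_2 = -pinv(M_2) M_1 K_1 =: G K_1.
G = -np.linalg.pinv(M2) @ M1

# Assemble K_1 D_x = A K_1 row by row: a trivial shift lands in K_1,
# a nontrivial shift lands in K_2 and pulls in the matching row of G.
pos1 = {c: r for r, c in enumerate(idx1)}
pos2 = {c: r for r, c in enumerate(idx2)}
A = np.zeros((len(B), len(B)))
for r, (i, j) in enumerate(B):
    c = col[(i + 1, j)]                # column index of the shifted monomial
    if c in pos1:                      # trivial shift
        A[r, pos1[c]] = 1.0
    else:                              # nontrivial shift
        A[r, :] = G[pos2[c], :]

print(np.linalg.eigvals(A))            # x-coordinates of the roots, as before
```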

Sparse Nullspace Computation: Motzkin Elimination

Exploit the sparsity of M using Motzkin elimination [7]. Main idea: pairwise eliminations with the nonzero elements, processing the matrix one row at a time. For the first row a_1 of a sparse matrix M, construct a matrix W_1 whose columns span the nullspace of a_1, i.e., a_1 W_1 = 0.

Process the second row against the current basis, b_2 = a_2 W_1, leading to a matrix W_2 with b_2 W_2 = 0. Continuing in this way, the full nullspace is found as the product Z = prod_{i=1}^{m} W_i.
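A dense toy version of this row-by-row scheme (illustration only: the point of Motzkin elimination is to keep the eliminations sparse and pairwise, which this sketch does not attempt):

```python
import numpy as np

def rowwise_nullspace(M, tol=1e-12):
    """Nullspace basis as the running product Z = W_1 W_2 ... W_m."""
    Z = np.eye(M.shape[1])
    for a in M:                        # process one row at a time
        b = a @ Z                      # current row expressed in the running basis
        nz = np.flatnonzero(np.abs(b) > tol)
        if nz.size == 0:               # row already eliminated by earlier W_i
            continue
        piv = nz[0]                    # a nonzero pivot element
        # W_i pairs the pivot with every other column so that b @ W_i = 0.
        W = np.delete(np.eye(Z.shape[1]), piv, axis=1)
        for k, c in enumerate(j for j in range(Z.shape[1]) if j != piv):
            W[piv, k] = -b[c] / b[piv]
        Z = Z @ W
    return Z

Z = rowwise_nullspace(M)               # same span as the SVD-based basis
print(Z.shape)                         # (15, 6) for the degree-4 example
```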


TLS vs W/STLS

Total Least Squares [10, 4]:

  minimize over B, v:  sum_{ij} (a_ij - b_ij)^2
  subject to:          B v = 0,  v^T v = 1

Solved by the singular value decomposition A = U Σ V^T:

  A v = u σ,  A^T u = v σ,  v^T v = 1,  u^T u = 1

an eigenvalue problem!

Weighted/Structured TLS [1, 2]:

  minimize over B, v:  sum_{ij} w_ij (a_ij - b_ij)^2
  subject to:          B v = 0 (B structured),  v^T v = 1

Solved by the Riemannian singular value decomposition:

  A v = T_v T_v^T l,  A^T l = T_l^T T_l v,  v^T v = 1

a system of polynomial equations!
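The left column is classical: the TLS solution drops out of a single SVD, as the short sketch below shows on random data (a toy example of ours, not from the slides). The right column has no such shortcut, which is what connects W/STLS to the polynomial system solving of the previous section.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((10, 4))

# TLS via the SVD: the smallest singular triplet gives both the closest
# rank-deficient approximation B and the vector v with B v = 0.
U, sv, Vt = np.linalg.svd(A)
v = Vt[-1]                                  # right singular vector, v'v = 1
B = A - sv[-1] * np.outer(U[:, -1], v)      # Eckart-Young correction
print(np.allclose(B @ v, 0))                # True: B v = 0
```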

Applications of W/STLS

Systems and control: [1], [2], [6]
Machine learning: [8]
Information retrieval: [5]
Statistics: [3]
...

(Block diagram on the slide: a system driven by the exact input u_0 produces the exact output y_0; the observed signals are w = u_0 + ũ and z = y_0 + ỹ, with ũ and ỹ the input and output noise.)

STLS Problem

3 x 3 Hankel STLS:

  minimize over v:  v^T A^T ( T_v T_v^T )^{-1} A v
  subject to:       v^T v = 1

(The slide shows the numerical 3 x 3 Hankel data matrix A and its structured low-rank approximation B.)


Many Applications

Polynomial root-finding arises in:

Polynomial optimization problems
Model order reduction
Identifiability analysis of nonlinear models
Signal processing
Kinematic problems in robotics
Computational biology: conformation of molecules
Algebraic statistics
...

Summary and Future Work

Translation of an algebraic geometry problem into (numerical) linear algebra
Two kinds of root-finding algorithms
Polynomial system solving leads to eigenvalue decompositions, as in [9]
Future work: exploiting sparsity and structure (e.g., FFT-like computations, recursive orthogonalization), numerical stability and conditioning

Thank You

Thank you for your attention! Questions?

References I

[1] B. De Moor. Structured Total Least Squares and L_2 approximation problems. Linear Algebra and its Applications, 188-189, 1993.
[2] B. De Moor. Total Least Squares for affinely structured matrices and the noisy realization problem. IEEE Transactions on Signal Processing, 42(11), 1994.
[3] K. R. Gabriel and S. Zamir. Lower rank approximation of matrices by Least Squares with any choice of weights. Technometrics, 21(4), 1979.
[4] G. H. Golub and C. F. Van Loan. Matrix Computations. Johns Hopkins University Press, Baltimore, MD, USA, third edition, 1996.
[5] E. P. Jiang and M. W. Berry. Information filtering using the Riemannian SVD (R-SVD). In Proc. 5th Int. Symp. on Solving Irregularly Structured Problems in Parallel, 1998.
[6] I. Markovsky. Structured low-rank approximation and its applications. Automatica, 44:891-909, 2008.

References II

[7] T. S. Motzkin, H. Raiffa, G. L. Thompson, and R. M. Thrall. The double description method. In Contributions to the Theory of Games, Vol. 2, Annals of Mathematics Studies, no. 28, pages 51-73. Princeton University Press, Princeton, NJ, 1953.
[8] N. Srebro and T. Jaakkola. Weighted low-rank approximations. In Proceedings of the Twentieth International Conference on Machine Learning (ICML 2003), Washington, DC, 2003.
[9] H. J. Stetter. Numerical Polynomial Algebra. SIAM, 2004.
[10] S. Van Huffel and J. Vandewalle. The Total Least Squares Problem: Computational Aspects and Analysis, volume 9 of Frontiers in Applied Mathematics. SIAM, Philadelphia, 1991.
