Least Squares Solution of Homogeneous Equations


Supportive text for teaching purposes. Revision: 1.2, dated December 15, 2005.
Tomáš Svoboda, Czech Technical University, Faculty of Electrical Engineering, Center for Machine Perception, Prague, Czech Republic.
svoboda@cmp.felk.cvut.cz, http://cmp.felk.cvut.cz/~svoboda

Introduction

We want to find an n × 1 vector h satisfying

    A h = 0,

where A is an m × n matrix and 0 is the m × 1 zero vector. Assume m ≥ n and rank(A) = n. We are obviously not interested in the trivial solution h = 0; hence, we add the constraint ||h|| = 1. This gives a constrained least-squares minimization: find h that minimizes ||A h|| subject to ||h|| = 1.
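As a concrete illustration (not part of the original text), the short NumPy sketch below builds a hypothetical overdetermined system: a matrix A whose rows are nearly, but not exactly, orthogonal to a known unit vector h_true. It shows why the constraint ||h|| = 1 is needed: without it, the zero vector trivially minimizes ||A h||. The names (h_true, the matrix sizes, the noise level) are illustrative choices, not from the slides.

```python
import numpy as np

# Hypothetical setup (not from the slides): an m x n matrix A whose rows are
# nearly orthogonal to a known unit vector h_true, so A @ h_true is small but
# not exactly zero -- the typical least-squares situation described above.
rng = np.random.default_rng(0)
m, n = 20, 4
h_true = rng.standard_normal(n)
h_true /= np.linalg.norm(h_true)

A = rng.standard_normal((m, n))
A -= np.outer(A @ h_true, h_true)        # remove the component along h_true from each row
A += 1e-3 * rng.standard_normal((m, n))  # perturb so that rank(A) = n

print(np.linalg.norm(A @ np.zeros(n)))   # 0.0 -- the useless trivial minimizer
print(np.linalg.norm(A @ h_true))        # small but nonzero: what we minimize over ||h|| = 1
```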

Derivation I: Lagrange multipliers

h = argmin_h ||A h|| subject to ||h|| = 1. We rewrite the constraint as 1 − hᵀh = 0.

To find an extremum (the sought h) we must solve

    ∂/∂h ( hᵀAᵀA h + λ(1 − hᵀh) ) = 0.

Differentiating gives

    2 AᵀA h − 2 λ h = 0.

After some manipulation we end up with

    (AᵀA − λE) h = 0,

where E denotes the n × n identity matrix. This is the characteristic equation; hence h is an eigenvector of AᵀA and λ is an eigenvalue.

The least-squares error is e = hᵀAᵀA h = hᵀλ h = λ, since hᵀh = 1. The error is therefore minimal for λ = min_i λ_i, and the sought solution is the eigenvector of AᵀA corresponding to the smallest eigenvalue.
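A minimal NumPy sketch of Derivation I, run on random data that is purely illustrative: since AᵀA is symmetric, numpy.linalg.eigh applies and returns the eigenvalues in ascending order, so the solution is the eigenvector belonging to the first (smallest) one.

```python
import numpy as np

def solve_homogeneous_eig(A):
    """Least-squares solution of A h = 0 with ||h|| = 1 via the eigenvectors
    of A^T A (Derivation I)."""
    # A^T A is symmetric, so eigh applies; eigenvalues come back in ascending order.
    w, V = np.linalg.eigh(A.T @ A)
    h = V[:, 0]          # eigenvector of the smallest eigenvalue
    return h, w[0]       # error e = h^T A^T A h = lambda_min

# Illustrative usage on random data (hypothetical, not from the slides):
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 4))
h, err = solve_homogeneous_eig(A)
print(np.linalg.norm(h))                              # 1.0 (unit-norm constraint)
print(np.allclose(err, np.linalg.norm(A @ h) ** 2))   # True: e = ||A h||^2
```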

Derivation II: SVD

Let A = U S Vᵀ, where U is m × n with orthonormal columns, S is an n × n diagonal matrix with the singular values in descending order, and V is n × n orthonormal.

From the orthonormality of U and V it follows that ||U S Vᵀ h|| = ||S Vᵀ h|| and ||Vᵀ h|| = ||h||.

Substitute y = Vᵀ h. Now we minimize ||S y|| subject to ||y|| = 1.

Remember that S is diagonal with its elements sorted in descending order. Then it is clear that y = [0, 0, ..., 1]ᵀ.

From the substitution we know that h = V y, from which it follows that the sought h is the last column of the matrix V.
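The same computation via Derivation II, again as a sketch on illustrative random data: numpy.linalg.svd returns the singular values in descending order together with Vᵀ (not V), so the sought h is the last row of Vᵀ, i.e. the last column of V. The final check confirms that it agrees with the eigenvector route of Derivation I up to sign.

```python
import numpy as np

def solve_homogeneous_svd(A):
    """Least-squares solution of A h = 0 with ||h|| = 1 via the SVD (Derivation II)."""
    # numpy returns V^T (not V) with singular values sorted in descending order,
    # so the last row of V^T is the last column of V.
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1]

# Cross-check against the eigenvector route of Derivation I (illustrative data):
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 4))
h_svd = solve_homogeneous_svd(A)
_, V = np.linalg.eigh(A.T @ A)
h_eig = V[:, 0]
print(np.allclose(abs(h_svd @ h_eig), 1.0))   # True: same direction, up to sign
```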

Further reading

Richard Hartley and Andrew Zisserman, Multiple View Geometry in Computer Vision, Cambridge University Press, 2003 (2nd edition), Appendix A5.

Gene H. Golub and Charles F. Van Loan, Matrix Computations, Johns Hopkins University Press, 1996 (3rd edition).

Eric W. Weisstein, Lagrange Multiplier. From MathWorld, A Wolfram Web Resource. http://mathworld.wolfram.com/lagrangemultiplier.html

Eric W. Weisstein, Singular Value Decomposition. From MathWorld, A Wolfram Web Resource. http://mathworld.wolfram.com/singularvaluedecomposition.html