Inverse Singular Value Problems

Chapter 8: Inverse Singular Value Problems

- IEP versus ISVP
- Existence question
- A continuous approach
- An iterative method for the IEP
- An iterative method for the ISVP

IEP versus ISVP

Inverse Eigenvalue Problem (IEP):

Given
- symmetric matrices $A_0, A_1, \ldots, A_n \in \mathbb{R}^{n \times n}$;
- real numbers $\lambda_1^* \le \cdots \le \lambda_n^*$,

find values of $c := (c_1, \ldots, c_n)^T \in \mathbb{R}^n$ such that the eigenvalues of the matrix
$$A(c) := A_0 + c_1 A_1 + \cdots + c_n A_n$$
are precisely $\lambda_1^*, \ldots, \lambda_n^*$.

Inverse Singular Value Problem (ISVP):

Given
- general matrices $B_0, B_1, \ldots, B_n \in \mathbb{R}^{m \times n}$, $m \ge n$;
- nonnegative real numbers $\sigma_1^* \ge \cdots \ge \sigma_n^*$,

find values of $c := (c_1, \ldots, c_n)^T \in \mathbb{R}^n$ such that the singular values of the matrix
$$B(c) := B_0 + c_1 B_1 + \cdots + c_n B_n$$
are precisely $\sigma_1^*, \ldots, \sigma_n^*$.
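Concretely, a candidate $c$ can be checked by forming $B(c)$ and comparing its singular values against the prescribed ones. Here is a minimal MATLAB sketch of that check; the cell array Bs and the helper name isvp_residual are illustrative, not from the lecture.

```matlab
% Evaluate B(c) = B0 + c1*B1 + ... + cn*Bn and compare its singular
% values with the prescribed sigma_star (sorted in decreasing order).
function err = isvp_residual(B0, Bs, c, sigma_star)
    B = B0;
    for k = 1:numel(Bs)
        B = B + c(k) * Bs{k};            % member of the affine family B(c)
    end
    err = norm(svd(B) - sigma_star(:));  % zero iff c solves the ISVP
end
```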

Existence Question

The IEP does not always have a solution.

Inverse Toeplitz Eigenvalue Problem (ITEP): a special case of the IEP in which $A_0 = 0$ and $A_k := (A^{(k)}_{ij})$ with
$$A^{(k)}_{ij} := \begin{cases} 1, & \text{if } |i - j| = k - 1; \\ 0, & \text{otherwise.} \end{cases}$$

Symmetric Toeplitz matrices can have arbitrary real spectra (Landau '94; a nonconstructive proof by a topological degree argument).

We are not aware of any result concerning the existence question for the ISVP.

Notation

$\mathcal{O}(n) :=$ the set of all orthogonal matrices in $\mathbb{R}^{n \times n}$.

$\Sigma = (\Sigma_{ij}) :=$ the diagonal matrix in $\mathbb{R}^{m \times n}$ with
$$\Sigma_{ij} := \begin{cases} \sigma_i^*, & \text{if } 1 \le i = j \le n; \\ 0, & \text{otherwise.} \end{cases}$$

$\mathcal{M}_s(\Sigma) := \{ U \Sigma V^T \mid U \in \mathcal{O}(m), \; V \in \mathcal{O}(n) \}$ contains all matrices in $\mathbb{R}^{m \times n}$ whose singular values are precisely $\sigma_1^*, \ldots, \sigma_n^*$.

$\mathcal{B} := \{ B(c) \mid c \in \mathbb{R}^n \}$.

Solving the ISVP amounts to finding an intersection of the two sets $\mathcal{M}_s(\Sigma)$ and $\mathcal{B}$.

A Continuous Approach

Assume
- $\langle B_i, B_j \rangle = \delta_{ij}$ for $1 \le i, j \le n$;
- $\langle B_0, B_k \rangle = 0$ for $1 \le k \le n$.

The projection of $X$ onto the linear subspace spanned by $B_1, \ldots, B_n$ is
$$P(X) = \sum_{k=1}^{n} \langle X, B_k \rangle B_k.$$

The distance from $X$ to the affine subspace $\mathcal{B}$ is
$$\mathrm{dist}(X, \mathcal{B}) = \| X - (B_0 + P(X)) \|.$$

Define, for any $U \in \mathbb{R}^{m \times m}$ and $V \in \mathbb{R}^{n \times n}$, the residual matrix
$$R(U, V) := U \Sigma V^T - (B_0 + P(U \Sigma V^T)).$$

Consider the optimization problem:
$$\text{minimize } F(U, V) := \frac{1}{2} \| R(U, V) \|^2 \quad \text{subject to } (U, V) \in \mathcal{O}(m) \times \mathcal{O}(n).$$
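A minimal MATLAB sketch of $P(X)$ and the residual $R(U,V)$, assuming Bs = {B1, ..., Bn} has been orthonormalized with respect to the Frobenius inner product (all names here are illustrative):

```matlab
% Residual R(U,V) = X - (B0 + P(X)) with X = U*Sigma*V';
% F(U,V) = 0.5*norm(R,'fro')^2 is the objective being minimized.
function R = isvp_resmat(U, V, Sigma, B0, Bs)
    X = U * Sigma * V';                          % point on M_s(Sigma)
    PX = zeros(size(X));
    for k = 1:numel(Bs)
        PX = PX + sum(sum(X .* Bs{k})) * Bs{k};  % <X,Bk>*Bk, Frobenius
    end
    R = X - (B0 + PX);
end
```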

Compute the Projected Gradient

Equip $\mathbb{R}^{m \times m} \times \mathbb{R}^{n \times n}$ with the Frobenius inner product
$$\langle (A_1, B_1), (A_2, B_2) \rangle := \langle A_1, A_2 \rangle + \langle B_1, B_2 \rangle.$$

The gradient $\nabla F$ may be interpreted as the pair of matrices
$$\nabla F(U, V) = \left( R(U, V) V \Sigma^T, \; R(U, V)^T U \Sigma \right).$$

The tangent space can be split:
$$T_{(U,V)}(\mathcal{O}(m) \times \mathcal{O}(n)) = T_U \mathcal{O}(m) \oplus T_V \mathcal{O}(n).$$

The projection is easy because
$$\mathbb{R}^{n \times n} = T_V \mathcal{O}(n) \oplus (T_V \mathcal{O}(n))^{\perp} = V \mathcal{S}(n)^{\perp} \oplus V \mathcal{S}(n),$$
where $\mathcal{S}(n)$ denotes the symmetric matrices in $\mathbb{R}^{n \times n}$, so that $\mathcal{S}(n)^{\perp}$ consists of the skew-symmetric ones.

Project the gradient $\nabla F(U, V)$ onto the tangent space $T_{(U,V)}(\mathcal{O}(m) \times \mathcal{O}(n))$:
$$g(U, V) = \left( \frac{R(U,V) V \Sigma^T U^T - U \Sigma V^T R(U,V)^T}{2} \, U, \;\; \frac{R(U,V)^T U \Sigma V^T - V \Sigma^T U^T R(U,V)}{2} \, V \right).$$

The descent vector field
$$\frac{d(U, V)}{dt} = -g(U, V)$$
defines a steepest descent flow on the manifold $\mathcal{O}(m) \times \mathcal{O}(n)$ for the objective function $F(U, V)$.
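The projected gradient translates directly into code. A sketch, reusing the hypothetical isvp_resmat helper from above:

```matlab
% Projected gradient g(U,V) on O(m) x O(n): each component is the
% skew-symmetric part of the raw gradient factor, times U or V.
function [gU, gV] = isvp_projgrad(U, V, Sigma, B0, Bs)
    R = isvp_resmat(U, V, Sigma, B0, Bs);
    A = R * V * Sigma' * U';
    gU = ((A - A') / 2) * U;     % projection onto T_U O(m)
    C = R' * U * Sigma * V';
    gV = ((C - C') / 2) * V;     % projection onto T_V O(n)
end
```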

The Differential Equation on $\mathcal{M}_s(\Sigma)$

Define $X(t) := U(t) \Sigma V(t)^T$. Then $X(t)$ satisfies the differential system
$$\frac{dX}{dt} = X \, \frac{X^T (B_0 + P(X)) - (B_0 + P(X))^T X}{2} - \frac{X (B_0 + P(X))^T - (B_0 + P(X)) X^T}{2} \, X.$$

$X(t)$ moves on the surface $\mathcal{M}_s(\Sigma)$ in the steepest descent direction so as to minimize $\mathrm{dist}(X(t), \mathcal{B})$.

This is a continuous method for the ISVP.
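For a small experiment, the flow can be integrated with MATLAB's ode45 after vectorizing $X$; this is only a sketch (the lecture's actual experiment, described later, used the Fortran subroutine ODE with much tighter tolerances):

```matlab
% Integrate the descent flow dX/dt on M_s(Sigma) from X0 up to time tf.
function X = isvp_flow(X0, B0, Bs, tf)
    [m, n] = size(X0);
    rhs = @(t, x) reshape(xdot(reshape(x, m, n), B0, Bs), [], 1);
    [~, xs] = ode45(rhs, [0 tf], X0(:));
    X = reshape(xs(end, :), m, n);
end

function dX = xdot(X, B0, Bs)
    PX = zeros(size(X));
    for k = 1:numel(Bs)
        PX = PX + sum(sum(X .* Bs{k})) * Bs{k};
    end
    Y = B0 + PX;                 % projection of X onto B
    K = (X' * Y - Y' * X) / 2;   % skew-symmetric, n x n
    H = (X * Y' - Y * X') / 2;   % skew-symmetric, m x m
    dX = X * K - H * X;          % tangent to M_s(Sigma)
end
```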

Remarks

No assumption on the multiplicity of the singular values is needed.

Any tangent vector $T(X)$ to $\mathcal{M}_s(\Sigma)$ at a point $X \in \mathcal{M}_s(\Sigma)$, about which a local chart can be defined, must be of the form
$$T(X) = X K - H X$$
for some skew-symmetric matrices $H \in \mathbb{R}^{m \times m}$ and $K \in \mathbb{R}^{n \times n}$.

An Iterative Method for the IEP

Assume all eigenvalues $\lambda_1^*, \ldots, \lambda_n^*$ are distinct. Consider:
- the affine subspace $\mathcal{A} := \{ A(c) \mid c \in \mathbb{R}^n \}$;
- the isospectral surface $\mathcal{M}_e(\Lambda) := \{ Q \Lambda Q^T \mid Q \in \mathcal{O}(n) \}$, where $\Lambda := \mathrm{diag}\{\lambda_1^*, \ldots, \lambda_n^*\}$.

Any tangent vector $T(X)$ to $\mathcal{M}_e(\Lambda)$ at a point $X \in \mathcal{M}_e(\Lambda)$ must be of the form
$$T(X) = X K - K X$$
for some skew-symmetric matrix $K \in \mathbb{R}^{n \times n}$.

A Classical Newton Method

For a function $f : \mathbb{R} \rightarrow \mathbb{R}$, the scheme is
$$x^{(\nu+1)} = x^{(\nu)} - \left( f'(x^{(\nu)}) \right)^{-1} f(x^{(\nu)}).$$

The intercept: the new iterate $x^{(\nu+1)}$ is the $x$-intercept of the tangent line to the graph of $f$ at $(x^{(\nu)}, f(x^{(\nu)}))$.

The lifting: $(x^{(\nu+1)}, f(x^{(\nu+1)}))$ is the natural lift of the intercept along the $y$-axis to the graph of $f$, from which the next tangent line will begin.

An Analogy of the Newton Method

Think of:
- the surface $\mathcal{M}_e(\Lambda)$ as playing the role of the graph of $f$;
- the affine subspace $\mathcal{A}$ as playing the role of the $x$-axis.

Given $X^{(\nu)} \in \mathcal{M}_e(\Lambda)$, there exists a $Q^{(\nu)} \in \mathcal{O}(n)$ such that $Q^{(\nu)T} X^{(\nu)} Q^{(\nu)} = \Lambda$.

The matrix $X^{(\nu)} + X^{(\nu)} K - K X^{(\nu)}$, with any skew-symmetric matrix $K$, represents a tangent vector to $\mathcal{M}_e(\Lambda)$ emanating from $X^{(\nu)}$.

Seek an $\mathcal{A}$-intercept $A(c^{(\nu+1)})$ of such a vector with the affine subspace $\mathcal{A}$.

Lift the point $A(c^{(\nu+1)}) \in \mathcal{A}$ up to a point $X^{(\nu+1)} \in \mathcal{M}_e(\Lambda)$.

Find the Intercept

Find a skew-symmetric matrix $K^{(\nu)}$ and a vector $c^{(\nu+1)}$ such that
$$X^{(\nu)} + X^{(\nu)} K^{(\nu)} - K^{(\nu)} X^{(\nu)} = A(c^{(\nu+1)}).$$

Equivalently, find $\tilde K^{(\nu)}$ such that
$$\Lambda + \Lambda \tilde K^{(\nu)} - \tilde K^{(\nu)} \Lambda = Q^{(\nu)T} A(c^{(\nu+1)}) Q^{(\nu)},$$
where $\tilde K^{(\nu)} := Q^{(\nu)T} K^{(\nu)} Q^{(\nu)}$ is again skew-symmetric.

One can find $c^{(\nu+1)}$ and $\tilde K^{(\nu)}$ separately.

The diagonal elements in the system give
$$J^{(\nu)} c^{(\nu+1)} = \lambda^* - b^{(\nu)},$$
with the known quantities
$$J^{(\nu)}_{ij} := q_i^{(\nu)T} A_j \, q_i^{(\nu)}, \quad i, j = 1, \ldots, n,$$
$$\lambda^* := (\lambda_1^*, \ldots, \lambda_n^*)^T,$$
$$b_i^{(\nu)} := q_i^{(\nu)T} A_0 \, q_i^{(\nu)}, \quad i = 1, \ldots, n,$$
where $q_i^{(\nu)}$ is the $i$-th column of the matrix $Q^{(\nu)}$. The vector $c^{(\nu+1)}$ can be solved from this linear system.

The off-diagonal elements in the system, together with $c^{(\nu+1)}$, determine $\tilde K^{(\nu)}$ (and, hence, $K^{(\nu)}$):
$$\tilde K^{(\nu)}_{ij} = \frac{q_i^{(\nu)T} A(c^{(\nu+1)}) \, q_j^{(\nu)}}{\lambda_i^* - \lambda_j^*}, \quad 1 \le i < j \le n.$$
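The intercept computation is a small linear solve followed by entrywise formulas. A MATLAB sketch, assuming distinct eigenvalues and a hypothetical cell array As = {A0, A1, ..., An}:

```matlab
% One intercept step of the Newton-like IEP iteration.
function [c, Ktil] = iep_intercept(Q, As, lambda)
    n = numel(lambda);
    J = zeros(n); b = zeros(n, 1);
    for i = 1:n
        qi = Q(:, i);
        b(i) = qi' * As{1} * qi;             % A0 contribution
        for j = 1:n
            J(i, j) = qi' * As{j+1} * qi;    % A_j contribution
        end
    end
    c = J \ (lambda(:) - b);                 % diagonal equations
    A = As{1};
    for k = 1:n, A = A + c(k) * As{k+1}; end % A(c)
    W = Q' * A * Q;
    Ktil = zeros(n);
    for i = 1:n
        for j = i+1:n                        % off-diagonal equations
            Ktil(i, j) = W(i, j) / (lambda(i) - lambda(j));
            Ktil(j, i) = -Ktil(i, j);
        end
    end
end
```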

Find the Lift-Up

There is no obvious coordinate axis to follow: solving the IEP means finding $\mathcal{M}_e(\Lambda) \cap \mathcal{A}$.

Suppose all the iterations take place near a point of intersection. Then we should have $X^{(\nu+1)} \approx A(c^{(\nu+1)})$ and also
$$A(c^{(\nu+1)}) \approx e^{-K^{(\nu)}} X^{(\nu)} e^{K^{(\nu)}}.$$

Replace $e^{K^{(\nu)}}$ by the Cayley transform
$$R := \left( I + \frac{K^{(\nu)}}{2} \right) \left( I - \frac{K^{(\nu)}}{2} \right)^{-1} \approx e^{K^{(\nu)}}.$$

Define $X^{(\nu+1)} := R^T X^{(\nu)} R \in \mathcal{M}_e(\Lambda)$. The next iteration is ready to begin.
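The Cayley transform of a skew-symmetric matrix is orthogonal, so the lift stays exactly on $\mathcal{M}_e(\Lambda)$. A one-function MATLAB sketch:

```matlab
% Cayley transform: an orthogonal matrix approximating expm(K) for skew K.
function R = cayley(K)
    I = eye(size(K, 1));
    R = (I + K/2) / (I - K/2);   % (I + K/2) * inv(I - K/2)
end
```

The lift is then Xnext = R' * X * R with R = cayley(K); it has the same spectrum as X because R is orthogonal.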

Remarks

Note that
$$X^{(\nu+1)} \approx R^T e^{K^{(\nu)}} A(c^{(\nu+1)}) e^{-K^{(\nu)}} R \approx A(c^{(\nu+1)})$$
represents a lifting of the matrix $A(c^{(\nu+1)})$ from the affine subspace $\mathcal{A}$ to the surface $\mathcal{M}_e(\Lambda)$.

The above offers a geometric interpretation of Method III developed by Friedland, Nocedal and Overton (SINUM, 1987).

The convergence is quadratic, even in the case of multiple eigenvalues.

An Iterative Approach for the ISVP

Assume
- the matrices $B_0, B_1, \ldots, B_n$ are arbitrary;
- all singular values $\sigma_1^*, \ldots, \sigma_n^*$ are positive and distinct.

Given $X^{(\nu)} \in \mathcal{M}_s(\Sigma)$, there exist $U^{(\nu)} \in \mathcal{O}(m)$ and $V^{(\nu)} \in \mathcal{O}(n)$ such that
$$U^{(\nu)T} X^{(\nu)} V^{(\nu)} = \Sigma.$$

Seek a $\mathcal{B}$-intercept $B(c^{(\nu+1)})$ of a line that is tangent to the manifold $\mathcal{M}_s(\Sigma)$ at $X^{(\nu)}$.

Lift the matrix $B(c^{(\nu+1)}) \in \mathcal{B}$ to a point $X^{(\nu+1)} \in \mathcal{M}_s(\Sigma)$.

Find the Intercept

Find skew-symmetric matrices $H^{(\nu)} \in \mathbb{R}^{m \times m}$ and $K^{(\nu)} \in \mathbb{R}^{n \times n}$, and a vector $c^{(\nu+1)} \in \mathbb{R}^n$, such that
$$X^{(\nu)} + X^{(\nu)} K^{(\nu)} - H^{(\nu)} X^{(\nu)} = B(c^{(\nu+1)}).$$

Equivalently,
$$\Sigma + \Sigma \tilde K^{(\nu)} - \tilde H^{(\nu)} \Sigma = U^{(\nu)T} B(c^{(\nu+1)}) V^{(\nu)},$$
with the (underdetermined) skew-symmetric matrices
$$\tilde H^{(\nu)} := U^{(\nu)T} H^{(\nu)} U^{(\nu)}, \qquad \tilde K^{(\nu)} := V^{(\nu)T} K^{(\nu)} V^{(\nu)}.$$

One can determine $c^{(\nu+1)}$, $\tilde H^{(\nu)}$ and $\tilde K^{(\nu)}$ separately.

There are in total
$$\frac{m(m-1)}{2} + \frac{n(n-1)}{2} + n$$
unknowns: the vector $c^{(\nu+1)}$ and the skew-symmetric matrices $\tilde H^{(\nu)}$ and $\tilde K^{(\nu)}$. There are only $mn$ equations.

Observe: the entries $\tilde H^{(\nu)}_{ij}$, $n+1 \le i < j \le m$, form $\frac{(m-n)(m-n-1)}{2}$ unknowns located at the lower right corner of $\tilde H^{(\nu)}$ that are not bound to any equations at all. Set $\tilde H^{(\nu)}_{ij} = 0$ for $n+1 \le i < j \le m$.

Denote $W^{(\nu)} := U^{(\nu)T} B(c^{(\nu+1)}) V^{(\nu)}$. Then
$$W^{(\nu)}_{ij} = \Sigma_{ij} + \Sigma_{ii} \tilde K^{(\nu)}_{ij} - \tilde H^{(\nu)}_{ij} \Sigma_{jj}.$$

Determine $c^{(\nu+1)}$

The diagonal equations, $1 \le i = j \le n$, read
$$J^{(\nu)} c^{(\nu+1)} = \sigma^* - b^{(\nu)},$$
with the known quantities
$$J^{(\nu)}_{st} := u_s^{(\nu)T} B_t \, v_s^{(\nu)}, \quad 1 \le s, t \le n,$$
$$\sigma^* := (\sigma_1^*, \ldots, \sigma_n^*)^T,$$
$$b_s^{(\nu)} := u_s^{(\nu)T} B_0 \, v_s^{(\nu)}, \quad 1 \le s \le n,$$
where $u_s^{(\nu)}$ and $v_s^{(\nu)}$ denote the column vectors of $U^{(\nu)}$ and $V^{(\nu)}$, respectively.

The vector $c^{(\nu+1)}$ is obtained; once it is known, $W^{(\nu)}$ can be formed.

Determine $\tilde H^{(\nu)}$ and $\tilde K^{(\nu)}$

For $n+1 \le i \le m$ and $1 \le j \le n$,
$$\tilde H^{(\nu)}_{ij} = -\tilde H^{(\nu)}_{ji} = -\frac{W^{(\nu)}_{ij}}{\sigma_j^*}.$$

For $1 \le i < j \le n$,
$$W^{(\nu)}_{ij} = \Sigma_{ii} \tilde K^{(\nu)}_{ij} - \tilde H^{(\nu)}_{ij} \Sigma_{jj},$$
$$W^{(\nu)}_{ji} = \Sigma_{jj} \tilde K^{(\nu)}_{ji} - \tilde H^{(\nu)}_{ji} \Sigma_{ii} = -\Sigma_{jj} \tilde K^{(\nu)}_{ij} + \tilde H^{(\nu)}_{ij} \Sigma_{ii}.$$

Solving for $\tilde H^{(\nu)}_{ij}$ and $\tilde K^{(\nu)}_{ij}$ gives
$$\tilde H^{(\nu)}_{ij} = \frac{\sigma_i^* W^{(\nu)}_{ji} + \sigma_j^* W^{(\nu)}_{ij}}{(\sigma_i^*)^2 - (\sigma_j^*)^2}, \qquad
\tilde K^{(\nu)}_{ij} = \frac{\sigma_i^* W^{(\nu)}_{ij} + \sigma_j^* W^{(\nu)}_{ji}}{(\sigma_i^*)^2 - (\sigma_j^*)^2}.$$

The intercept is now completely determined.
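Putting the last three slides together, one intercept step can be sketched in MATLAB as follows (positive, distinct singular values assumed; Bs = {B0, ..., Bn} and all helper names are illustrative):

```matlab
% One intercept step of the ISVP iteration: solve for c, then recover
% the skew-symmetric Htil (m x m) and Ktil (n x n) from W = U'*B(c)*V.
function [c, Htil, Ktil] = isvp_intercept(U, V, Bs, sigma)
    n = numel(sigma); m = size(U, 1);
    J = zeros(n); b = zeros(n, 1);
    for s = 1:n
        us = U(:, s); vs = V(:, s);
        b(s) = us' * Bs{1} * vs;
        for t = 1:n, J(s, t) = us' * Bs{t+1} * vs; end
    end
    c = J \ (sigma(:) - b);                    % diagonal equations
    B = Bs{1};
    for k = 1:n, B = B + c(k) * Bs{k+1}; end
    W = U' * B * V;
    Htil = zeros(m); Ktil = zeros(n);
    for j = 1:n
        for i = n+1:m                          % rows below the square block
            Htil(i, j) = -W(i, j) / sigma(j);
            Htil(j, i) = -Htil(i, j);
        end
        for i = 1:j-1                          % strict upper triangle, i < j <= n
            d = sigma(i)^2 - sigma(j)^2;
            Htil(i, j) = (sigma(i)*W(j, i) + sigma(j)*W(i, j)) / d;
            Ktil(i, j) = (sigma(i)*W(i, j) + sigma(j)*W(j, i)) / d;
            Htil(j, i) = -Htil(i, j); Ktil(j, i) = -Ktil(i, j);
        end
    end
end
```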

Find the Lift-Up

Define the orthogonal matrices
$$R := \left( I + \frac{H^{(\nu)}}{2} \right) \left( I - \frac{H^{(\nu)}}{2} \right)^{-1}, \qquad
S := \left( I + \frac{K^{(\nu)}}{2} \right) \left( I - \frac{K^{(\nu)}}{2} \right)^{-1}.$$

Define the lifted matrix on $\mathcal{M}_s(\Sigma)$:
$$X^{(\nu+1)} := R^T X^{(\nu)} S.$$

Observe that
$$X^{(\nu+1)} \approx R^T \left( e^{H^{(\nu)}} B(c^{(\nu+1)}) e^{-K^{(\nu)}} \right) S$$
and
$$R^T e^{H^{(\nu)}} \approx I_m, \qquad e^{-K^{(\nu)}} S \approx I_n,$$
if $H^{(\nu)}$ and $K^{(\nu)}$ are small.

For computation, one only needs the orthogonal matrices
$$U^{(\nu+1)} := R^T U^{(\nu)}, \qquad V^{(\nu+1)} := S^T V^{(\nu)}.$$

There is no need to form $X^{(\nu+1)}$ explicitly.
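Since $H^{(\nu)} = U^{(\nu)} \tilde H^{(\nu)} U^{(\nu)T}$, the update $R^T U^{(\nu)}$ equals $U^{(\nu)}$ times the Cayley transform of $-\tilde H^{(\nu)}$, so the lift can work directly with the tilded matrices returned by the intercept sketch above. A MATLAB sketch:

```matlab
% Lift-up: update U and V using the Sigma-coordinate skew matrices.
% R'*U = U*((I - Htil/2)/(I + Htil/2)) because H = U*Htil*U'.
function [U, V] = isvp_lift(U, V, Htil, Ktil)
    Im = eye(size(Htil, 1)); In = eye(size(Ktil, 1));
    U = U * ((Im - Htil/2) / (Im + Htil/2));
    V = V * ((In - Ktil/2) / (In + Ktil/2));
end
```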

Quadratic Convergence

Measure the discrepancy between pairs $(U^{(\nu)}, V^{(\nu)}) \in \mathbb{R}^{m \times m} \times \mathbb{R}^{n \times n}$ in the induced Frobenius norm.

Suppose:
- the ISVP has an exact solution at $c^*$;
- the SVD of $B(c^*)$ is $\hat U \Sigma \hat V^T$.

Define the error matrix
$$E := (E_1, E_2) := (U - \hat U, \; V - \hat V).$$

If $U \hat U^T = e^{H}$ and $V \hat V^T = e^{K}$, then
$$U \hat U^T = (E_1 + \hat U) \hat U^T = I_m + E_1 \hat U^T = e^{H} = I_m + H + O(\|H\|^2),$$
and a similar expression holds for $V \hat V^T$. Thus
$$\|(H, K)\| = O(\|E\|).$$

At the $\nu$-th stage, define
$$E^{(\nu)} := (E_1^{(\nu)}, E_2^{(\nu)}) := (U^{(\nu)} - \hat U, \; V^{(\nu)} - \hat V).$$

How far is $U^{(\nu)T} B(c^*) V^{(\nu)}$ away from $\Sigma$? Write
$$U^{(\nu)T} B(c^*) V^{(\nu)} = e^{-\hat H^{(\nu)}} \Sigma \, e^{\hat K^{(\nu)}} = \left( U^{(\nu)T} e^{-\bar H^{(\nu)}} U^{(\nu)} \right) \Sigma \left( V^{(\nu)T} e^{\bar K^{(\nu)}} V^{(\nu)} \right),$$
where the exact skew-symmetric matrices (marked here with bars to distinguish them from the computed $H^{(\nu)}$, $K^{(\nu)}$ of the algorithm) satisfy
$$\bar H^{(\nu)} = U^{(\nu)} \hat H^{(\nu)} U^{(\nu)T}, \qquad \bar K^{(\nu)} = V^{(\nu)} \hat K^{(\nu)} V^{(\nu)T}, \qquad e^{\bar H^{(\nu)}} = U^{(\nu)} \hat U^T, \qquad e^{\bar K^{(\nu)}} = V^{(\nu)} \hat V^T.$$

So $\|(\bar H^{(\nu)}, \bar K^{(\nu)})\| = O(\|E^{(\nu)}\|)$ and, by norm invariance under orthogonal transformations,
$$\|(\hat H^{(\nu)}, \hat K^{(\nu)})\| = O(\|E^{(\nu)}\|).$$

Rewrite
$$U^{(\nu)T} B(c^*) V^{(\nu)} = \Sigma + \Sigma \hat K^{(\nu)} - \hat H^{(\nu)} \Sigma + O(\|E^{(\nu)}\|^2).$$

Compare with the intercept equation:
$$U^{(\nu)T} \left( B(c^*) - B(c^{(\nu+1)}) \right) V^{(\nu)} = \Sigma (\hat K^{(\nu)} - \tilde K^{(\nu)}) - (\hat H^{(\nu)} - \tilde H^{(\nu)}) \Sigma + O(\|E^{(\nu)}\|^2).$$

The diagonal elements give
$$J^{(\nu)} (c^* - c^{(\nu+1)}) = O(\|E^{(\nu)}\|^2), \quad \text{thus} \quad c^* - c^{(\nu+1)} = O(\|E^{(\nu)}\|^2).$$

The off-diagonal elements therefore give
$$\hat H^{(\nu)} - \tilde H^{(\nu)} = O(\|E^{(\nu)}\|^2), \qquad \hat K^{(\nu)} - \tilde K^{(\nu)} = O(\|E^{(\nu)}\|^2).$$

Together,
$$\|(\tilde H^{(\nu)}, \tilde K^{(\nu)})\| = O(\|E^{(\nu)}\|), \qquad \bar H^{(\nu)} - H^{(\nu)} = O(\|E^{(\nu)}\|^2), \qquad \bar K^{(\nu)} - K^{(\nu)} = O(\|E^{(\nu)}\|^2).$$

Observe:
$$E_1^{(\nu+1)} := U^{(\nu+1)} - \hat U = R^T U^{(\nu)} - e^{-\bar H^{(\nu)}} U^{(\nu)}
= \left[ \left( I - \frac{H^{(\nu)}}{2} \right) - e^{-\bar H^{(\nu)}} \left( I + \frac{H^{(\nu)}}{2} \right) \right] \left( I + \frac{H^{(\nu)}}{2} \right)^{-1} U^{(\nu)}$$
$$= \left[ \left( \bar H^{(\nu)} - H^{(\nu)} \right) + O\!\left( \|\bar H^{(\nu)}\| \, \|H^{(\nu)}\| + \|\bar H^{(\nu)}\|^2 \right) \right] \left( I + \frac{H^{(\nu)}}{2} \right)^{-1} U^{(\nu)}.$$

It is now clear that $E_1^{(\nu+1)} = O(\|E^{(\nu)}\|^2)$. A similar argument works for $E_2^{(\nu+1)}$. We have proved that
$$\|E^{(\nu+1)}\| = O(\|E^{(\nu)}\|^2).$$

Multiple Singular Values

The preceding construction of the $\mathcal{B}$-intercept of a tangent line of $\mathcal{M}_s(\Sigma)$ allows
- no zero singular values;
- no multiple singular values.

Now assume
- all singular values are positive;
- only the first singular value $\sigma_1^*$ is multiple, with multiplicity $p$.

Observe: all the formulas still work, except that for $1 \le i < j \le p$ we only know
$$W^{(\nu)}_{ij} + W^{(\nu)}_{ji} = 0;$$
no values for $\tilde H^{(\nu)}_{ij}$ and $\tilde K^{(\nu)}_{ij}$ can be determined. (With $\sigma_i^* = \sigma_j^*$, the two off-diagonal equations collapse into this single consistency condition.)

The additional $q := \frac{p(p-1)}{2}$ equations for the vector $c^{(\nu+1)}$ give rise to an overdetermined system for $c^{(\nu+1)}$: tangent lines from $\mathcal{M}_s(\Sigma)$ may not intercept the affine subspace $\mathcal{B}$ at all.

The ISVP needs to be modified.

Modified ISVP

Given positive values $\sigma_1^* = \cdots = \sigma_p^* > \sigma_{p+1}^* > \cdots > \sigma_{n-q}^*$, find real values $c_1, \ldots, c_n$ such that the $n - q$ largest singular values of the matrix $B(c)$ are $\sigma_1^*, \ldots, \sigma_{n-q}^*$.

Find the Intercept

Use the equation
$$\hat\Sigma + \hat\Sigma \tilde K^{(\nu)} - \tilde H^{(\nu)} \hat\Sigma = U^{(\nu)T} B(c^{(\nu+1)}) V^{(\nu)}$$
to find the $\mathcal{B}$-intercept, where $\hat\Sigma$ is the diagonal matrix
$$\hat\Sigma := \mathrm{diag}\{\sigma_1^*, \ldots, \sigma_{n-q}^*, \hat\sigma_{n-q+1}, \ldots, \hat\sigma_n\}$$
and the additional singular values $\hat\sigma_{n-q+1}, \ldots, \hat\sigma_n$ are free parameters.

The Algorithm

Given $U^{(\nu)} \in \mathcal{O}(m)$ and $V^{(\nu)} \in \mathcal{O}(n)$, solve for $c^{(\nu+1)}$ from the system of equations
$$\sum_{k=1}^{n} \left( u_i^{(\nu)T} B_k \, v_i^{(\nu)} \right) c_k^{(\nu+1)} = \sigma_i^* - u_i^{(\nu)T} B_0 \, v_i^{(\nu)}, \quad i = 1, \ldots, n - q,$$
$$\sum_{k=1}^{n} \left( u_s^{(\nu)T} B_k \, v_t^{(\nu)} + u_t^{(\nu)T} B_k \, v_s^{(\nu)} \right) c_k^{(\nu+1)} = - \left( u_s^{(\nu)T} B_0 \, v_t^{(\nu)} + u_t^{(\nu)T} B_0 \, v_s^{(\nu)} \right), \quad 1 \le s < t \le p.$$

(This is a square system: $n - q$ diagonal equations plus the $q$ consistency conditions $W^{(\nu)}_{st} + W^{(\nu)}_{ts} = 0$.)

Define $\hat\sigma^{(\nu)}$ by
$$\hat\sigma_k^{(\nu)} := \begin{cases} \sigma_k^*, & \text{if } 1 \le k \le n - q; \\ u_k^{(\nu)T} B(c^{(\nu+1)}) \, v_k^{(\nu)}, & \text{if } n - q < k \le n. \end{cases}$$

Once $c^{(\nu+1)}$ is determined, calculate $W^{(\nu)}$.

Define the skew-symmetric matrices $\tilde K^{(\nu)}$ and $\tilde H^{(\nu)}$.

For $1 \le i < j \le p$, the equation to be satisfied is
$$W^{(\nu)}_{ij} = \hat\sigma_i^{(\nu)} \tilde K^{(\nu)}_{ij} - \tilde H^{(\nu)}_{ij} \hat\sigma_j^{(\nu)}.$$
There are many ways to define $\tilde K^{(\nu)}_{ij}$ and $\tilde H^{(\nu)}_{ij}$; set $\tilde K^{(\nu)}_{ij} = 0$ for $1 \le i < j \le p$.

$\tilde K^{(\nu)}$ is defined by
$$\tilde K^{(\nu)}_{ij} := \begin{cases} \dfrac{\hat\sigma_i^{(\nu)} W^{(\nu)}_{ij} + \hat\sigma_j^{(\nu)} W^{(\nu)}_{ji}}{(\hat\sigma_i^{(\nu)})^2 - (\hat\sigma_j^{(\nu)})^2}, & \text{if } 1 \le i < j \le n \text{ and } p < j; \\[2mm] 0, & \text{if } 1 \le i < j \le p. \end{cases}$$

$\tilde H^{(\nu)}$ is defined by
$$\tilde H^{(\nu)}_{ij} := \begin{cases} -\dfrac{W^{(\nu)}_{ij}}{\hat\sigma_j^{(\nu)}}, & \text{if } 1 \le i < j \le p, \text{ or } n+1 \le i \le m \text{ and } 1 \le j \le n; \\[2mm] \dfrac{\hat\sigma_i^{(\nu)} W^{(\nu)}_{ji} + \hat\sigma_j^{(\nu)} W^{(\nu)}_{ij}}{(\hat\sigma_i^{(\nu)})^2 - (\hat\sigma_j^{(\nu)})^2}, & \text{if } 1 \le i < j \le n \text{ and } p < j; \\[2mm] 0, & \text{if } n+1 \le i < j \le m. \end{cases}$$

Once $\tilde H^{(\nu)}$ and $\tilde K^{(\nu)}$ are determined, proceed with the lifting in the same way as for the ISVP.

Remarks

The iteration is no longer on a fixed manifold $\mathcal{M}_s(\Sigma)$, since $\hat\Sigma$ changes from step to step.

The algorithm for the multiple singular value case still converges quadratically.

Zero Singular Value

A zero singular value means rank deficiency. Finding a lower rank matrix in a generic affine subspace $\mathcal{B}$ is intuitively a more difficult problem; it is more likely that the ISVP has no solution.

Consider the simplest case, where
$$\sigma_1^* > \cdots > \sigma_{n-1}^* > \sigma_n^* = 0.$$

Except for $\tilde H^{(\nu)}_{in}$ (and $\tilde H^{(\nu)}_{ni}$), $i = n+1, \ldots, m$, all other quantities, including $c^{(\nu+1)}$, are well-defined.

It is necessary that $W^{(\nu)}_{in} = 0$ for $i = n+1, \ldots, m$. If this necessary condition fails, then no tangent line of $\mathcal{M}_s(\Sigma)$ from the current iterate $X^{(\nu)}$ will intersect the affine subspace $\mathcal{B}$.

Example of the Continuous Approach

- Integrator: subroutine ODE (Shampine et al., '75).
- ABSERR and RELERR set to $10^{-12}$.
- Output values examined at intervals of length 10.
- Convergence is declared when two consecutive output points differ by less than $10^{-10}$.
- A stable equilibrium point is not necessarily a solution to the ISVP; change to a different initial value $X(0)$ if necessary.

Example of the Iterative Approach

- Easy implementation in MATLAB.
- Consider the case $m = 5$ and $n = 4$.
- Basis matrices randomly generated from the Gaussian distribution.
- The numerical experiment is meant solely to examine the quadratic convergence.
- Randomly generate a vector $c^{\#} \in \mathbb{R}^4$; the singular values of $B(c^{\#})$ are used as the prescribed singular values.
- Perturb each entry of $c^{\#}$ by a uniform distribution between $-1$ and $1$, and use the perturbed vector as the initial guess.
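A sketch of this experiment, wired to the hypothetical isvp_intercept and isvp_lift helpers from earlier (details such as the iteration count are illustrative):

```matlab
% Random test problem: prescribe the singular values of B(c#), then
% start the iteration from a perturbation of c#.
m = 5; n = 4;
Bs = arrayfun(@(k) randn(m, n), 0:n, 'UniformOutput', false); % {B0,...,Bn}
c_sharp = randn(n, 1);
B = Bs{1}; for k = 1:n, B = B + c_sharp(k) * Bs{k+1}; end
sigma = svd(B);                       % prescribed singular values
c = c_sharp + (2 * rand(n, 1) - 1);   % perturbed initial guess
Bc = Bs{1}; for k = 1:n, Bc = Bc + c(k) * Bs{k+1}; end
[U, ~, V] = svd(Bc);                  % initial (U, V)
for nu = 1:15
    [c, Htil, Ktil] = isvp_intercept(U, V, Bs, sigma);
    [U, V] = isvp_lift(U, V, Htil, Ktil);
    Bc = Bs{1}; for k = 1:n, Bc = Bc + c(k) * Bs{k+1}; end
    fprintf('iter %2d: error = %.3e\n', nu, norm(svd(Bc) - sigma));
end
```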

Observations

- The limit point $c^*$ is not necessarily the same as the original vector $c^{\#}$, but the singular values of $B(c^*)$ do agree with those of $B(c^{\#})$.
- Differences between the singular values of $B(c^{(\nu)})$ and $B(c^*)$ are measured in the 2-norm.
- Quadratic convergence is observed.

Example of Multiple Singular Values

- Constructing an example is not trivial.
- Same basis matrices as before; assume $p = 2$.
- Prescribed singular values $\sigma^* = (5, 5, 2)^T$.
- The initial guess $c^{(0)}$ is found by trial.
- The ordering of the singular values may be altered along the way: the value 5 need no longer be the largest singular value. Unless the initial guess $c^{(0)}$ is close enough to an exact solution $c^*$, there is no reason to expect that the algorithm will preserve the ordering.
- Once convergence occurs, $\sigma^*$ must be part of the singular values of the final matrix.
- At the initial stage the convergence is slow, but eventually the rate picks up and becomes quadratic.