
A List of Matrix Flows with Applications

Moody T. Chu
Department of Mathematics
North Carolina State University
Raleigh, North Carolina 27695-8205

Abstract

Many mathematical problems, such as existence questions, are studied by using an appropriate realization process, either iteratively or continuously. This article is a collection of differential equations that have been proposed as special continuous realization processes. In some cases, there are remarkable connections between smooth flows and discrete numerical algorithms. In other cases, the flow approach seems advantageous in tackling very difficult problems. The flow approach has potential applications ranging from the development of new numerical algorithms to the theoretical solution of open problems. Various aspects of the recent development and applications of the flow approach are reviewed in this article.

1 Introduction

A realization process, in a broad sense, means any deductive procedure that we use to comprehend and solve problems. In mathematics, especially for existence questions, a realization process often appears in the form of an iterative procedure or a differential equation. For years researchers have taken great effort to describe, analyze, and modify realization processes. Nowadays the success of this investigation is especially evident in discrete numerical algorithms. On the other hand, the application of differential equations to issues in computational mathematics has recently been found to afford fundamental insights into the structure and behavior of existing discrete methods and, sometimes, to suggest new and improved numerical methods. This paper reflects upon a number of interesting continuous realization processes that have been proposed in the literature. Adopting terminology from dynamical systems, each continuous realization process is referred to as a flow.

This research was supported in part by the National Science Foundation under grants DMS-9006135 and DMS-913448.

Although the rich theory of differential equations can often be put to use, the dynamics of many of the proposed differential systems are not completely understood. The true impact on numerical algorithms also needs to be investigated further. The intention of this paper is to compile a list of flows with brief descriptions of possible applications, so as to stimulate further interest and advances in the flow approach to realizing a problem.

The basic idea of continuous realization methods is to connect two abstract problems by a mathematical bridge. Usually one of the abstract problems is a made-up problem whose solution is trivial, while the other is the real problem whose solution is difficult to find. The bridge usually takes the form of an integral curve of a certain ordinary differential equation that describes how the problem data, including the answer to the problem, are transformed from the simple system to the more complicated system. This idea will become clear in the next section. Obviously, the most important issue in the flow approach is the assurance that a bridge connecting the two abstract problems does exist. The construction of a bridge can be motivated in several different ways: sometimes an existing discrete numerical method may be extended directly into a continuous model [3, 8]; sometimes a differential equation arises naturally from certain physical principles [40, 44]; more often a vector field is constructed with a specific task in mind [6, 11, 15, 16].

We shall report the material only descriptively. For more extensive discussion, readers should refer to the bibliography. We present the flows on a case-by-case basis. For brevity, we encapsulate the circumstances under which the discussion is set forth by the following labels:

Original Problem: The underlying problem that is to be solved.
Discrete Method: Basic schemes of any existing discrete methods.
Motivation: Motivation or idea for the construction of a bridge (flow).
Flow: The description of the differential equation.
Initial Conditions: The starting point of the flow (the simple system).
Special Features: Special features of the flow approach.
Example: Examples or applications.
Generalization: Possible generalizations or new numerical schemes.

2 List of Flows

2.1 Linear Stationary Flows

Original Problem: Solve the linear equation

    Ax = b   (1)

where A ∈ R^{n×n} and x, b ∈ R^n.
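As a minimal numerical preview of the flow idea applied to (1), here is a sketch (assuming numpy; the test data are made up, and the identity splitting Q = I used here is just the simplest choice): for symmetric positive definite A, the flow dx/dt = b − Ax converges to the solution of Ax = b.

```python
import numpy as np

# Minimal sketch of a linear flow for Ax = b (hypothetical data, splitting Q = I):
# integrate dx/dt = b - A x by forward Euler; for symmetric positive definite A
# the trajectory converges to the solution x* = A^{-1} b.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M @ M.T + 4.0 * np.eye(4)         # SPD test matrix
b = rng.standard_normal(4)

x = np.zeros(4)                       # x(0) can be any point in R^n
h = 0.01                              # Euler step size
for _ in range(20000):                # integrate to t = 200
    x = x + h * (b - A @ x)

residual = np.linalg.norm(A @ x - b)  # -> essentially 0
print(residual)
```

Each Euler step of this flow is itself a stationary iteration, which is exactly the correspondence exploited below.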

Discrete Method: Most linear stationary methods assume the form [9]

    x_{k+1} = G x_k + c,   k = 0, 1, ...,   (2)

where

    G = I − Q^{−1}A,   c = Q^{−1}b,

and Q is a splitting matrix of A.
Motivation: Think of (2) as one Euler step with unit step size applied to a linear differential system.
Flow:

    dx/dt = −Q^{−1}Ax + c.   (3)

Initial Conditions: x(0) can be any point in R^n.
Special Features: For global convergence of (3), only the inequalities

    Re λ_i(G) < 1   (4)

for all eigenvalues λ_i of G are needed, which is much weaker than the would-be condition |λ_i(G)| < 1 for the convergence of (2).
Generalization: Solving (3) by a numerical method amounts to a new iterative scheme, including highly complicated multistep iterative schemes [25, 8].

2.2 Homotopy Flows

Original Problem: Solve the nonlinear equation

    f(x) = 0,   (5)

where f : R^n → R^n is continuously differentiable.
Discrete Method: A classical method is the Newton method [37]

    x_{k+1} = x_k − α_k (f′(x_k))^{−1} f(x_k).   (6)

Motivation: There are at least two ways to motivate the continuous flows:
1. Think of (6) as one Euler step with step size α_k applied to the differential equation [3]

    dx/ds = −(f′(x))^{−1} f(x).   (7)
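The Newton flow (7) is easy to experiment with. A sketch (assuming numpy; f is a made-up toy map): forward Euler on dx/ds = −f′(x)^{−1} f(x), checking the exponential-decay property f(x(s)) = e^{−s} f(x(0)) noted below.

```python
import numpy as np

# Sketch (assuming numpy; f is a made-up toy map): forward Euler on the Newton
# flow dx/ds = -f'(x)^{-1} f(x), checking the property f(x(s)) = e^{-s} f(x(0)).
def f(x):
    return np.array([x[0]**2 + x[1] - 3.0, x[0] - x[1]**2 + 1.0])

def fprime(x):                        # Jacobian of f
    return np.array([[2.0*x[0], 1.0],
                     [1.0, -2.0*x[1]]])

x = np.array([2.0, 2.0])              # x(0), an arbitrary starting point
f0 = f(x)
h = 1.0e-3                            # Euler step; integrate s from 0 to 1
for _ in range(1000):
    x = x - h * np.linalg.solve(fprime(x), f(x))

# Along the exact flow, f(x(1)) = e^{-1} f(x(0)); Euler matches to O(h).
err = np.linalg.norm(f(x) - np.exp(-1.0) * f0)
print(err)
```

Taking unit Euler steps in this sketch would reproduce the classical Newton iteration (6).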

2. Connect the system (5) to a trivial system by, for example, the homotopy

    H(x, t) = f(x) − t f(x_0),   (8)

where x_0 is an arbitrarily fixed point in R^n [1, 2]. Generically, the zero set H^{−1}(0) is a one-dimensional smooth manifold.
Flow: Either (7) or

    f′(x) dx/ds − (1/t) f(x) dt/ds = 0,   ‖(dx/ds, dt/ds)‖ = 1.   (9)

Initial Conditions: For (7), x(0) can be arbitrary. For (9), x(0) = x_0 and t(0) = 1.
Special Features:
1. The flow of (7) satisfies f(x(s)) = e^{−s} f(x(0)). That is, the flow moves in the direction along which ‖f(x)‖ is exponentially reduced.
2. Properties of f and the selection of x_0 must be taken into account in (9) in order that the bridge really makes the desired connection to t = 0 [1, 2].
Example: Successful applications with specially formulated homotopy functions include eigenvalue problems [9, 34], nonlinear programming problems [27], physics applications and boundary value problems [38, 44], and polynomial systems [35, 36].

2.3 Scaled Toda Flows

Original Problem: Reduce a square matrix A_0 ∈ R^{n×n} to a certain canonical form [31], e.g., a triangular form, so as to solve the eigenvalue problem

    A_0 x = λx.   (10)

Discrete Method:
1. For triangularization, use the unshifted QR algorithm:

    A_k = Q_k R_k  ⟹  A_{k+1} = R_k Q_k,   (11)

where Q_k R_k is the QR decomposition of A_k, or any other QR-type algorithm, e.g., the LU algorithm [28, 45].
2. For a general non-zero pattern to which A_0 is to be reduced, no discrete method is available.
Flow:

1. The Toda flow

    dX/dt = [X, Π_0(X)],   (12)

where [A, B] = AB − BA, Π_0(X) = X_− − X_−^T, and X_− is the strictly lower triangular part of X.
2. The scaled Toda flow

    dX/dt = [X, K ∘ X],   (13)

where K is a constant matrix and ∘ represents the Hadamard product.
Initial Conditions: X(0) = A_0.
Special Features:
1. The time-1 map of the Toda flow is equivalent to the QR algorithm [20, 40].
2. The time-1 map of the scaled Toda flow also enjoys a QR-like algorithm [18].
3. For symmetric X, K is necessarily skew-symmetric. The asymptotic behavior of (13) is completely known [18].
Motivation: The Toda lattice originates as a description of a one-dimensional lattice of particles with exponential interaction. The connection between the Toda flow and the QR algorithm was discovered by Symes [40].
Example: Different choices of the scaling matrix K give rise to different isospectral flows, including many already proposed in the literature [12, 18, 21, 41, 42]. In particular, (13) can be used to generate special canonical forms that no other methods can [10].

2.4 Projected Gradient Flows

Original Problem: Let S(n) and O(n) denote, respectively, the subspace of all symmetric matrices and the group of all orthogonal matrices in R^{n×n}. Let P(X) denote the projection of X onto a specified affine subspace of S(n). Then

    Minimize   F(X) := (1/2)‖X − P(X)‖²
    Subject to X ∈ M(X_0),   (14)

where M(X_0) := {X ∈ S(n) | X = Q^T X_0 Q, Q ∈ O(n)} and ‖·‖ is the Frobenius norm.
Discrete Method: Depending upon the nature of the projection P, the problem may or may not have a discrete method [11]. The Jacobi method [28], for example, may be applied if P(X) = diag(X).
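The equivalence between the time-1 map of the Toda flow dX/dt = [X, Π_0(X)] and one QR step can be checked numerically. A sketch (assuming numpy; the symmetric test matrix is made up): Symes' observation [40] says that if exp(X(0)) = QR, then X(1) = Q^T X(0) Q.

```python
import numpy as np

# Sketch (assuming numpy): verify Symes' observation that the time-1 map of the
# Toda flow dX/dt = [X, Pi0(X)] is a QR step: if exp(X(0)) = QR, X(1) = Q^T X(0) Q.
def pi0(X):
    L = np.tril(X, -1)                 # strictly lower triangular part of X
    return L - L.T

def toda_rhs(X):
    K = pi0(X)
    return X @ K - K @ X               # commutator [X, Pi0(X)]

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
X = 0.5 * (M + M.T)                    # symmetric initial matrix X(0)
X0 = X.copy()

h = 1.0e-3                             # classical RK4 on t in [0, 1]
for _ in range(1000):
    k1 = toda_rhs(X)
    k2 = toda_rhs(X + 0.5*h*k1)
    k3 = toda_rhs(X + 0.5*h*k2)
    k4 = toda_rhs(X + h*k3)
    X = X + (h/6.0) * (k1 + 2*k2 + 2*k3 + k4)

# QR factor exp(X0); X0 is symmetric, so exp via its eigendecomposition.
w, V = np.linalg.eigh(X0)
Q, R = np.linalg.qr(V @ np.diag(np.exp(w)) @ V.T)
Q = Q * np.sign(np.diag(R))            # normalize so that diag(R) > 0 (unique QR)
gap = np.max(np.abs(X - Q.T @ X0 @ Q)) # -> ~0
print(gap)
```

The sign normalization matters: exp(X0) is positive definite, so its QR factorization is unique once diag(R) > 0 is enforced, but `np.linalg.qr` does not guarantee that convention.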

Flow:

    dX/dt = [X, [X, P(X)]].   (15)

Motivation: The right-hand side of (15) represents the negative of the gradient of F on the feasible set M(X_0).
Initial Conditions: X(0) = X_0.
Example:
1. The flow (15) may be employed to solve the least squares approximation problem subject to spectral constraints, the inverse eigenvalue problem, and the eigenvalue problem [11]. For the latter, the flow (15) is a continuous analogue of the Jacobi method [24].
2. The flow (15) generalizes Brockett's double bracket flow [6, 7], which, in turn, has been found to have other applications in sorting, linear programming, and total least squares problems [4, 5].
Generalization: The idea of the projected gradient flow can be generalized to other types of approximation problems, as will be seen below.

2.5 Simultaneous Reduction Flows

Original Problem: Simultaneous reduction by two kinds of transformations:
1. Reduction by orthogonal similarity transformations: Given matrices A_i ∈ R^{n×n}, i = 1, ..., p, and projection maps P_i onto specified subspaces,

    Minimize   F(Q) := (1/2) Σ_{i=1}^p ‖α_i(Q)‖²
    Subject to Q ∈ O(n),   (16)

where α_i(Q) := Q^T A_i Q − P_i(Q^T A_i Q).
2. Reduction by orthogonal equivalence transformations: Given matrices B_i ∈ R^{m×n}, i = 1, ..., p, and projection maps R_i,

    Minimize   G(Q, Z) := (1/2) Σ_{i=1}^p ‖β_i(Q, Z)‖²
    Subject to Q ∈ O(m), Z ∈ O(n),   (17)

where β_i(Q, Z) := Q^T B_i Z − R_i(Q^T B_i Z).
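The projected gradient flow (15) with P(X) = diag(X) can be tried directly. A sketch (assuming numpy; the tridiagonal test matrix is made up): dX/dt = [X, [X, diag(X)]] drives a symmetric matrix toward diagonal form while preserving its spectrum, a continuous analogue of the Jacobi method.

```python
import numpy as np

# Sketch (assuming numpy; made-up tridiagonal test matrix): the projected
# gradient flow dX/dt = [X, [X, diag(X)]], i.e. (15) with P(X) = diag(X),
# diagonalizes a symmetric matrix while preserving its eigenvalues.
def rhs(X):
    D = np.diag(np.diag(X))
    K = X @ D - D @ X                  # [X, diag(X)]
    return X @ K - K @ X               # [X, [X, diag(X)]]

def offnorm(X):                        # Frobenius norm of the off-diagonal part
    return np.linalg.norm(X - np.diag(np.diag(X)))

X = np.array([[2.0, 1.0, 0.0, 0.0],
              [1.0, 3.0, 1.0, 0.0],
              [0.0, 1.0, 5.0, 1.0],
              [0.0, 0.0, 1.0, 7.0]])
eigs0 = np.sort(np.linalg.eigvalsh(X))
off0 = offnorm(X)

h = 5.0e-3                             # classical RK4 on t in [0, 20]
for _ in range(4000):
    k1 = rhs(X); k2 = rhs(X + 0.5*h*k1)
    k3 = rhs(X + 0.5*h*k2); k4 = rhs(X + h*k3)
    X = X + (h/6.0) * (k1 + 2*k2 + 2*k3 + k4)

off_final = offnorm(X)                          # -> ~0 (X becomes diagonal)
drift = np.sort(np.linalg.eigvalsh(X)) - eigs0  # -> ~0 (flow is isospectral)
print(off_final, np.max(np.abs(drift)))
```

Near a diagonal matrix the off-diagonal entries decay like exp(−(λ_i − λ_j)² t), so well-separated eigenvalues give rapid convergence.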

Discrete Method: Very few theoretical results or even numerical methods are available for simultaneous reduction problems [15, 31].
Motivation: Compute the projected gradient of (16) and (17), respectively.
Flow:
1. Orthogonal similarity flow:

    dX_i/dt = [X_i, Σ_{j=1}^p ([X_j, P_j^T(X_j)] − [X_j, P_j^T(X_j)]^T)].   (18)

2. Orthogonal equivalence flow:

    dY_i/dt = Σ_{j=1}^p { Y_i (Y_j^T R_j(Y_j) − R_j^T(Y_j) Y_j) + (R_j(Y_j) Y_j^T − Y_j R_j^T(Y_j)) Y_i }.   (19)

Initial Conditions: X_i(0) = A_i, Y_i(0) = B_i.
Example: Here is a Jacobi flow for computing singular values of a single matrix:

    dX/dt = X (X^T diag(X) − (X^T diag(X))^T) + (diag(X) X^T − (diag(X) X^T)^T) X.   (20)

Generalization: Two other related matrix flows (but not derived from projected gradients):
1. SVD flow:

    dX/dt = X Π_0(X^T X) − Π_0(X X^T) X,   X(0) = A.   (21)

2. QZ flow:

    dX_1/dt = X_1 Π_0(X_2^{−1} X_1) − Π_0(X_1 X_2^{−1}) X_1,   X_1(0) = A_1,
    dX_2/dt = X_2 Π_0(X_2^{−1} X_1) − Π_0(X_1 X_2^{−1}) X_2,   X_2(0) = A_2.   (22)

Special Features:
1. The continuous realization processes (18) or (19) have the advantage that the desired form to which the matrices are reduced can be almost arbitrary, and that if a desired form is not attainable, then the limit point of the differential system gives a way of measuring the distance from the best reduced matrices to the nearest matrices that have the desired form.
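The isospectral character of these flows is easy to test numerically. A sketch (assuming numpy; random made-up data) for the SVD flow dX/dt = X Π_0(X^T X) − Π_0(X X^T) X: both Π_0 factors are skew-symmetric, so X evolves by orthogonal equivalence transformations and its singular values stay fixed.

```python
import numpy as np

# Sketch (assuming numpy; random test matrix): integrate the SVD flow
# dX/dt = X Pi0(X^T X) - Pi0(X X^T) X and check that the singular values
# are invariant (both Pi0 terms are skew-symmetric, so the motion is an
# orthogonal equivalence transformation of X).
def pi0(A):
    L = np.tril(A, -1)                # strictly lower triangular part
    return L - L.T                    # skew-symmetric

def rhs(X):
    return X @ pi0(X.T @ X) - pi0(X @ X.T) @ X

rng = np.random.default_rng(3)
X = rng.standard_normal((3, 3))
sv0 = np.linalg.svd(X, compute_uv=False)

h = 1.0e-3                            # classical RK4 on t in [0, 2]
for _ in range(2000):
    k1 = rhs(X); k2 = rhs(X + 0.5*h*k1)
    k3 = rhs(X + 0.5*h*k2); k4 = rhs(X + h*k3)
    X = X + (h/6.0) * (k1 + 2*k2 + 2*k3 + k4)

sv_drift = np.linalg.svd(X, compute_uv=False) - sv0   # -> ~0
print(sv_drift)
```

The same invariance check (eigenvalues of a pencil in place of singular values) applies to the QZ flow.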

2. Just as the Toda lattice (12) models the QR algorithm, the system (21) models the SVD algorithm [14] for A ∈ R^{m×n}, and (22) models the QZ algorithm [13] for the matrix pencil (A_1, A_2) ∈ R^{n×n} × R^{n×n}.

2.6 Inverse Eigenvalue Flows

Original Problem: Given a set of real numbers {λ_1, ..., λ_n}, consider two kinds of inverse eigenvalue problems:
1. Given A_0, ..., A_n ∈ S(n) that are mutually orthonormal, find c = [c_1, ..., c_n] such that

    A(c) := A_0 + Σ_{i=1}^n c_i A_i   (23)

has the prescribed set as its spectrum. A special case is where A(c) is a Toeplitz matrix, which is known, thus far, to be an open problem [22].
2. Find a symmetric non-negative matrix P that has the prescribed set as its spectrum.
Discrete Method: A few locally convergent Newton-like algorithms are available for the first problem [26, 33]. Little is known for the non-negative matrix problem [3].
Motivation: Minimize the distance between the isospectral surface and the set of matrices of the desired form.
Flow:
1. Inverse eigenvalue problem [11]:

    dX/dt = [X, [X, A_0 + P(X)]],   (24)

where

    P(X) = Σ_{i=1}^n ⟨X, A_i⟩ A_i   (25)

and ⟨·, ·⟩ denotes the Frobenius inner product.
2. Inverse eigenvalue problem for non-negative matrices [16]:

    dX/dt = [X, [X, Y]],   (26)
    dY/dt = 4 Y ∘ (X − Y).   (27)

Initial Conditions: For both (24) and (26), X(0) = diag{λ_1, ..., λ_n}. For (27), Y(0) can be any non-negative matrix.
Generalization:

1. For the inverse Toeplitz eigenvalue problem, the descent flow (24) may converge to a stationary point that is not Toeplitz. A new flow that seems to converge globally is [23]

    dX/dt = [X, k(X)],   (28)

where

    k_{ij}(X) := x_{i+1,j} − x_{i,j−1}   if 1 ≤ i < j ≤ n,
                 0                       if 1 ≤ i = j ≤ n,
                 x_{i,j−1} − x_{i+1,j}   if 1 ≤ j < i ≤ n.   (29)

2. The idea of (24) can be generalized to the inverse singular value problem:

    dX/dt = X (X^T (B_0 + R(X)) − (B_0 + R(X))^T X)   (30)
            − (X (B_0 + R(X))^T − (B_0 + R(X)) X^T) X,   (31)

where

    R(X) = Σ_{k=1}^n ⟨X, B_k⟩ B_k   (32)

and B_0, B_1, ..., B_n ∈ R^{m×n} are prescribed mutually orthonormal matrices. Recently, insights drawn from (30) have given rise to new iterative methods [19].

2.7 Complex Flows

Original Problem: Most of the discussion hitherto can be generalized to the complex-valued case. One such example is the nearest normal matrix problem [30, 39].
Discrete Method: The Jacobi algorithm [39] can be used.
Motivation: The nearest normal matrix problem is equivalent to

    Minimize   H(U) := (1/2)‖U*AU − diag(U*AU)‖²
    Subject to U ∈ U(n),   (33)

where U(n) is the group of all unitary matrices in C^{n×n}.
Flow:

    dU/dt = U ([W, diag(W)*] − [W, diag(W)*]*),   (34)
    dW/dt = [W, [W, diag(W)*] − [W, diag(W)*]*].   (35)

Initial Conditions: U(0) = I and W(0) = A.
Special Features: The putative nearest normal matrix to A is given by Z := U(∞) diag(W(∞)) U(∞)* [15].
Generalization: Least squares approximation by real normal matrices can also be done by a method described by Chu [17]:

    dX/dt = [X, [X, A^T] − [X, A^T]^T].   (36)

3 Conclusion

Most matrix differential equations are by nature complicated, since the components are coupled into nonlinear terms. Nonetheless, as we have demonstrated, there have been substantial advances in understanding some of the dynamics. For the time being, the numerical implementation is still very primitive. But most important of all, we think there are many opportunities where new algorithms may be developed from the realization process. It is hoped that this paper has conveyed some of the value of this idea.

References

[1] E. Allgower, A survey of homotopy methods for smooth mappings, in Numerical Solution of Nonlinear Equations, Lecture Notes in Math., 878, 1980, 1-29.
[2] E. Allgower and K. Georg, Simplicial and continuation methods for approximating fixed points and solutions to systems of equations, SIAM Review, 22(1980), 28-85.
[3] A. Berman and R. J. Plemmons, Non-negative Matrices in the Mathematical Sciences, Academic Press, New York, 1979.
[4] A. M. Bloch, R. W. Brockett and T. S. Ratiu, A new formulation of the generalized Toda lattice equations and their fixed point analysis via the momentum map, Bull. Amer. Math. Soc. (N.S.), 23(1990), 477-485.
[5] A. M. Bloch, Steepest descent, linear programming, and Hamiltonian flows, Contemporary Math., 114(1990), 77-88.
[6] R. W. Brockett, Dynamical systems that sort lists, diagonalize matrices and solve linear programming problems, in Proceedings of the 27th IEEE Conference on Decision and Control, IEEE, 1988, 799-803, and Linear Alg. Appl., 146(1991), 79-91.
[7] R. W. Brockett, Least squares matching problems, Linear Alg. Appl., 122/123/124(1989), 761-777.
[8] M. T. Chu, On the continuous realization of iterative processes, SIAM Review, 30(1988), 375-387.
[9] M. T. Chu, T. Y. Li, and T. Sauer, Homotopy method for general λ-matrix problems, SIAM J. Matrix Anal. Appl., 9(1988), 528-536.
[10] M. T. Chu and L. K. Norris, Isospectral flows and abstract matrix factorizations, SIAM J. Numer. Anal., 25(1988), 1383-1391.
[11] M. T. Chu and K. R. Driessel, The projected gradient method for least squares matrix approximations with spectral constraints, SIAM J. Numer. Anal., 27(1990), 1050-1060.
[12] M. T. Chu, The generalized Toda flow, the QR algorithm and the center manifold theorem, SIAM J. Alg. Disc. Meth., 5(1984), 187-210.
[13] M. T. Chu, A continuous approximation to the generalized Schur decomposition, Linear Alg. Appl., 78(1986), 119-132.
[14] M. T. Chu, On a differential equation approach to the singular value decomposition of bi-diagonal matrices, Linear Alg. Appl., 80(1986), 71-79.

[15] M. T. Chu, A continuous Jacobi-like approach to the simultaneous reduction of real matrices, Linear Alg. Appl., 147(1991), 75-96.
[16] M. T. Chu and K. R. Driessel, Constructing symmetric non-negative matrices with prescribed eigenvalues by differential equations, SIAM J. Math. Anal., 22(1991), 1372-1387.
[17] M. T. Chu, Least squares approximation by real normal matrices with specified spectrum, SIAM J. Matrix Anal. Appl., 12(1991), 115-127.
[18] M. T. Chu, Scaled Toda-like flows, SIAM J. Numer. Anal., submitted, 1992.
[19] M. T. Chu, Numerical methods for inverse singular value problems, SIAM J. Numer. Anal., 29(1992), 885-903.
[20] P. Deift, T. Nanda and C. Tomei, Differential equations for the symmetric eigenvalue problem, SIAM J. Numer. Anal., 20(1983), 1-22.
[21] J. Della-Dora, Numerical linear algorithms and group theory, Linear Alg. Appl., 10(1975), 67-83.
[22] P. Delsarte and Y. Genin, Spectral properties of finite Toeplitz matrices, in Proceedings of the 1983 International Symposium on Mathematical Theory of Networks and Systems, Beer-Sheva, Israel, 194-213.
[23] K. R. Driessel and M. T. Chu, Can real symmetric Toeplitz matrices have arbitrary real spectra?, preprint, 1988.
[24] K. R. Driessel, On isospectral gradient flow solving matrix eigenproblems using differential equations, in Inverse Problems, J. R. Cannon and U. Hornung, eds., ISNM 77, Birkhäuser, 1986, 69-91.
[25] T. Eirola and O. Nevanlinna, What do multistep methods approximate?, Numer. Math., 53(1988), 559-569.
[26] S. Friedland, J. Nocedal and M. L. Overton, The formulation and analysis of numerical methods for inverse eigenvalue problems, SIAM J. Numer. Anal., 24(1987), 634-667.
[27] C. B. Garcia and W. I. Zangwill, Pathways to Solutions, Fixed Points, and Equilibria, Prentice-Hall, Englewood Cliffs, NJ, 1981.
[28] G. H. Golub and C. F. Van Loan, Matrix Computations, 2nd ed., Johns Hopkins University Press, Baltimore, 1989.
[29] L. A. Hageman and D. M. Young, Applied Iterative Methods, Academic Press, New York, 1981.

[30] N. J. Higham, Matrix nearness problems and applications, in Applications of Matrix Theory, S. Barnett and M. J. C. Gover, eds., Oxford University Press, 1989, 1-27.
[31] R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, 1991.
[32] H. B. Keller, Global homotopies and Newton methods, in Recent Advances in Numerical Analysis, C. de Boor and G. Golub, eds., Academic Press, New York, 1978, 73-94.
[33] D. P. Laurie, A numerical approach to the inverse Toeplitz eigenproblem, SIAM J. Sci. Stat. Comput., 9(1988), 401-405.
[34] T. Y. Li and N. H. Rhee, Homotopy algorithm for symmetric eigenvalue problems, Numer. Math., 55(1989), 265-280.
[35] T. Y. Li, T. Sauer and J. Yorke, Numerical solution of a class of deficient polynomial systems, SIAM J. Numer. Anal., 24(1987), 435-451.
[36] A. P. Morgan, Solving Polynomial Systems Using Continuation for Scientific and Engineering Problems, Prentice-Hall, Englewood Cliffs, NJ, 1987.
[37] J. M. Ortega and W. C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, 1970.
[38] W. C. Rheinboldt, Numerical Analysis of Parametrized Nonlinear Equations, John Wiley and Sons, New York, 1986.
[39] A. Ruhe, Closest normal matrix finally found!, BIT, 27(1987), 585-595.
[40] W. W. Symes, The QR algorithm and scattering for the finite non-periodic Toda lattice, Physica D, 4(1982), 275-280.
[41] D. S. Watkins, Isospectral flows, SIAM Review, 26(1984), 379-391.
[42] D. S. Watkins and L. Elsner, Self-similar flows, Linear Alg. Appl., 110(1988), 213-242.
[43] L. T. Watson, S. C. Billups and A. P. Morgan, HOMPACK: A suite of codes for globally convergent homotopy algorithms, ACM Trans. Math. Software, 13(1987), 281-310.
[44] L. T. Watson, Numerical linear algebra aspects of globally convergent homotopy methods, SIAM Review, 28(1986), 529-545.
[45] J. H. Wilkinson, The Algebraic Eigenvalue Problem, Clarendon Press, Oxford, 1965.