Generalized Shifted Inverse Iterations on Grassmann Manifolds
Proceedings of the Sixteenth International Symposium on Mathematical Theory of Networks and Systems (MTNS 2004), Leuven, Belgium

Generalized Shifted Inverse Iterations on Grassmann Manifolds (1)

J. Jordan (a), P.-A. Absil (b) and R. Sepulchre (c)

(a) Department of Mathematics, University of Würzburg, Würzburg, Germany. jordan@mathematik.uni-wuerzburg.de
(b) School of Computational Science and Information Technology, Florida State University, Tallahassee, FL, USA. absil@csit.fsu.edu
(c) Department of Electrical Engineering and Computer Science, Institut Montefiore, B28 Université de Liège, B-4000 Liège, Belgium. r.sepulchre@ulg.ac.be

Abstract: We discuss a family of feedback maps for the generalized Inverse Iteration on the Grassmann manifold. The fixed points of the resulting algorithms correspond to the eigenspaces of a given matrix. A sufficient condition for local convergence is given.

1 Introduction

In many applications it is necessary to find a p-dimensional eigenspace of a given matrix A. There exist several different strategies to design algorithms for eigenspace computation; see for example the approaches in [7, 8, 10]. A classical and very successful algorithm for the case p = 1 and A = A^T is the Rayleigh quotient iteration (RQI). Its dynamics can be described on the projective space; see for example [4, 9, 11]. A block version of the RQI method for 1 ≤ p < n was proposed in [3]. The iteration was shown to induce an iteration on the Grassmann manifold (i.e. the set of p-dimensional subspaces of R^n) and was therefore called Grassmann-RQI. Assuming A = A^T, the Grassmann-RQI is locally cubically convergent to a p-dimensional invariant subspace of A.

(1) This paper presents research partially supported by the Belgian Program on Interuniversity Poles of Attraction, initiated by the Belgian State, Prime Minister's Office for Science, Technology and Culture. This work was completed while the first author was a guest at the University of Liège under a grant from the Control Training Site (CTS).
The second author's work was supported by the National Science Foundation of the USA under Grant ACI and by the School of Computational Science and Information Technology of Florida State University.
The Grassmann-RQI can be interpreted as a shifted Inverse Iteration on the Grassmann manifold with a certain feedback control. In this paper we generalize this idea by using different feedback strategies instead of the Rayleigh quotient. We introduce a set of feedback laws which ensure that the corresponding algorithm is well posed on the Grassmann manifold. As for the Grassmann-RQI, the eigenspaces correspond to the fixed points of the algorithm. Furthermore, we prove local convergence for a certain set of algorithms.

The paper is organized as follows. In Section 2 we generalize the Grassmann-RQI; to this end we introduce a set of feedback maps. In Section 3 we discuss the algebraic structure of the feedback maps. Section 4 deals with the singularities of the algorithm. The correspondence between fixed points and eigenspaces is discussed in Section 5. We give sufficient criteria for local convergence in Section 6. Finally, we give some concluding remarks in Section 7.

2 The generalized shifted Inverse Iteration

The Grassmann Rayleigh Quotient Iteration described in [1, 3] is a subspace iteration on the Grassmann manifold Grass(p, n). It can be interpreted as a discrete-time system with a certain feedback control. In this section we introduce a family of feedback controls which generalizes this idea. Note that we do not assume symmetry of the matrix A in this section.

We use the following notation. By ST(p, n) we denote the set of real full-rank n-by-p matrices. Note that ST(p, n) is open in R^{n×p} and therefore has, in a canonical way, a smooth manifold structure (it is called the noncompact Stiefel manifold). Let π be the canonical projection π : ST(p, n) → Grass(p, n) that maps Y ∈ ST(p, n) to its column space. For any X, Y ∈ ST(p, n) there exists M ∈ GL_p(R) such that X = YM if and only if π(X) = π(Y).
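The identification π(X) = π(Y) ⇔ X = YM can be tested numerically by comparing the orthogonal projectors onto the two column spaces. A minimal sketch (assuming NumPy; the helper name `projector` is ours):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 6, 2

# A full-rank n-by-p matrix Y is a point of ST(p, n); pi(Y) is its column space.
Y = rng.standard_normal((n, p))
M = rng.standard_normal((p, p)) + 3 * np.eye(p)   # some M in GL_p(R)
X = Y @ M                                         # same column space: pi(X) = pi(Y)

def projector(Z):
    """Orthogonal projector onto the column space pi(Z)."""
    Q, _ = np.linalg.qr(Z)
    return Q @ Q.T

# pi(X) = pi(Y) if and only if the two projectors coincide.
assert np.allclose(projector(X), projector(Y))
```

The projector depends only on the subspace, not on the chosen basis, which is why it is a convenient computational proxy for a point of Grass(p, n).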
Given an initial iterate 𝒴_0 ∈ Grass(p, n), the Grassmann-RQI [3] computes a sequence of subspaces 𝒴_t = Φ_R(𝒴_{t−1}), t = 1, ..., t_final, where Φ_R is defined as follows.

Algorithm 2.1 (Grassmann-RQI mapping Φ_R) Given 𝒴 ∈ Grass(p, n),
1) Choose Y ∈ π^{−1}(𝒴), i.e. a matrix Y ∈ ST(p, n) with π(Y) = 𝒴.
2) Solve the Sylvester equation
   A Y_+ − Y_+ R(Y) = Y    (1)
with the Rayleigh quotient map R : Y ↦ (Y^T Y)^{−1} Y^T A Y.
3) Define Φ_R(𝒴) := 𝒴_+ := π(Y_+).

The map Φ_R is well defined under the following assumptions: (i) the Sylvester equation (1) admits one and only one solution, (ii) this solution has full rank, and (iii) 𝒴_+ is independent of the choice of Y ∈ π^{−1}(𝒴). One can show that assumptions (i) and (ii) hold for an open and dense subset of matrices Y ∈ ST(p, n).
Moreover, if (i) and (ii) are fulfilled, then (iii) is fulfilled as well. This is due to the following homogeneity property of R:

∀ X ∈ ST(p, n), M ∈ GL_p(R) : R(XM) = M^{−1} R(X) M.    (2)

We call a map F : ST(p, n) → R^{p×p} with Property (2) a feedback map, and we denote the set of all feedback maps by ℱ. For any F ∈ ℱ we define the iteration mapping Φ_F as follows.

Algorithm 2.2 (Grassmann shifted Inverse Iteration mapping Φ_F) Given 𝒴 ∈ Grass(p, n),
1) Choose Y ∈ π^{−1}(𝒴).
2) Solve the Sylvester equation
   A Y_+ − Y_+ F(Y) = Y.    (3)
3) Define Φ_F(𝒴) := 𝒴_+ := π(Y_+).

We call the corresponding iterative algorithm 𝒴_k = Φ_F(𝒴_{k−1}) the Generalized Shifted Inverse Iteration. Note that Property (2) ensures that the new iterate 𝒴_+ does not depend on the choice of Y in step 1 of the algorithm. It is also possible to choose a time-varying F ∈ ℱ. This leads to a discrete-time control system

𝒴_0 ∈ Grass(p, n),  𝒴_{t+1} = Φ(F_t, 𝒴_t),  F_t ∈ ℱ.    (4)

In this paper we consider algorithms of type 2.2 with fixed F ∈ ℱ; the dynamic properties of (4) will be the aim of future work.

3 The algebra of feedback maps

We have defined the Generalized Shifted Inverse Iteration for any feedback map F ∈ ℱ. Obviously ℱ is not empty, since the Rayleigh quotient map belongs to ℱ. In this section we give some more examples and show that ℱ has a rich algebraic structure.

Theorem 3.1 With multiplication FG : X ↦ F(X)G(X), addition F + G : X ↦ F(X) + G(X), scalar multiplication λF : X ↦ λF(X), zero element X ↦ 0 ∈ R^{p×p} and one element X ↦ I ∈ R^{p×p}, ℱ is a real algebra.

Using Theorem 3.1 it is not difficult to construct examples of feedback maps. In particular, every algebraic combination of the following examples is an element of ℱ.

Examples 3.2
1) F ≡ 0. This choice of F leads to the (unshifted) Inverse Iteration on Grass(p, n).
2) The Rayleigh quotient map R : X ↦ (X^T X)^{−1} X^T A X is an element of ℱ. Note that F := R gives Algorithm 2.1.
3) For W ∈ R^{n×p}, the map F_W : X ↦ (W^T X)^{−1} W^T A X, defined for all X with W^T X ∈ GL_p(R), is in ℱ.
4) Let F be a feedback map and f : ST(p, n) → R be a map with f(XM) = f(X) for all X ∈ ST(p, n) and all M ∈ GL_p(R). Then the map fF : X ↦ f(X) F(X) is an element of ℱ.
5) Let B : ST(p, n) → R^{n×n} be a map with B(XM) = B(X) for all X ∈ ST(p, n) and all M ∈ GL_p(R). Then the map F_B : X ↦ (X^T X)^{−1} X^T B(X) X is also an element of ℱ.

One can construct an infinite set of linearly independent maps F_α ∈ ℱ. Therefore ℱ has infinite dimension as a vector space.

4 Singularities of Φ_F

In general, there may exist subspaces 𝒴 ∈ Grass(p, n) for which Φ_F(𝒴) is not a well-defined element of Grass(p, n). This happens if and only if either Equation (3) fails to have a unique solution, or the unique solution fails to have full rank. Remarkably, under convenient conditions on F, Φ_F is well defined on a generic subset of Grass(p, n).

By M_F we denote the set of all matrices Y ∈ ST(p, n) such that Equation (3) has a unique solution and this solution has full rank. If X ∈ ST(p, n) and Y ∈ ST(p, n) represent the same element of Grass(p, n) (i.e. π(X) = π(Y)), then X ∈ M_F if and only if Y ∈ M_F. Therefore, the question whether Φ_F is well defined does not depend on the choice of the representative Y of 𝒴 ∈ Grass(p, n).

A reasonable assumption on F is that it is a rational function of the entries y_ij of Y ∈ ST(p, n), as is the case for the generalized Rayleigh quotient map. Our results also hold for a wider class of feedback maps. We call a continuous map F : A → B quasi-open if for every S ⊆ A with nonempty interior, F(S) has nonempty interior in B.

Theorem 4.1 Let F : R^{n×p} → R^{p×p} be rational, or quasi-open and continuous, on ST(p, n).
a) The set of matrices Y for which Equation (3) has a unique solution is open and dense, unless F ≡ λ_p I for some eigenvalue λ_p of A.
b) π(M_F) is either open and dense in Grass(p, n), or empty.
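The examples above are easy to experiment with numerically. The sketch below (assuming NumPy; all function names are ours) implements the Rayleigh quotient map and F_W, and checks the homogeneity Property (2) as well as closure under the algebra multiplication of Theorem 3.1:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 6, 2
A = rng.standard_normal((n, n))

# Example 2: Rayleigh quotient map R(X) = (X^T X)^{-1} X^T A X.
def rayleigh(X):
    return np.linalg.solve(X.T @ X, X.T @ A @ X)

# Example 3: F_W(X) = (W^T X)^{-1} W^T A X for a fixed W.
W = rng.standard_normal((n, p))
def F_W(X):
    return np.linalg.solve(W.T @ X, W.T @ A @ X)

# Algebra multiplication of Theorem 3.1: (F G)(X) = F(X) G(X).
def product(F, G):
    return lambda X: F(X) @ G(X)

def satisfies_homogeneity(F, X, M):
    """Check Property (2): F(X M) = M^{-1} F(X) M."""
    return np.allclose(F(X @ M), np.linalg.inv(M) @ F(X) @ M)

X = rng.standard_normal((n, p))
M = rng.standard_normal((p, p)) + 3 * np.eye(p)   # generic M in GL_p(R)

assert satisfies_homogeneity(rayleigh, X, M)
assert satisfies_homogeneity(F_W, X, M)
# The product of two feedback maps is again a feedback map.
assert satisfies_homogeneity(product(rayleigh, F_W), X, M)
```

The last assertion illustrates why ℱ is closed under multiplication: F(XM)G(XM) = M^{−1}F(X)M M^{−1}G(X)M = M^{−1}F(X)G(X)M.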
Note that the case π(M_F) = ∅ is rather exceptional and easy to verify. In particular, this is the case if F ≡ λ_p I for some eigenvalue λ_p of A.

5 Correspondence between fixed points and eigenspaces

If the feedback law F is an element of ℱ, then the fixed points of Algorithm 2.2 are related to the eigenspaces of A. Let 𝒱 be a fixed point of the map Φ_F : 𝒴 ↦ 𝒴_+. Then there exists M ∈ GL_p(R) such that

A Y M − Y M F(Y) = Y.    (5)

Using Property (2) we get A Y = Y P with P = M^{−1} + F(Y M^{−1}) ∈ R^{p×p}. Thus, A π(Y) ⊆ π(Y).

Theorem 5.1 If 𝒴 ∈ Grass(p, n) is a fixed point of Φ_F, then 𝒴 is an eigenspace of A. Conversely, if 𝒴 is an eigenspace of A, then 𝒴 is a fixed point of Φ_F provided that 𝒴 ∈ π(M_F).

Observe that the unshifted algorithm (i.e. the choice F ≡ 0) reduces to the Inverse Iteration. In this case the set of fixed points and the set of eigenspaces of A coincide. In the shifted algorithm no new fixed point is created, but some eigenspaces may become singularities. This is for instance the case for the Grassmann-RQI: its very nature makes every eigenspace a singularity of the algorithm, thereby accelerating the rate of convergence. Nevertheless, the Grassmann-RQI mapping Φ_R has a continuous extension such that the fixed points of the extended map coincide with the eigenspaces.

6 Local convergence

In the following we state a sufficient condition on F which guarantees local convergence of Algorithm 2.2 for symmetric matrices A. To measure distances on Grass(p, n) we use d(𝒳, 𝒴) := ‖Π_𝒳 − Π_𝒴‖_2, where Π_𝒳 denotes the orthogonal projection onto 𝒳. Note that the topology induced on Grass(p, n) by the distance d is identical to the one induced by the canonical projection π : ST(p, n) → Grass(p, n) (see [6]).

Because the following theorem is stated in local coordinates, we need some terminology and properties of the geometry of Grass(p, n). Let 𝒳 ∈ Grass(p, n) be a fixed element. We choose an orthogonal X ∈ π^{−1}(𝒳) and X_⊥ ∈ R^{n×(n−p)} such that Q := (X X_⊥) ∈ O_n(R). Furthermore, we use the notation

Q^T A Q = [ A_11  A_12 ; A_21  A_22 ].    (6)

𝒳 is called spectral (with respect to A) if A_11 ∈ R^{p×p} and A_22 ∈ R^{(n−p)×(n−p)} have no eigenvalues in common. For 𝒴 ∈ Grass(p, n) which is not orthogonal to 𝒳 (i.e. X^T Y ∈ GL_p(R) for Y ∈ π^{−1}(𝒴)), pick Ỹ ∈ π^{−1}(𝒴) and define σ_𝒳(𝒴) = Ỹ (X^T Ỹ)^{−1}. One easily verifies that σ_𝒳(𝒴) is independent of the choice of Ỹ ∈ π^{−1}(𝒴).
Thus, the map K_𝒳 : Grass(p, n) → R^{(n−p)×p}, 𝒴 ↦ X_⊥^T σ_𝒳(𝒴), is well defined. Note that K_𝒳 defines a coordinate chart for Grass(p, n). The distance of a point 𝒴 ∈ Grass(p, n) which is not orthogonal to 𝒳 can be approximated in terms of the local coordinate K_𝒳(𝒴) by

d(𝒳, 𝒴) = ‖K_𝒳(𝒴)‖_2 + O(‖K_𝒳(𝒴)‖_2^3).    (7)
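As a numerical illustration, one can run Algorithm 2.2 with the Rayleigh quotient feedback on a symmetric matrix and monitor the distance d between successive iterates via their orthogonal projectors. This is a sketch, not the authors' implementation; it assumes SciPy's `solve_sylvester` (which solves A X + X B = Q) for Equation (3), and all names are ours:

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(2)
n, p = 8, 2
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                             # symmetric, as in Section 6

def rayleigh(Y):
    """Rayleigh quotient map R(Y) = (Y^T Y)^{-1} Y^T A Y."""
    return np.linalg.solve(Y.T @ Y, Y.T @ A @ Y)

def projector(Z):
    """Orthogonal projector onto the column space pi(Z)."""
    Q, _ = np.linalg.qr(Z)
    return Q @ Q.T

def step(Y):
    """One step of Algorithm 2.2 with F = R: solve A Y+ - Y+ R(Y) = Y."""
    Yp = solve_sylvester(A, -rayleigh(Y), Y)  # A Y+ + Y+ (-R(Y)) = Y
    return np.linalg.qr(Yp)[0]                # orthonormal representative of pi(Y+)

Y = np.linalg.qr(rng.standard_normal((n, p)))[0]
for k in range(15):
    Ynew = step(Y)
    # distance between iterates: d(pi(Y), pi(Ynew)) = ||Pi_Y - Pi_Ynew||_2
    dist = np.linalg.norm(projector(Y) - projector(Ynew), 2)
    print(f"iter {k}: d = {dist:.2e}")
    Y = Ynew
    # stop once pi(Y) is (numerically) an invariant subspace of A
    if np.linalg.norm(A @ Y - Y @ (Y.T @ A @ Y)) < 1e-10:
        break

residual = np.linalg.norm(A @ Y - Y @ (Y.T @ A @ Y))
assert residual < 1e-8   # Theorem 5.1: fixed points are eigenspaces of A
```

The early stop matters in practice: close to an eigenspace the shift R(Y) nearly matches eigenvalues of A, so Equation (3) becomes nearly singular, which is exactly the mechanism that accelerates convergence.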
For a deeper introduction to the geometry of Grass(p, n) see [1, 3, 2, 10]. The following theorem gives a sufficient condition for local convergence of Algorithm 2.2.

Theorem 6.1 Let A be a symmetric n-by-n matrix and 𝒳 a p-dimensional spectral eigenspace of A. Let X ∈ π^{−1}(𝒳) be orthogonal and θ > 0 a constant. Let F ∈ ℱ be continuous with the property

‖F(σ_𝒳(𝒴)) − X^T A X‖_2 = O(‖K_𝒳(𝒴)‖_2^θ)    (8)

for all 𝒴 in a neighborhood of 𝒳. Then the Grassmann shifted Inverse Iteration mapping Φ_F admits a continuous extension on a neighborhood of 𝒳. The point 𝒳 is an attractive fixed point of the extended mapping, and the rate of convergence is θ + 1.

In particular, Theorem 6.1 gives locally cubic convergence for the Rayleigh quotient R(Y) = (Y^T Y)^{−1} Y^T A Y; this result was already proved in [1]. It is possible to construct other maps F which fulfill the conditions of the theorem. Nevertheless, Condition (8) is certainly a very hard restriction on the choice of F: if one wants to apply Theorem 6.1 to prove cubic convergence close to a certain eigenspace 𝒳, then F has to behave locally like the Rayleigh quotient map. On the other hand, since we have freedom in the choice of F ∈ ℱ, Algorithm 2.2 may open new possibilities to improve the global behavior of the iteration.

7 Conclusion and future work

Given a matrix A ∈ R^{n×n}, we have constructed a family of iterations defined on sufficiently large subsets of Grass(p, n). The fixed points of the algorithms correspond to the p-dimensional eigenspaces of A. Therefore these algorithms may be used for eigenspace calculations. Furthermore, we state a condition for local convergence to the fixed points. The Grassmann-RQI can be seen as a particular case with a constant control. In the case p = 1 (i.e. Grass(p, n) = RP^{n−1}) the Grassmann-RQI reduces to the well-known Rayleigh quotient iteration, which is an ideal shift strategy in a certain sense and has some useful global properties ([4, 11]). In our future work we want to study control systems of type (4).
In particular, we want to investigate whether the generalized Rayleigh quotient map R is (locally) an ideal choice compared with other possible shifts. Furthermore, we want to construct feedback strategies which improve the global behavior.

References

[1] P.-A. Absil, Invariant Subspace Computation: A Geometric Approach, PhD Thesis, Université de Liège (2003).
[2] P.-A. Absil, R. Mahony and R. Sepulchre, Riemannian geometry of Grassmann manifolds with a view on algorithmic computation, Acta Appl. Math., 80, No. 2 (2004), pp.
[3] P.-A. Absil, R. Mahony, R. Sepulchre and P. Van Dooren, A Grassmann-Rayleigh Quotient Iteration for Computing Invariant Subspaces, SIAM Review, 44, No. 1 (2002), pp.
[4] S. Batterson and J. Smillie, The dynamics of Rayleigh quotient iteration, SIAM J. Numer. Anal., 26 (1989), pp.
[5] S. Batterson and J. Smillie, Rayleigh quotient iteration for nonsymmetric matrices, Math. Comp., 55, No. 191 (1990), pp.
[6] J. Ferrer, M. I. García, and F. Puerta, Differential families of subspaces, Linear Algebra Appl., 199 (1994), pp.
[7] U. Helmke and J. Moore, Optimization and Dynamical Systems, Springer-Verlag, New York (1994).
[8] K. Hüper, A calculus approach to matrix eigenvalue algorithms, Habilitationsschrift, Würzburg (2002).
[9] I. C. F. Ipsen, Computing an eigenvector with inverse iteration, SIAM Rev., 39 (1997), pp.
[10] G. W. Stewart, Error and perturbation bounds for subspaces associated with certain eigenvalue problems, SIAM Review, 15, No. 4 (1973), pp.
[11] B. N. Parlett, The Rayleigh Quotient Iteration and Some Generalizations for Nonnormal Matrices, Mathematics of Computation, 28, No. 127 (1974), pp.
[12] P. Van Dooren and R. Sepulchre, Shift policies in QR-like algorithms and feedback control of self-similar flows, in Open Problems in Mathematical Systems and Control Theory (V. Blondel, E. Sontag, M. Vidyasagar and J. C. Willems, eds.), Springer, London (1999), pp.
More informationMATH 115A: SAMPLE FINAL SOLUTIONS
MATH A: SAMPLE FINAL SOLUTIONS JOE HUGHES. Let V be the set of all functions f : R R such that f( x) = f(x) for all x R. Show that V is a vector space over R under the usual addition and scalar multiplication
More informationLECTURE VII: THE JORDAN CANONICAL FORM MAT FALL 2006 PRINCETON UNIVERSITY. [See also Appendix B in the book]
LECTURE VII: THE JORDAN CANONICAL FORM MAT 204 - FALL 2006 PRINCETON UNIVERSITY ALFONSO SORRENTINO [See also Appendix B in the book] 1 Introduction In Lecture IV we have introduced the concept of eigenvalue
More informationExistence and uniqueness of solutions for a continuous-time opinion dynamics model with state-dependent connectivity
Existence and uniqueness of solutions for a continuous-time opinion dynamics model with state-dependent connectivity Vincent D. Blondel, Julien M. Hendricx and John N. Tsitsilis July 24, 2009 Abstract
More informationFINAL PROJECT TOPICS MATH 399, SPRING αx 0 x < α(1 x)
FINAL PROJECT TOPICS MATH 399, SPRING 2011 MARIUS IONESCU If you pick any of the following topics feel free to discuss with me if you need any further background that we did not discuss in class. 1. Iterations
More informationLECTURE: KOBORDISMENTHEORIE, WINTER TERM 2011/12; SUMMARY AND LITERATURE
LECTURE: KOBORDISMENTHEORIE, WINTER TERM 2011/12; SUMMARY AND LITERATURE JOHANNES EBERT 1.1. October 11th. 1. Recapitulation from differential topology Definition 1.1. Let M m, N n, be two smooth manifolds
More informationIterative methods for symmetric eigenvalue problems
s Iterative s for symmetric eigenvalue problems, PhD McMaster University School of Computational Engineering and Science February 11, 2008 s 1 The power and its variants Inverse power Rayleigh quotient
More informationOptimal Scaling of Companion Pencils for the QZ-Algorithm
Optimal Scaling of Companion Pencils for the QZ-Algorithm D Lemonnier, P Van Dooren 1 Introduction Computing roots of a monic polynomial may be done by computing the eigenvalues of the corresponding companion
More informationRank-Constrainted Optimization: A Riemannian Manifold Approach
Ran-Constrainted Optimization: A Riemannian Manifold Approach Guifang Zhou1, Wen Huang2, Kyle A. Gallivan1, Paul Van Dooren2, P.-A. Absil2 1- Florida State University - Department of Mathematics 1017 Academic
More informationB5.6 Nonlinear Systems
B5.6 Nonlinear Systems 4. Bifurcations Alain Goriely 2018 Mathematical Institute, University of Oxford Table of contents 1. Local bifurcations for vector fields 1.1 The problem 1.2 The extended centre
More informationContents. Preface for the Instructor. Preface for the Student. xvii. Acknowledgments. 1 Vector Spaces 1 1.A R n and C n 2
Contents Preface for the Instructor xi Preface for the Student xv Acknowledgments xvii 1 Vector Spaces 1 1.A R n and C n 2 Complex Numbers 2 Lists 5 F n 6 Digression on Fields 10 Exercises 1.A 11 1.B Definition
More informationREGULAR TRIPLETS IN COMPACT SYMMETRIC SPACES
REGULAR TRIPLETS IN COMPACT SYMMETRIC SPACES MAKIKO SUMI TANAKA 1. Introduction This article is based on the collaboration with Tadashi Nagano. In the first part of this article we briefly review basic
More informationChapter 1 Vector Spaces
Chapter 1 Vector Spaces Per-Olof Persson persson@berkeley.edu Department of Mathematics University of California, Berkeley Math 110 Linear Algebra Vector Spaces Definition A vector space V over a field
More informationMATH 425-Spring 2010 HOMEWORK ASSIGNMENTS
MATH 425-Spring 2010 HOMEWORK ASSIGNMENTS Instructor: Shmuel Friedland Department of Mathematics, Statistics and Computer Science email: friedlan@uic.edu Last update April 18, 2010 1 HOMEWORK ASSIGNMENT
More informationGroup Theory. 1. Show that Φ maps a conjugacy class of G into a conjugacy class of G.
Group Theory Jan 2012 #6 Prove that if G is a nonabelian group, then G/Z(G) is not cyclic. Aug 2011 #9 (Jan 2010 #5) Prove that any group of order p 2 is an abelian group. Jan 2012 #7 G is nonabelian nite
More informationON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH
ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH V. FABER, J. LIESEN, AND P. TICHÝ Abstract. Numerous algorithms in numerical linear algebra are based on the reduction of a given matrix
More informationB5.6 Nonlinear Systems
B5.6 Nonlinear Systems 1. Linear systems Alain Goriely 2018 Mathematical Institute, University of Oxford Table of contents 1. Linear systems 1.1 Differential Equations 1.2 Linear flows 1.3 Linear maps
More informationLecture 4 Eigenvalue problems
Lecture 4 Eigenvalue problems Weinan E 1,2 and Tiejun Li 2 1 Department of Mathematics, Princeton University, weinan@princeton.edu 2 School of Mathematical Sciences, Peking University, tieli@pku.edu.cn
More informationLecture Note 12: The Eigenvalue Problem
MATH 5330: Computational Methods of Linear Algebra Lecture Note 12: The Eigenvalue Problem 1 Theoretical Background Xianyi Zeng Department of Mathematical Sciences, UTEP The eigenvalue problem is a classical
More informationHomework 6 Solutions. Solution. Note {e t, te t, t 2 e t, e 2t } is linearly independent. If β = {e t, te t, t 2 e t, e 2t }, then
Homework 6 Solutions 1 Let V be the real vector space spanned by the functions e t, te t, t 2 e t, e 2t Find a Jordan canonical basis and a Jordan canonical form of T on V dened by T (f) = f Solution Note
More information08a. Operators on Hilbert spaces. 1. Boundedness, continuity, operator norms
(February 24, 2017) 08a. Operators on Hilbert spaces Paul Garrett garrett@math.umn.edu http://www.math.umn.edu/ garrett/ [This document is http://www.math.umn.edu/ garrett/m/real/notes 2016-17/08a-ops
More informationEXAM. Exam 1. Math 5316, Fall December 2, 2012
EXAM Exam Math 536, Fall 22 December 2, 22 Write all of your answers on separate sheets of paper. You can keep the exam questions. This is a takehome exam, to be worked individually. You can use your notes.
More informationOn the simplest expression of the perturbed Moore Penrose metric generalized inverse
Annals of the University of Bucharest (mathematical series) 4 (LXII) (2013), 433 446 On the simplest expression of the perturbed Moore Penrose metric generalized inverse Jianbing Cao and Yifeng Xue Communicated
More informationAlgebraic Varieties. Chapter Algebraic Varieties
Chapter 12 Algebraic Varieties 12.1 Algebraic Varieties Let K be a field, n 1 a natural number, and let f 1,..., f m K[X 1,..., X n ] be polynomials with coefficients in K. Then V = {(a 1,..., a n ) :
More informationTangent spaces, normals and extrema
Chapter 3 Tangent spaces, normals and extrema If S is a surface in 3-space, with a point a S where S looks smooth, i.e., without any fold or cusp or self-crossing, we can intuitively define the tangent
More informationMATH 304 Linear Algebra Lecture 8: Vector spaces. Subspaces.
MATH 304 Linear Algebra Lecture 8: Vector spaces. Subspaces. Linear operations on vectors Let x = (x 1, x 2,...,x n ) and y = (y 1, y 2,...,y n ) be n-dimensional vectors, and r R be a scalar. Vector sum:
More informationx 3y 2z = 6 1.2) 2x 4y 3z = 8 3x + 6y + 8z = 5 x + 3y 2z + 5t = 4 1.5) 2x + 8y z + 9t = 9 3x + 5y 12z + 17t = 7
Linear Algebra and its Applications-Lab 1 1) Use Gaussian elimination to solve the following systems x 1 + x 2 2x 3 + 4x 4 = 5 1.1) 2x 1 + 2x 2 3x 3 + x 4 = 3 3x 1 + 3x 2 4x 3 2x 4 = 1 x + y + 2z = 4 1.4)
More informationMatrix Algorithms. Volume II: Eigensystems. G. W. Stewart H1HJ1L. University of Maryland College Park, Maryland
Matrix Algorithms Volume II: Eigensystems G. W. Stewart University of Maryland College Park, Maryland H1HJ1L Society for Industrial and Applied Mathematics Philadelphia CONTENTS Algorithms Preface xv xvii
More informationCS 542G: Robustifying Newton, Constraints, Nonlinear Least Squares
CS 542G: Robustifying Newton, Constraints, Nonlinear Least Squares Robert Bridson October 29, 2008 1 Hessian Problems in Newton Last time we fixed one of plain Newton s problems by introducing line search
More informationEigenvalues and Eigenvectors
LECTURE 3 Eigenvalues and Eigenvectors Definition 3.. Let A be an n n matrix. The eigenvalue-eigenvector problem for A is the problem of finding numbers λ and vectors v R 3 such that Av = λv. If λ, v are
More informationUnbounded Convex Semialgebraic Sets as Spectrahedral Shadows
Unbounded Convex Semialgebraic Sets as Spectrahedral Shadows Shaowei Lin 9 Dec 2010 Abstract Recently, Helton and Nie [3] showed that a compact convex semialgebraic set S is a spectrahedral shadow if the
More information1 Quasi-definite matrix
1 Quasi-definite matrix The matrix H is a quasi-definite matrix, if there exists a permutation matrix P such that H qd P T H11 H HP = 1 H1, 1) H where H 11 and H + H1H 11 H 1 are positive definite. This
More informationPerturbation Theory for Self-Adjoint Operators in Krein spaces
Perturbation Theory for Self-Adjoint Operators in Krein spaces Carsten Trunk Institut für Mathematik, Technische Universität Ilmenau, Postfach 10 05 65, 98684 Ilmenau, Germany E-mail: carsten.trunk@tu-ilmenau.de
More information. Consider the linear system dx= =! = " a b # x y! : (a) For what values of a and b do solutions oscillate (i.e., do both x(t) and y(t) pass through z
Preliminary Exam { 1999 Morning Part Instructions: No calculators or crib sheets are allowed. Do as many problems as you can. Justify your answers as much as you can but very briey. 1. For positive real
More informationCalculus 2502A - Advanced Calculus I Fall : Local minima and maxima
Calculus 50A - Advanced Calculus I Fall 014 14.7: Local minima and maxima Martin Frankland November 17, 014 In these notes, we discuss the problem of finding the local minima and maxima of a function.
More informationMath 315: Linear Algebra Solutions to Assignment 7
Math 5: Linear Algebra s to Assignment 7 # Find the eigenvalues of the following matrices. (a.) 4 0 0 0 (b.) 0 0 9 5 4. (a.) The characteristic polynomial det(λi A) = (λ )(λ )(λ ), so the eigenvalues are
More information