Iterative Methods for Eigenvalues of Symmetric Matrices as Fixed Point Theorems
Student: Amanda Schaeffer
Sponsor: Wilfred M. Greenlee
December 6, 2007

1. The Power Method and the Contraction Mapping Theorem

Understanding the following derivations, definitions, and theorems may be helpful to the reader.

The Power Method. Let A be a symmetric n × n matrix with eigenvalues λ_1 > λ_2 ≥ λ_3 ≥ λ_4 ≥ ... ≥ λ_n ≥ 0 and corresponding orthonormal eigenvectors u_1, u_2, u_3, ..., u_n. We call λ_1 the dominant eigenvalue, and u_1 the dominant eigenvector. The Power Method is a basic method of iteration for computing this dominant eigenvector. The algorithm is as follows:

1. Choose a unit vector, x_0.
2. Let y_1 = A x_0.
3. Normalize, letting x_1 = y_1 / ||y_1||.
4. Repeat steps 2 and 3, letting y_k = A x_{k-1}, until x_k converges.
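The four steps above can be sketched in Python with NumPy. The matrix and starting vector below are illustrative choices, not data from the paper.

```python
import numpy as np

def power_method(A, x0, tol=1e-10, max_iter=1000):
    """Iterate y_k = A x_{k-1}, x_k = y_k / ||y_k|| until x_k converges."""
    x = x0 / np.linalg.norm(x0)               # step 1: unit starting vector
    for _ in range(max_iter):
        y = A @ x                             # step 2: multiply by A
        x_new = y / np.linalg.norm(y)         # step 3: normalize
        if np.linalg.norm(x_new - x) < tol:   # step 4: repeat until convergence
            return x_new
        x = x_new
    return x

# A symmetric matrix with dominant eigenvalue 3 and dominant eigenvector e_1.
A = np.array([[3.0, 0.0],
              [0.0, 1.0]])
x = power_method(A, np.array([1.0, 1.0]))
print(x)  # approaches the dominant eigenvector (1, 0)
```

The convergence test compares successive normalized iterates, which is appropriate here since the dominant eigenvalue is assumed positive.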
[4]

The following shows that the limit of this sequence is the dominant eigenvector, provided that (x · u_1) is nonzero. Let x ∈ R^n. Because the eigenvectors of A are orthonormal and therefore span R^n, we can write:

x = Σ_{i=1}^n α_i u_i, where α_i ∈ R.

Then, multiplying by the matrix A, we find:

A x = Σ_{i=1}^n α_i A u_i = Σ_{i=1}^n α_i λ_i u_i
A^2 x = Σ_{i=1}^n α_i λ_i A u_i = Σ_{i=1}^n α_i λ_i^2 u_i
...
A^k x = Σ_{i=1}^n α_i λ_i^k u_i.

By factoring out λ_1^k, we obtain:

A^k x = λ_1^k Σ_{i=1}^n α_i (λ_i/λ_1)^k u_i = λ_1^k α_1 u_1 + λ_1^k Σ_{i=2}^n α_i (λ_i/λ_1)^k u_i.

So since λ_1 > λ_i for all i = 2, 3, ..., n, we have that

A^k x / λ_1^k = α_1 u_1 + O((λ_2/λ_1)^k) as k → ∞.

Definition: Metric Space. A set of elements, X, is said to be a metric space if to each pair of elements u, v ∈ X there is associated a real number d(u, v), the distance between u and v, such that:

1. d(u, v) > 0 for u, v distinct;
2. d(u, u) = 0;
3. d(u, v) = d(v, u);
4. d(u, w) ≤ d(u, v) + d(v, w) (the Triangle Inequality holds). [5]
Definition: Complete Metric Space. Let X be a metric space. X is said to be complete if every Cauchy sequence in X converges to a point of X.

Definition: Contraction. Let X be a metric space. A transformation T : X → X is called a contraction if for some fixed ρ < 1, d(Tu, Tv) ≤ ρ d(u, v) for all u, v ∈ X. [5]

The Contraction Mapping Theorem. Let T be a contraction on a complete metric space X. Then there exists exactly one solution, u ∈ X, to u = T u. [5]

Note: We leave the proof of this theorem to the discussion of our specific example below.

2. A contraction for finding the dominant eigenvector

Let A be a symmetric n × n matrix with eigenvalues λ_1 > λ_2 ≥ λ_3 ≥ λ_4 ≥ ... ≥ λ_n ≥ 0 and corresponding orthonormal eigenvectors u_1, u_2, u_3, ..., u_n. Let x, y ∈ R^n. Because the eigenvectors of A are orthonormal and therefore span R^n, we can write x = Σ_{i=1}^n α_i u_i and y = Σ_{i=1}^n β_i u_i with α_i, β_i ∈ R. These can also be defined as α_i = (x · u_i), β_i = (y · u_i).

The Contraction. Define the transformation T = A/λ_1. Then we have the following:

T x = α_1 u_1 + Σ_{i>1} α_i (λ_i/λ_1) u_i
T y = β_1 u_1 + Σ_{i>1} β_i (λ_i/λ_1) u_i
T x − T y = (α_1 − β_1) u_1 + Σ_{i>1} (α_i − β_i)(λ_i/λ_1) u_i.
Now, consider ||T x − T y||. In order to have a contraction, we want some 0 < k < 1 such that ||T x − T y|| ≤ k ||x − y||. To find this, we will use ||T x − T y||^2; then we want our k such that ||T x − T y||^2 ≤ k^2 ||x − y||^2. Since the u_i's are orthonormal, we find that

||T x − T y||^2 = (α_1 − β_1)^2 + Σ_{i>1} (α_i − β_i)^2 (λ_i/λ_1)^2.

In the case that β_1 = α_1,

||T x − T y||^2 = Σ_{i>1} (α_i − β_i)^2 (λ_i/λ_1)^2 ≤ (λ_2/λ_1)^2 Σ_{i>1} (α_i − β_i)^2 = (λ_2/λ_1)^2 ||x − y||^2,

so ||T x − T y||^2 ≤ (λ_2/λ_1)^2 ||x − y||^2. Then ||T x − T y|| ≤ (λ_2/λ_1) ||x − y||. Therefore, when β_1 = α_1, we have T as a contraction with k = λ_2/λ_1.

The Fixed Point. Let x_0 ∈ R^n be our initial guess for finding an eigenvector. Define:

V = { y ∈ R^n : (y · u_1) = (x_0 · u_1) }

(i.e., V is the set of vectors such that β_1 = α_1). Then T maps V back into V. Because this set is a closed subset of R^n, and R^n is a complete metric space, V is also complete. Then by the Contraction Mapping Theorem, there is exactly one y_f ∈ V such that y_f = T y_f. It follows (from the proof below) that y_f = (x_0 · u_1) u_1 is the unique fixed point. We now prove the Contraction Mapping Theorem in our context, as mentioned in Part 1.

Proof:
(a) Uniqueness: The first thing we need to show is that the fixed point of our contraction is unique. Suppose u = T u and v = T v. Then v − u = T v − T u.
But then ||T v − T u|| ≤ (λ_2/λ_1) ||v − u||. So by substitution, ||v − u|| ≤ (λ_2/λ_1) ||v − u||. But then ||v − u|| = 0, so v = u. Thus, there is only one fixed point, y_f.

(b) Existence: Next, we want to show that there exists a fixed point at all. Again, let x_0 be our initial guess. Define x_m = T x_{m−1}. Then we have:

||x_m − x_{m+1}|| = ||T x_{m−1} − T x_m|| ≤ (λ_2/λ_1) ||x_{m−1} − x_m|| ≤ ... ≤ (λ_2/λ_1)^m ||x_0 − x_1||.

Therefore, for n > m, we see that

||x_m − x_n|| ≤ ||x_m − x_{m+1}|| + ... + ||x_{n−1} − x_n||
≤ [(λ_2/λ_1)^m + ... + (λ_2/λ_1)^{n−1}] ||x_0 − x_1||
= ||x_0 − x_1|| (λ_2/λ_1)^m [1 + (λ_2/λ_1) + ... + (λ_2/λ_1)^{n−m−1}]
≤ ||x_0 − x_1|| (λ_2/λ_1)^m / (1 − λ_2/λ_1).

Note that since 0 < λ_2/λ_1 < 1, we replaced the geometric sum with the sum of the infinite series. Now we see that {x_k} forms a Cauchy sequence, and thus converges to an element of V, since V is complete. Call the limit x. Then T is a contraction, and hence continuous, so x is the fixed point, since:

x = lim_{n→∞} x_n = lim_{n→∞} T x_{n−1} = T(lim_{n→∞} x_{n−1}) = T x.

(c) The fixed point x is y_f = (x_0 · u_1) u_1: Finally, we want to find the fixed point for our particular contraction, T = A/λ_1. Consider T((x_0 · u_1) u_1). This is (A/λ_1)(x_0 · u_1) u_1 = (x_0 · u_1) u_1. Then this is the fixed point.

The Schwartz Quotient. Because we do not know λ_1, we must find an adequate approximation. For this, we use the Schwartz quotient. Consider a positive symmetric matrix A. We define the Schwartz quotient, Λ^(m), as follows:

Λ^(m) = (A^{m+1} x_0 · A^m x_0) / (A^m x_0 · A^m x_0).

We show that this converges to λ_1, provided α_1 = (x_0 · u_1) ≠ 0.
Since λ_i/λ_1 < 1 for each i = 2, ..., n,

Λ^(m) = (A^{m+1} x_0 · A^m x_0) / (A^m x_0 · A^m x_0) = λ_1 [α_1^2 + Σ_{i=2}^n (λ_i/λ_1)^{2m+1} α_i^2] / [α_1^2 + Σ_{i=2}^n (λ_i/λ_1)^{2m} α_i^2] ≤ λ_1.

But also,

Λ^(m) = λ_1 [α_1^2 + Σ_{i=2}^n (λ_i/λ_1)^{2m+1} α_i^2] / [α_1^2 + Σ_{i=2}^n (λ_i/λ_1)^{2m} α_i^2]
≥ λ_1 α_1^2 / [α_1^2 + Σ_{i=2}^n (λ_i/λ_1)^{2m} α_i^2]
≥ λ_1 α_1^2 / [α_1^2 + (λ_2/λ_1)^{2m} Σ_{i=2}^n α_i^2]
= λ_1 α_1^2 / [α_1^2 + (λ_2/λ_1)^{2m} (1 − α_1^2)].

(Notice: α_1^2 + Σ_{i=2}^n α_i^2 = Σ_{i=1}^n α_i^2 = ||x_0||^2 = 1, since x_0 is normalized.) So we find that

λ_1 α_1^2 / [α_1^2 + (λ_2/λ_1)^{2m} (1 − α_1^2)] ≤ Λ^(m) ≤ λ_1,

and thus,

Λ^(m) = λ_1 + O((λ_2/λ_1)^{2m}).

Notice that for a negative symmetric matrix, the inequalities are reversed, though the end result remains unchanged. Also note that the asymptotic conclusion Λ^(m) = λ_1 + O((λ_2/λ_1)^{2m}) holds without the assumption that A is definite. This follows by use of a finite geometric series argument based on the explicit formula

Λ^(m) = λ_1 [α_1^2 + Σ_{i=2}^n (λ_i/λ_1)^{2m+1} α_i^2] / [α_1^2 + Σ_{i=2}^n (λ_i/λ_1)^{2m} α_i^2].
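The convergence Λ^(m) → λ_1 can be checked numerically. The diagonal test matrix below (eigenvalues 4, 2, 1, so λ_1 = 4) is an illustrative assumption, not the paper's example.

```python
import numpy as np

def schwartz_quotients(A, x0, m_max):
    """Λ^(m) = (A^{m+1} x_0 · A^m x_0) / (A^m x_0 · A^m x_0) for m = 1..m_max."""
    x0 = x0 / np.linalg.norm(x0)   # the derivation assumes ||x_0|| = 1
    quotients = []
    Amx = A @ x0                   # A^m x_0, starting at m = 1
    for _ in range(m_max):
        Am1x = A @ Amx             # A^{m+1} x_0
        quotients.append((Am1x @ Amx) / (Amx @ Amx))
        Amx = Am1x
    return quotients

A = np.diag([4.0, 2.0, 1.0])
L = schwartz_quotients(A, np.array([1.0, 1.0, 1.0]), 10)
print(L[0], L[-1])  # the quotients rise toward lambda_1 = 4
```

Reusing the stored power A^m x_0 at each step keeps the cost at one matrix-vector product per quotient.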
Then we define a new set of transformations S_m : V → V such that S_m = A/Λ^(m). By applying S_m repeatedly and normalizing, we iterate to find y_f, a procedure which converges to the dominant eigenvector, u_1, as in the case with the original mapping.

Note: There are two manners of doing this iteration. In the way we analyze below, call this Version I, we iterate the Schwartz quotient until it is satisfactorily close to the dominant eigenvalue, and then we iterate using the corresponding S_m to find the dominant eigenvector. However, in the example below, we use what we will call Version II of the iteration: we found the corresponding S_m for each iteration of the Schwartz quotient, and thus with each iteration the transformation was different. While this version of the iteration still converges to the dominant eigenvector, finding the rate of convergence is more difficult than for the first version due to the nature of this type of iteration. In the following sections, we will find the rate of convergence of Version II.

Recall:

T x_0 = (A/λ_1) x_0 = α_1 u_1 + Σ_{i>1} α_i (λ_i/λ_1) u_i.

Now we find that, using Version I:

S_m x_0 = (A/Λ^(m)) x_0 = α_1 (λ_1/Λ^(m)) u_1 + Σ_{i=2}^n α_i (λ_i/Λ^(m)) u_i

T^m x_0 − S_m^m x_0 = α_1 [1 − (λ_1/Λ^(m))^m] u_1 + Σ_{i>1} α_i [(λ_i/λ_1)^m − (λ_i/Λ^(m))^m] u_i

||T^m x_0 − S_m^m x_0||^2 = α_1^2 [1 − (λ_1/Λ^(m))^m]^2 + Σ_{i>1} α_i^2 [(λ_i/λ_1)^m − (λ_i/Λ^(m))^m]^2
= α_1^2 [1 − (λ_1/Λ^(m))^m]^2 + Σ_{i>1} α_i^2 (λ_i/λ_1)^{2m} [1 − (λ_1/Λ^(m))^m]^2.

So

||T^m x_0 − S_m^m x_0|| = O(1 − (λ_1/Λ^(m))^m).

But, recall:

Λ^(m) = λ_1 + O((λ_2/λ_1)^{2m}).
Now,

||T^m x_0 − S_m^m x_0|| = O(1 − (λ_1/(λ_1 + O((λ_2/λ_1)^{2m})))^m) = O((λ_2/λ_1)^m).

By the triangle inequality,

||S_m^m x_0 − α_1 u_1|| = ||S_m^m x_0 − T^m x_0 + T^m x_0 − α_1 u_1||
≤ ||S_m^m x_0 − T^m x_0|| + ||T^m x_0 − α_1 u_1||
= O((λ_2/λ_1)^m) + O((λ_2/λ_1)^m).

So,

||S_m^m x_0 − α_1 u_1|| = O((λ_2/λ_1)^m) = ||T^m x_0 − α_1 u_1||.

Then these transformations have the same rate of convergence.

3. Example

Consider the following 3 × 3 matrix (found in [3], Ex. 4--):

[The matrix entries are not legible in the transcription. The matrix has known eigenvalues λ_1, λ_2, λ_3 (one of them ≈ 9.470) and corresponding orthonormal eigenvectors u_1, u_2, u_3; the remaining numerical values are likewise illegible.]
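Since the example's matrix entries are not legible above, the Version II iteration can still be illustrated with a stand-in symmetric matrix; below, A has eigenvalues 4, 2, 1 and dominant eigenvector (1, 2, 1)/√6 (an assumption for the sketch, not the paper's example). Each step applies S_m = A/Λ^(m) with the current Schwartz quotient, following the Version II description.

```python
import numpy as np

def version_ii(A, x0, steps):
    """Version II: x_m = S_m x_{m-1} with S_m = A / Λ^(m), where Λ^(m) is the
    m-th Schwartz quotient built from powers of the starting vector x_0."""
    x0 = x0 / np.linalg.norm(x0)
    x = x0.copy()
    Amx = A @ x0                                # A^m x_0, starting at m = 1
    history = []
    for _ in range(steps):
        Am1x = A @ Amx                          # A^{m+1} x_0
        quotient = (Am1x @ Amx) / (Amx @ Amx)   # Λ^(m)
        x = (A @ x) / quotient                  # x_m = S_m x_{m-1}
        history.append((quotient, x.copy()))
        Amx = Am1x
    return history

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])   # eigenvalues 4, 2, 1
lam, v = version_ii(A, np.array([1.0, 1.0, 1.0]), 15)[-1]
print(lam)                    # approaches lambda_1 = 4
print(v / np.linalg.norm(v))  # direction approaches u_1 = (1, 2, 1)/sqrt(6)
```

Because every S_k is a scalar multiple of A, the m-th iterate is A^m x_0 / (Λ^(1) ⋯ Λ^(m)), matching the Section 5 formula.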
Using the Schwartz quotient to find the approximate eigenvalue, Λ^(m), and Version II of our fixed point mapping of the power method to find the approximate eigenvector, v_m, and beginning with an initial guess x_0 = (1, 1, 1), we find:

[Table of m, Λ^(m), and v_m over successive iterations; the numerical entries are not legible in the transcription.]

4. Monotonicity of Schwartz Quotients

In this section, we alter a proof found in Collatz [1] to find that in the two cases of a symmetric positive definite matrix or negative definite matrix, the sequence of Schwartz quotients, {Λ^(m)}, where

Λ^(m) = (A^{m+1} x_0 · A^m x_0) / (A^m x_0 · A^m x_0),

is monotone increasing or decreasing, respectively. This is also the behavior observed in the preceding example, with an indefinite matrix.

Case 1: The positive definite symmetric matrix. Consider a matrix, A, which is positive definite. The m-th term in the sequence of Schwartz quotients is Λ^(m) = (A^{m+1} x_0 · A^m x_0) / (A^m x_0 · A^m x_0). Define a_{2m+1} = (A^{m+1} x_0 · A^m x_0) and a_{2m} = (A^m x_0 · A^m x_0). Then Λ^(m) = a_{2m+1}/a_{2m}.

Note: In a similar manner, we define a_{2m−1} as (A^m x_0 · A^{m−1} x_0), etc.
Then, let Q_1 = a_{2m+1} A^m x_0 − a_{2m} A^{m+1} x_0. The value ||Q_1||^2 is obviously nonnegative. But then

||Q_1||^2 = (a_{2m+1} A^m x_0 − a_{2m} A^{m+1} x_0) · (a_{2m+1} A^m x_0 − a_{2m} A^{m+1} x_0).

By the bilinearity of the inner product, this is

= (a_{2m+1} A^m x_0 · a_{2m+1} A^m x_0) − (a_{2m+1} A^m x_0 · a_{2m} A^{m+1} x_0) − (a_{2m} A^{m+1} x_0 · a_{2m+1} A^m x_0) + (a_{2m} A^{m+1} x_0 · a_{2m} A^{m+1} x_0)
= a_{2m+1}^2 a_{2m} − 2 a_{2m+1}^2 a_{2m} + a_{2m}^2 a_{2m+2}
= a_{2m}^2 a_{2m+2} − a_{2m+1}^2 a_{2m}.

Using this result, we find that

a_{2m}^2 a_{2m+2} − a_{2m+1}^2 a_{2m} ≥ 0,

and hence

a_{2m} a_{2m+2} ≥ a_{2m+1}^2.

Dividing by a_{2m} a_{2m+1}, which is positive since we assume A positive definite and thus each a_k > 0, this gives us:

a_{2m+2}/a_{2m+1} − a_{2m+1}/a_{2m} ≥ 0.    (1)

Now define Q_2 = (a_{2m+1} A^m x_0 − a_{2m} A^{m+1} x_0) · (a_{2m+1} A^{m−1} x_0 − a_{2m} A^m x_0). Writing z = a_{2m+1} A^{m−1} x_0 − a_{2m} A^m x_0, we have Q_2 = (A z · z); since A is positive, we know Q_2 is nonnegative. But then,

Q_2 = a_{2m+1}^2 a_{2m−1} − a_{2m+1} a_{2m}^2 − a_{2m} a_{2m+1} a_{2m} + a_{2m}^2 a_{2m+1} = a_{2m+1} (a_{2m+1} a_{2m−1} − a_{2m}^2).

So Q_2 ≥ 0 gives a_{2m+1} a_{2m−1} − a_{2m}^2 ≥ 0, and thus

a_{2m+1}/a_{2m} − a_{2m}/a_{2m−1} ≥ 0.    (2)

Then by (1), together with (2) applied with m replaced by m + 1, we see that

Λ^(m) = a_{2m+1}/a_{2m} ≤ a_{2m+2}/a_{2m+1} ≤ a_{2m+3}/a_{2m+2} = Λ^(m+1).

So we have shown that for a positive definite matrix A, the sequence of Schwartz quotients is monotone increasing.
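This monotone increase is easy to observe numerically; the randomly generated positive definite matrix below is an illustrative assumption, not an example from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = B.T @ B + 5.0 * np.eye(5)     # symmetric positive definite
x0 = rng.standard_normal(5)
x0 /= np.linalg.norm(x0)

# Schwartz quotients Λ^(m) = (A^{m+1} x_0 · A^m x_0) / (A^m x_0 · A^m x_0)
quotients = []
Amx = A @ x0
for _ in range(20):
    Am1x = A @ Amx
    quotients.append((Am1x @ Amx) / (Amx @ Amx))
    Amx = Am1x

lam1 = np.linalg.eigvalsh(A)[-1]  # largest eigenvalue, for comparison
assert all(b >= a - 1e-9 for a, b in zip(quotients, quotients[1:]))  # monotone increasing
assert quotients[-1] <= lam1 + 1e-9                                  # bounded above by lambda_1
print(quotients[0], quotients[-1], lam1)
```

The small tolerances in the assertions allow for floating-point rounding once consecutive quotients are nearly equal.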
Case 2: The negative definite symmetric matrix. An analogous result can easily be seen for a negative definite matrix, where the ≥ in (1) and (2) are replaced with ≤, since A is negative. Thus we find that in this case, the sequence of Schwartz quotients is monotone decreasing.

Case 3: The indefinite symmetric matrix. Unfortunately, we have yet to show a similar result for the case of the indefinite matrix. However, we know that the Schwartz quotients still converge to the dominant eigenvalue, and the mapping converges in a similar way as in the first two cases, which we now show for Version II.

5. Convergence of the Fixed Point Mapping, Version II

Suppose A is a definite matrix and x_0 is our initial guess. Note that, using Version II of our fixed point mapping,

S_1 x_0 = (A/Λ^(1)) x_0 = α_1 (λ_1/Λ^(1)) u_1 + Σ_{i=2}^n α_i (λ_i/Λ^(1)) u_i.

If we let this equal x_1, then in the next iteration,

x_2 = S_2 x_1 = (A/Λ^(2)) x_1 = α_1 (λ_1^2/(Λ^(1) Λ^(2))) u_1 + Σ_{i=2}^n α_i (λ_i^2/(Λ^(1) Λ^(2))) u_i.

Continuing in this manner, we arrive at the m-th iteration,

S_m ⋯ S_1 x_0 = α_1 (λ_1^m/(Λ^(1) Λ^(2) ⋯ Λ^(m))) u_1 + Σ_{i=2}^n α_i (λ_i^m/(Λ^(1) Λ^(2) ⋯ Λ^(m))) u_i.

Now,

||Σ_{i=2}^n α_i (λ_i^m/(Λ^(1) Λ^(2) ⋯ Λ^(m))) u_i||^2 = Σ_{i=2}^n (λ_i^{2m}/(Λ^(1) Λ^(2) ⋯ Λ^(m))^2) α_i^2
≤ (λ_2^{2m}/(Λ^(1) Λ^(2) ⋯ Λ^(m))^2) (1 − α_1^2).

Now, let k be such that λ_2 < k < λ_1. Because Λ^(m) converges to λ_1, there exists some index, M, such that for all m ≥ M,

Λ^(m) > k > λ_2.

Then for m > M,

λ_2^m / (Λ^(1) Λ^(2) ⋯ Λ^(m)) = (λ_2^M / (Λ^(1) ⋯ Λ^(M))) (λ_2^{m−M} / (Λ^(M+1) ⋯ Λ^(m)))
≤ (λ_2^M / (Λ^(1) ⋯ Λ^(M))) (λ_2/k)^{m−M}
= (λ_2^M / (Λ^(1) ⋯ Λ^(M))) (k/λ_2)^M (λ_2/k)^m
= O((λ_2/k)^m) as m → ∞,

since (λ_2^M / (Λ^(1) ⋯ Λ^(M))) (k/λ_2)^M is independent of m > M. So we see that S_m ⋯ S_1 x_0 approaches a multiple of u_1, as does the usual power method.

6. Applying Our Contraction to the Shifted Inverse Power Method

The Shifted Inverse Power Method. The Shifted Inverse Power Method is an alteration to the Power Method which finds the eigenvalue of a matrix closest to a chosen value rather than finding the dominant eigenvalue. The basic idea is to choose a value, q, and rather than using the original matrix, A, find (A − qI)^{−1} and apply the Power Method to this shifted inverse matrix. The matrix (A − qI)^{−1} has eigenvalues 1/(λ_1 − q), 1/(λ_2 − q), ..., 1/(λ_n − q) and corresponding eigenvectors u_1, ..., u_n, where the λ_i's are again the eigenvalues of A, λ_1 > λ_2 ≥ λ_3 ≥ λ_4 ≥ ... ≥ λ_n > 0, and the u_i's are the corresponding
orthonormal eigenvectors to the λ_i's in A. As in the original case, the Power Method will approximate the dominant eigenvalue of (A − qI)^{−1}, which is 1/(λ_j − q), where λ_j is the closest eigenvalue of A to q.

Our Contraction. The mapping used in this case is completely analogous to the case of the original Power Method. We define T_inv by:

T_inv x = (λ_j − q)(A − qI)^{−1} x,

where again λ_j is the closest eigenvalue of A to q. For any x, we again have that x = Σ_{i=1}^n α_i u_i since the eigenvectors span R^n. Then we have that:

T_inv x = (λ_j − q) Σ_{i=1}^n (1/(λ_i − q)) α_i u_i
= ((λ_j − q)/(λ_j − q)) α_j u_j + Σ_{i≠j} ((λ_j − q)/(λ_i − q)) α_i u_i
= α_j u_j + Σ_{i≠j} ((λ_j − q)/(λ_i − q)) α_i u_i.

Taking the m-th iteration,

T_inv^m x = α_j u_j + Σ_{i≠j} ((λ_j − q)/(λ_i − q))^m α_i u_i.

Since |λ_j − q| < |λ_i − q| for all i ≠ j, we have that T_inv^m x → α_j u_j as m → ∞.

Notice that if we let (A − qI)^{−1} = B, then T_inv = B/γ_1, where γ_1 is the dominant eigenvalue of B. Then T_inv is equivalent to the original mapping, T, for the matrix B. Clearly, then, we can take the Schwartz quotients with respect to B = (A − qI)^{−1}, approximating the eigenvalues 1/(λ_i − q). Call the m-th Schwartz quotient Λ_B^(m). Now, with the m-th iteration, we make T_inv = B/Λ_B^(m). This is completely analogous to the mapping for the original power method, and thus has convergence

O(((1/(λ_k − q)) / (1/(λ_j − q)))^m) = O((|λ_j − q| / |λ_k − q|)^m),

where λ_j is the closest eigenvalue of A to q and λ_k is the next closest.
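A sketch of the shifted iteration in Python (the diagonal matrix and shift are illustrative assumptions): rather than forming (A − qI)^{−1} explicitly, each step solves the shifted linear system, and normalization stands in for the (λ_j − q) scaling used in T_inv above.

```python
import numpy as np

def shifted_inverse_power(A, q, x0, iters=50):
    """Power method applied to (A - qI)^{-1}: converges to the eigenvector
    of A whose eigenvalue is closest to the shift q."""
    shifted = A - q * np.eye(A.shape[0])
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        y = np.linalg.solve(shifted, x)   # y = (A - qI)^{-1} x, no explicit inverse
        x = y / np.linalg.norm(y)
    return x, x @ A @ x                   # Rayleigh quotient recovers an eigenvalue of A

A = np.diag([1.0, 3.0, 7.0])
v, lam = shifted_inverse_power(A, q=2.9, x0=np.array([1.0, 1.0, 1.0]))
print(lam)  # approaches 3, the eigenvalue of A closest to q = 2.9
```

The closer q sits to λ_j relative to the other eigenvalues, the smaller the ratio |λ_j − q|/|λ_k − q| and the faster the convergence.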
References

1. L. Collatz, Eigenwertprobleme und ihre Numerische Behandlung, Chelsea, New York, NY, 1948.
2. N. J. Higham, Review of Spectra and Pseudospectra: The Behavior of Nonnormal Matrices and Operators, by L. N. Trefethen and M. Embree, Princeton University Press, Princeton, NJ, 2005; in Bull. AMS (44), 2007.
3. A. N. Langville and C. D. Meyer, Google's PageRank and Beyond, Princeton University Press, Princeton, NJ, 2006.
4. B. N. Parlett, The Symmetric Eigenvalue Problem, Prentice Hall, Englewood Cliffs, NJ, 1980.
5. I. Stakgold, Green's Functions and Boundary Value Problems, Wiley-Interscience, New York, NY, 1979.
6. G. W. Stewart, Introduction to Matrix Computations, Academic Press, New York, NY, 1973.
More information1 Math 241A-B Homework Problem List for F2015 and W2016
1 Math 241A-B Homework Problem List for F2015 W2016 1.1 Homework 1. Due Wednesday, October 7, 2015 Notation 1.1 Let U be any set, g be a positive function on U, Y be a normed space. For any f : U Y let
More information1 9/5 Matrices, vectors, and their applications
1 9/5 Matrices, vectors, and their applications Algebra: study of objects and operations on them. Linear algebra: object: matrices and vectors. operations: addition, multiplication etc. Algorithms/Geometric
More informationComputing tomographic resolution matrices using Arnoldi s iterative inversion algorithm
Stanford Exploration Project, Report 82, May 11, 2001, pages 1 176 Computing tomographic resolution matrices using Arnoldi s iterative inversion algorithm James G. Berryman 1 ABSTRACT Resolution matrices
More informationReview and problem list for Applied Math I
Review and problem list for Applied Math I (This is a first version of a serious review sheet; it may contain errors and it certainly omits a number of topic which were covered in the course. Let me know
More information(v, w) = arccos( < v, w >
MA322 F all206 Notes on Inner Products Notes on Chapter 6 Inner product. Given a real vector space V, an inner product is defined to be a bilinear map F : V V R such that the following holds: Commutativity:
More informationMath 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination
Math 0, Winter 07 Final Exam Review Chapter. Matrices and Gaussian Elimination { x + x =,. Different forms of a system of linear equations. Example: The x + 4x = 4. [ ] [ ] [ ] vector form (or the column
More informationU.C. Berkeley CS294: Spectral Methods and Expanders Handout 11 Luca Trevisan February 29, 2016
U.C. Berkeley CS294: Spectral Methods and Expanders Handout Luca Trevisan February 29, 206 Lecture : ARV In which we introduce semi-definite programming and a semi-definite programming relaxation of sparsest
More informationChapter 3 Transformations
Chapter 3 Transformations An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Linear Transformations A function is called a linear transformation if 1. for every and 2. for every If we fix the bases
More informationExtreme Values and Positive/ Negative Definite Matrix Conditions
Extreme Values and Positive/ Negative Definite Matrix Conditions James K. Peterson Department of Biological Sciences and Department of Mathematical Sciences Clemson University November 8, 016 Outline 1
More information1. What is the determinant of the following matrix? a 1 a 2 4a 3 2a 2 b 1 b 2 4b 3 2b c 1. = 4, then det
What is the determinant of the following matrix? 3 4 3 4 3 4 4 3 A 0 B 8 C 55 D 0 E 60 If det a a a 3 b b b 3 c c c 3 = 4, then det a a 4a 3 a b b 4b 3 b c c c 3 c = A 8 B 6 C 4 D E 3 Let A be an n n matrix
More informationMathematical foundations - linear algebra
Mathematical foundations - linear algebra Andrea Passerini passerini@disi.unitn.it Machine Learning Vector space Definition (over reals) A set X is called a vector space over IR if addition and scalar
More informationInner Product Spaces 6.1 Length and Dot Product in R n
Inner Product Spaces 6.1 Length and Dot Product in R n Summer 2017 Goals We imitate the concept of length and angle between two vectors in R 2, R 3 to define the same in the n space R n. Main topics are:
More informationLinear Algebra. Shan-Hung Wu. Department of Computer Science, National Tsing Hua University, Taiwan. Large-Scale ML, Fall 2016
Linear Algebra Shan-Hung Wu shwu@cs.nthu.edu.tw Department of Computer Science, National Tsing Hua University, Taiwan Large-Scale ML, Fall 2016 Shan-Hung Wu (CS, NTHU) Linear Algebra Large-Scale ML, Fall
More informationVector spaces. DS-GA 1013 / MATH-GA 2824 Optimization-based Data Analysis.
Vector spaces DS-GA 1013 / MATH-GA 2824 Optimization-based Data Analysis http://www.cims.nyu.edu/~cfgranda/pages/obda_fall17/index.html Carlos Fernandez-Granda Vector space Consists of: A set V A scalar
More informationFall TMA4145 Linear Methods. Exercise set Given the matrix 1 2
Norwegian University of Science and Technology Department of Mathematical Sciences TMA445 Linear Methods Fall 07 Exercise set Please justify your answers! The most important part is how you arrive at an
More informationMATHEMATICS COMPREHENSIVE EXAM: IN-CLASS COMPONENT
MATHEMATICS COMPREHENSIVE EXAM: IN-CLASS COMPONENT The following is the list of questions for the oral exam. At the same time, these questions represent all topics for the written exam. The procedure for
More informationOn linear and non-linear equations. (Sect. 1.6).
On linear and non-linear equations. (Sect. 1.6). Review: Linear differential equations. Non-linear differential equations. The Picard-Lindelöf Theorem. Properties of solutions to non-linear ODE. The Proof
More information(x, y) = d(x, y) = x y.
1 Euclidean geometry 1.1 Euclidean space Our story begins with a geometry which will be familiar to all readers, namely the geometry of Euclidean space. In this first chapter we study the Euclidean distance
More informationBasic Calculus Review
Basic Calculus Review Lorenzo Rosasco ISML Mod. 2 - Machine Learning Vector Spaces Functionals and Operators (Matrices) Vector Space A vector space is a set V with binary operations +: V V V and : R V
More informationORIE 6334 Spectral Graph Theory September 22, Lecture 11
ORIE 6334 Spectral Graph Theory September, 06 Lecturer: David P. Williamson Lecture Scribe: Pu Yang In today s lecture we will focus on discrete time random walks on undirected graphs. Specifically, we
More information2. Matrix Algebra and Random Vectors
2. Matrix Algebra and Random Vectors 2.1 Introduction Multivariate data can be conveniently display as array of numbers. In general, a rectangular array of numbers with, for instance, n rows and p columns
More informationSPECTRAL THEOREM FOR COMPACT SELF-ADJOINT OPERATORS
SPECTRAL THEOREM FOR COMPACT SELF-ADJOINT OPERATORS G. RAMESH Contents Introduction 1 1. Bounded Operators 1 1.3. Examples 3 2. Compact Operators 5 2.1. Properties 6 3. The Spectral Theorem 9 3.3. Self-adjoint
More informationFinding eigenvalues for matrices acting on subspaces
Finding eigenvalues for matrices acting on subspaces Jakeniah Christiansen Department of Mathematics and Statistics Calvin College Grand Rapids, MI 49546 Faculty advisor: Prof Todd Kapitula Department
More information18.06SC Final Exam Solutions
18.06SC Final Exam Solutions 1 (4+7=11 pts.) Suppose A is 3 by 4, and Ax = 0 has exactly 2 special solutions: 1 2 x 1 = 1 and x 2 = 1 1 0 0 1 (a) Remembering that A is 3 by 4, find its row reduced echelon
More informationFinite-dimensional spaces. C n is the space of n-tuples x = (x 1,..., x n ) of complex numbers. It is a Hilbert space with the inner product
Chapter 4 Hilbert Spaces 4.1 Inner Product Spaces Inner Product Space. A complex vector space E is called an inner product space (or a pre-hilbert space, or a unitary space) if there is a mapping (, )
More informationAn introduction to some aspects of functional analysis
An introduction to some aspects of functional analysis Stephen Semmes Rice University Abstract These informal notes deal with some very basic objects in functional analysis, including norms and seminorms
More information1. Foundations of Numerics from Advanced Mathematics. Linear Algebra
Foundations of Numerics from Advanced Mathematics Linear Algebra Linear Algebra, October 23, 22 Linear Algebra Mathematical Structures a mathematical structure consists of one or several sets and one or
More informationTypical Problem: Compute.
Math 2040 Chapter 6 Orhtogonality and Least Squares 6.1 and some of 6.7: Inner Product, Length and Orthogonality. Definition: If x, y R n, then x y = x 1 y 1 +... + x n y n is the dot product of x and
More informationMa/CS 6b Class 23: Eigenvalues in Regular Graphs
Ma/CS 6b Class 3: Eigenvalues in Regular Graphs By Adam Sheffer Recall: The Spectrum of a Graph Consider a graph G = V, E and let A be the adjacency matrix of G. The eigenvalues of G are the eigenvalues
More informationIntertibility and spectrum of the multiplication operator on the space of square-summable sequences
Intertibility and spectrum of the multiplication operator on the space of square-summable sequences Objectives Establish an invertibility criterion and calculate the spectrum of the multiplication operator
More informationMath Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88
Math Camp 2010 Lecture 4: Linear Algebra Xiao Yu Wang MIT Aug 2010 Xiao Yu Wang (MIT) Math Camp 2010 08/10 1 / 88 Linear Algebra Game Plan Vector Spaces Linear Transformations and Matrices Determinant
More information