Solving Systems of Linear Differential Equations with Real Eigenvalues


David Allen
University of Kentucky
February 18, 2013

1 Systems with Real Eigenvalues

This section shows how to find solutions to linear systems of differential equations when the eigenvalues of the system matrix are all real. Finding solutions when there are complex eigenvalues is considerably more difficult. But in an instructional setting, most of the concepts can be conveyed within the real eigenvalue context.

A Quasi-triangular Matrix

A real matrix that is lower block-triangular, with each diagonal block being either a $1 \times 1$ matrix or a $2 \times 2$ matrix with complex eigenvalues, is said to be a lower quasi-triangular matrix. A schematic representation is
$$\begin{pmatrix}
A_{11} & & & & \\
A_{21} & A_{22} & & & \\
A_{31} & A_{32} & A_{33} & & \\
\vdots & \vdots & \vdots & \ddots & \\
A_{n1} & A_{n2} & A_{n3} & \cdots & A_{nn}
\end{pmatrix}$$

An Existence Theorem

The following is Theorem 3.9 in Stewart [1, page 285].

Fact 1.1. Let $A \in \mathbb{R}^{n \times n}$. Then there exists an orthogonal matrix $R$ such that $R^t A R$ is quasi-triangular. Moreover, $R$ may be chosen so that any $2 \times 2$ diagonal block of $R^t A R$ has only complex eigenvalues (which must therefore be conjugates).

This decomposition is called the real Schur form. When all eigenvalues of $A$ are real (as assumed in this presentation), Fact 1.1 implies $R^t A R$ is triangular.
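Numerically, the real Schur factorization is available in standard linear algebra libraries. The sketch below (Python with NumPy/SciPy, chosen here only for illustration; the matrix values are hypothetical) computes the usual upper quasi-triangular Schur factor and converts it to the lower quasi-triangular form used in these notes by reversing the ordering of rows and columns.

```python
import numpy as np
from scipy.linalg import schur

# Hypothetical example matrix; any real square matrix will do.
A = np.array([[1.0,  2.0,  0.0],
              [0.5, -1.0,  1.0],
              [0.0,  3.0, -2.0]])

T, Z = schur(A, output='real')   # A = Z @ T @ Z.T with T upper quasi-triangular
P = np.eye(A.shape[0])[::-1]     # reversal permutation (orthogonal and symmetric)
R = Z @ P                        # still orthogonal
L = R.T @ A @ R                  # equals P @ T @ P: lower quasi-triangular
```

When all eigenvalues of $A$ are real, $L$ is lower triangular, which is the case assumed in the remainder of the presentation.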

A Transformation to Lower Triangular Form

The system to be solved is
$$\dot{x}(t) = A\,x(t) \qquad (1)$$
with $x(0)$ given. If $A$ in (1) is not lower triangular, by Fact 1.1 one may compute an orthogonal matrix $R$ such that $R^t A R$ is lower triangular. The algorithm for finding $R$ is far too complicated to present in this class; it is an intermediate step in solving non-symmetric eigensystems. Refer to the collection of papers in Wilkinson and Reinsch [2].

Recasting to a Triangular System

The system (1) is recast as
$$\dot{x}(t) = A\,x(t)$$
$$\frac{d}{dt}\bigl(R^t x(t)\bigr) = (R^t A R)\bigl(R^t x(t)\bigr)$$
$$\dot{y}(t) = \tilde{A}\,y(t)$$
where $y(t) = R^t x(t)$ and $\tilde{A} = R^t A R$. The algorithm presented in the proof of Fact 1.2, which follows, shows how to solve the triangular system. The solution is then back-transformed: $x(t) = R\,y(t)$.

Solving the Triangular System

Fact 1.2. Assume a lower triangular matrix $A$ has real elements. The eigenvalues of $A$ are its diagonal elements, so let $\lambda_i = a_{ii}$, $i = 1, \dots, n$. Construct a vector $e(t)$ such that
$$e_i(t) = t^{k_i} \exp(\lambda_i t),$$
where $k_i$ is the number of occurrences of $\lambda_j = \lambda_i$ for $j = 1, \dots, i-1$. Consider the equations
$$\dot{x}(t) = A\,x(t) \qquad (2)$$
with $x(0)$ given. The solution of (2) is of the form
$$x(t) = B\,e(t) \qquad (3)$$
where $B$ is a lower triangular matrix.

Construction of e(t) Illustrated

To illustrate the construction of $e(t)$, suppose the diagonal elements of $A$, the eigenvalues of $A$, are 0, -2, -3, -2, -3. Then
$$e(t) = \begin{pmatrix} 1 \\ \exp(-2t) \\ \exp(-3t) \\ t\exp(-2t) \\ t\exp(-3t) \end{pmatrix}.$$
Consider $i = 4$: $\lambda_4 = -2$, and $-2$ has occurred one time previously, hence $k_4 = 1$.
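The bookkeeping for $k_i$ and $e(t)$ is easy to automate. A minimal sketch (Python/NumPy; the names are illustrative, not from the slides):

```python
import numpy as np

lam = np.array([0.0, -2.0, -3.0, -2.0, -3.0])   # diagonal of A, i.e. the eigenvalues
k = np.array([int(np.sum(lam[:i] == lam[i])) for i in range(len(lam))])
# k is [0, 0, 0, 1, 1]: -2 and -3 have each occurred once before their second appearance

def e(t):
    # e_i(t) = t**k_i * exp(lam_i * t); here the fourth entry is t * exp(-2t), as above
    return t ** k * np.exp(lam * t)
```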

The Proof

The proof of Fact 1.2 is constructive in that it gives the recipe for computing $B$. The proof is by induction. Begin by initializing all elements of $B$ to zero. For $i = 1$ let $b_{1,1} = x_1(0)$. Now, with $i > 1$, assume
$$x_l(t) = \sum_{j=1}^{l} b_{l,j}\, e_j(t) \quad \text{for } l = 1, \dots, i-1.$$
The $i$th equation is
$$\dot{x}_i(t) - \lambda_i x_i(t) = \sum_{j=1}^{i-1} a_{i,j}\, x_j(t) = \sum_{j=1}^{i-1} c_j\, e_j(t), \qquad (4)$$
where $c_j = \sum_{l=1}^{i-1} a_{i,l}\, b_{l,j}$.

Multiply Equation (4) by $\exp(-\lambda_i t)$:
$$\exp(-\lambda_i t)\bigl(\dot{x}_i(t) - \lambda_i x_i(t)\bigr) = \sum_{j=1}^{i-1} c_j \exp(-\lambda_i t)\, e_j(t). \qquad (5)$$
Substitute the definition of $e_j(t)$ into Equation (5) and integrate both sides:
$$\exp(-\lambda_i t)\, x_i(t) = \sum_{j=1}^{i-1} c_j \int \exp\bigl((\lambda_j - \lambda_i)t\bigr)\, t^{k_j}\, dt + d, \qquad (6)$$
where $d$ is a constant of integration.

Multiply Equation (6) by $\exp(\lambda_i t)$:
$$x_i(t) = \sum_{j=1}^{i-1} c_j \exp(\lambda_i t) \int \exp\bigl((\lambda_j - \lambda_i)t\bigr)\, t^{k_j}\, dt + d \exp(\lambda_i t). \qquad (7)$$
The integral in each term of Equation (7) is to be evaluated. It takes a different form depending on whether or not $\lambda_j = \lambda_i$.

Case where $\lambda_j = \lambda_i$

When $\lambda_j = \lambda_i$, the $j$th term in Equation (7) evaluates to
$$\frac{c_j}{k_j + 1}\, t^{k_j + 1} \exp(\lambda_i t).$$
Let $l$ be the index of the next term for which $l > j$ and $\lambda_l = \lambda_j$. Add $c_j/(k_j + 1)$ to $b_{i,l}$.

Case where $\lambda_j \neq \lambda_i$

When $\lambda_j \neq \lambda_i$, the $j$th term in Equation (7) evaluates to
$$-c_j \sum_{h=0}^{k_j} \frac{k_j!}{(k_j - h)!\,(\lambda_i - \lambda_j)^{h+1}}\, t^{k_j - h} \exp(\lambda_j t). \qquad (8)$$
With $h = 0$, add $-c_j/(\lambda_i - \lambda_j)$ to $b_{i,j}$. If $k_j > 0$, set $l = j$. For each $h > 0$, find the next smaller value of $l$ such that $\lambda_l = \lambda_j$, and add
$$\frac{-c_j\, k_j!}{(k_j - h)!\,(\lambda_i - \lambda_j)^{h+1}}$$
to $b_{i,l}$.

Finding the value of d

Let $l$ represent the index of the first term where $\lambda_l = \lambda_i$. Let
$$c = \sum_{j=1}^{i} \mathbb{1}(k_j = 0)\, b_{i,j},$$
where $\mathbb{1}(\cdot)$ is the indicator function, and add $x_i(0) - c$ to $b_{i,l}$. The proof of Fact 1.2 is complete.
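The constructive proof translates directly into code. The following is a minimal sketch (Python/NumPy; the function name and interface are my own, not from the slides) that builds $B$ row by row exactly as above, detecting repeated eigenvalues by exact equality of diagonal entries and writing the signs out explicitly.

```python
import numpy as np
from math import factorial

def solve_lower_triangular(A, x0):
    """Return (lam, k, B) such that x(t) = B @ e(t), e_i(t) = t**k[i] * exp(lam[i]*t),
    solves x'(t) = A x(t) for a real lower triangular A and initial value x0 = x(0).
    Sketch of the constructive proof; repeated eigenvalues must be exactly equal."""
    A = np.asarray(A, dtype=float)
    x0 = np.asarray(x0, dtype=float)
    n = A.shape[0]
    lam = np.diag(A).copy()
    # k[i] = number of earlier diagonal entries equal to lam[i]
    k = np.array([int(np.sum(lam[:i] == lam[i])) for i in range(n)])

    B = np.zeros((n, n))
    B[0, 0] = x0[0]
    for i in range(1, n):
        # c[j]: coefficient of e_j(t) on the right-hand side of equation (4)
        c = [sum(A[i, l] * B[l, j] for l in range(j, i)) for j in range(i)]
        for j in range(i):
            if c[j] == 0.0:
                continue
            if lam[j] == lam[i]:
                # integrating t**k[j] gives a t**(k[j]+1)*exp(lam[i]*t) term, which
                # belongs to the next later occurrence of this eigenvalue
                l = j + 1
                while lam[l] != lam[j]:
                    l += 1
                B[i, l] += c[j] / (k[j] + 1)
            else:
                # integration by parts gives t**(k[j]-h)*exp(lam[j]*t) terms:
                # h = 0 stays at index j, each h > 0 moves to the previous
                # occurrence of lam[j]
                l = j
                for h in range(k[j] + 1):
                    if h > 0:
                        l -= 1
                        while lam[l] != lam[j]:
                            l -= 1
                    coef = (c[j] * (-1.0) ** h * factorial(k[j])
                            / (factorial(k[j] - h) * (lam[j] - lam[i]) ** (h + 1)))
                    B[i, l] += coef
        # choose the constant of integration d so that x_i(0) is matched:
        # only basis functions with k == 0 are nonzero at t = 0
        first = int(np.argmax(lam[: i + 1] == lam[i]))
        c0 = sum(B[i, j] for j in range(i + 1) if k[j] == 0)
        B[i, first] += x0[i] - c0
    return lam, k, B
```

Given the returned pieces, $x(t)$ is evaluated as `B @ (t**k * np.exp(lam * t))`. Note that nearly equal but unequal diagonal entries make the divisions by $\lambda_j - \lambda_i$ ill-conditioned, so in practice ties should be made exact before calling such a routine.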

A Numerical Illustration

The transformation to triangular form is illustrated numerically here. Consider a kinetic diagram with three compartments, GI tract, Plasma, and Other, connected by the transfer rates $\theta_1$, $\theta_2$, $\theta_3$, and $\theta_4$.

The System Matrix

With $\theta_1 = 4$, $\theta_2 = 2$, $\theta_3 = 3$, and $\theta_4 = 1$, the system matrix is $A$. The initial conditions are $x(0) = (100, 0, 0)^t$.

The Orthogonal Matrix

The orthogonal matrix that transforms $A$ to Schur form is $R$. The Schur matrix is $\tilde{A} = R^t A R$, and $y(0) = R^t x(0)$.

Solution of the Transformed System

Apply the algorithm presented above. The solution of the transformed system is
$$y(t) = B\,e(t),$$
where $B$ is the lower triangular matrix produced by the algorithm and the entries of $e(t)$ are exponentials in $t$ (the first being $\exp(-4t)$).

Solution to the Original System

The solution of the original system is
$$x(t) = R\,y(t).$$
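Because the numeric matrices on the preceding slides are not reproduced here, the whole pipeline (transform to lower triangular form, solve, back-transform) can instead be checked on a hypothetical matrix with real eigenvalues, comparing against the matrix exponential. This sketch reuses `solve_lower_triangular` from the proof section above.

```python
import numpy as np
from scipy.linalg import schur, expm, qr

rng = np.random.default_rng(1)
Ltrue = np.tril(rng.normal(size=(4, 4)))   # lower triangular, so its eigenvalues are real
Q, _ = qr(rng.normal(size=(4, 4)))         # a random orthogonal matrix
A = Q @ Ltrue @ Q.T                        # similar to Ltrue, hence real eigenvalues
x0 = rng.normal(size=4)

T, Z = schur(A, output='real')             # upper triangular here (real eigenvalues)
P = np.eye(4)[::-1]
R = Z @ P                                  # orthogonal; R.T @ A @ R is lower triangular
lam, k, B = solve_lower_triangular(R.T @ A @ R, R.T @ x0)

for t in (0.0, 0.5, 2.0):
    x_t = R @ (B @ (t ** k * np.exp(lam * t)))        # back-transformed solution x(t)
    print(t, np.max(np.abs(x_t - expm(A * t) @ x0)))  # should be near machine precision
```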

An Exercise

Exercise 1.1. Assume the same model and initial conditions as given in the numerical example above, but with $\theta_1 = 2$, $\theta_2 = 1$, $\theta_3 = 3$, and $\theta_4 = 4$. Find the solution expressed as $B\,e(t)$.

References

[1] G. W. Stewart. Introduction to Matrix Computations. Academic Press, New York, 1973.

[2] J. H. Wilkinson and C. Reinsch, editors. Handbook for Automatic Computation, Volume II: Linear Algebra. Springer-Verlag, New York, 1971.
