In this document, if A : R^n → R^m is an m × n matrix, then ref(A) is a row-equivalent matrix in row-echelon form found using Gaussian elimination with partial pivoting, as described in class.

Inner product and orthogonality

What is the largest possible magnitude of ⟨u₁, u₂⟩? By the Cauchy-Bunyakovsky-Schwarz inequality, |⟨u₁, u₂⟩| ≤ ‖u₁‖ ‖u₂‖.

If u₁ and u₂ are orthogonal, find ‖u₁ + u₂‖. By the properties of the 2-norm and orthogonal vectors,
‖u₁ + u₂‖² = ⟨u₁ + u₂, u₁ + u₂⟩ = ⟨u₁, u₁⟩ + ⟨u₁, u₂⟩ + ⟨u₂, u₁⟩ + ⟨u₂, u₂⟩ = ‖u₁‖² + 0 + 0 + ‖u₂‖²,
so ‖u₁ + u₂‖ = (‖u₁‖² + ‖u₂‖²)^(1/2).

Define an angle between two vectors u₁ and u₂. If ⟨u₁, u₂⟩ = ‖u₁‖ ‖u₂‖ cos(θ), then
cos(θ) = ⟨u₁, u₂⟩ / (‖u₁‖ ‖u₂‖),
so find the inverse cosine of the given ratio. Two vectors are collinear if θ = 0° or 180°, and two vectors are orthogonal if θ = 90° or 270°.

A collection of vectors u₁, ..., uₙ are not mutually orthogonal and not normalized. Apply the Gram-Schmidt algorithm. See the textbook, but in essence:
1. For i from 1 to n do
   a. Set vᵢ ← uᵢ;
   b. Subtract off the projection of vᵢ onto v̂ⱼ for each of the previous i − 1 normalized vectors: for j from 1 to i − 1 do vᵢ ← vᵢ − ⟨vᵢ, v̂ⱼ⟩ v̂ⱼ;
   c. Normalize the i-th vector, assuming that vᵢ ≠ 0: set v̂ᵢ ← vᵢ / ‖vᵢ‖.
If the vectors u₁, ..., uₙ are linearly independent, this will produce n orthonormal vectors.
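
The Gram-Schmidt loop above translates almost line-for-line into NumPy. A minimal sketch, assuming the vectors are stored as the columns of a matrix (the function name and example vectors are illustrative, not from the text):

    import numpy as np

    def gram_schmidt(U):
        """Orthonormalize the columns of U, following steps (a)-(c) above."""
        V = np.array(U, dtype=float)
        for i in range(V.shape[1]):
            v = V[:, i].copy()
            for j in range(i):                      # subtract projections onto the previous v-hats
                v -= np.dot(v, V[:, j]) * V[:, j]
            norm = np.linalg.norm(v)
            if norm == 0.0:                         # only happens if the inputs were dependent
                raise ValueError("vectors are linearly dependent")
            V[:, i] = v / norm
        return V

    Q = gram_schmidt(np.array([[1.0, 1.0], [0.0, 1.0], [1.0, 0.0]]))
    print(np.allclose(Q.T @ Q, np.eye(2)))          # True: the columns are orthonormal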

Linear independence

Given a collection of vectors v₁, ..., vₙ in R^m, determine if they are linearly independent. Create the matrix V = [v₁ ⋯ vₙ] and find the row-equivalent matrix ref(V) in row-echelon form. If the rank of V equals n, the vectors are linearly independent; otherwise, they are linearly dependent.

Corollary: If n > m, the vectors must be linearly dependent, for the maximum rank of V is m.

The span of a set of vectors v₁, ..., vₙ ∈ V includes all linear combinations of these vectors, so all vectors of the form α₁v₁ + ⋯ + αₙvₙ where α₁, ..., αₙ are scalars. Is this a subspace of V? Yes. The sum of two linear combinations of these vectors is still a linear combination of these vectors, and multiplying a linear combination of a set of vectors by a scalar is still a linear combination of these vectors.

Given a collection of vectors v₁, ..., vₙ in R^m, find the dimension of and a basis for the span. Create the matrix V = [v₁ ⋯ vₙ] and find the row-equivalent matrix ref(V) in row-echelon form. The dimension of the span is the rank of V. A basis for the span consists of those columns of V that correspond to columns in ref(V) that have leading non-zero entries.

When is a set of vectors a basis for their span? A set of vectors forms a basis for their span if and only if the vectors are linearly independent.

Given a set of linearly independent vectors u₁, ..., uₙ, suppose we apply Gram-Schmidt. Is the span of the orthonormal set identical to the span of the original set? Yes.
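
In practice the rank test can be done directly in NumPy; a small sketch (the example vectors are made up to show a dependent set):

    import numpy as np

    v1, v2 = np.array([1.0, 0.0, 2.0]), np.array([0.0, 1.0, 1.0])
    v3 = 2.0 * v1 + v2                       # deliberately a linear combination of v1 and v2
    V = np.column_stack([v1, v2, v3])

    rank = np.linalg.matrix_rank(V)
    print(rank == V.shape[1])                # False: rank is 2 < 3, so the set is dependent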

Linear operators

Given A : U → V, what properties must A have for A to be described as linear? A(u₁ + u₂) = Au₁ + Au₂ for all vectors u₁, u₂ ∈ U, and A(αu) = αAu for all vectors u ∈ U and scalars α. That is, if a vector-space operation is performed first in U after which A is applied, the result is the same as if A is applied first and the vector-space operation is then performed in V.

If A : U → V is a linear mapping, what are the domain, the co-domain and the range of A? The domain of A is U and the co-domain of A is V. The range of A is the collection of all images Au of vectors u ∈ U.

If A : U → V is a linear mapping, is the range a subspace of V? If v₁, v₂ ∈ V are in the range of A, there must exist u₁, u₂ ∈ U such that Au₁ = v₁ and Au₂ = v₂. By the linear properties of A, A(αu₁ + βu₂) = αAu₁ + βAu₂ = αv₁ + βv₂, and because U is a vector space, αu₁ + βu₂ ∈ U; thus αv₁ + βv₂ is also in the range of A. Thus, the range is a subspace of V.

If A : U → V is a linear mapping, what is the image of 0_U? As 0·u = 0_U for any vector u in U, A 0_U = A(0·u) = 0·Au = 0_V, so A 0_U = 0_V.

How do we find the matrices associated with each of the row operations? Apply the row operation in question to the identity matrix:

Row operation: adding α times Row i onto Row j (i ≠ j)
  Physical interpretation: shear
  Representation: R_{α;i,j}, the identity with the additional entry r_{j,i} = α
  Inverse: R_{−α;i,j}

Row operation: swapping Rows i and j
  Physical interpretation: reflection
  Representation: R_{i↔j}, the identity with r_{i,i} = r_{j,j} = 0 and r_{i,j} = r_{j,i} = 1
  Inverse: R_{i↔j}

Row operation: multiplying Row i by a non-zero scalar α
  Physical interpretation: scaling
  Representation: R_{α;i}, the identity with r_{i,i} = α
  Inverse: R_{1/α;i}
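
Each elementary matrix above really is just the identity with the row operation applied to it, and left-multiplying by it performs that operation. A hedged NumPy sketch (the function names are mine, not course notation):

    import numpy as np

    def add_multiple(alpha, i, j, n):
        """R_{alpha;i,j}: add alpha times row i onto row j (a shear)."""
        R = np.eye(n)
        R[j, i] = alpha
        return R

    def swap_rows(i, j, n):
        """R_{i<->j}: swap rows i and j (a reflection)."""
        R = np.eye(n)
        R[[i, j]] = R[[j, i]]
        return R

    def scale_row(alpha, i, n):
        """R_{alpha;i}: multiply row i by a non-zero alpha (a scaling)."""
        R = np.eye(n)
        R[i, i] = alpha
        return R

    A = np.arange(16.0).reshape(4, 4)
    B = A.copy()
    B[2] += 2.0 * B[0]                       # the row operation performed directly on A
    print(np.allclose(add_multiple(2.0, 0, 2, 4) @ A, B))                      # True
    print(np.allclose(add_multiple(-2.0, 0, 2, 4) @ add_multiple(2.0, 0, 2, 4),
                      np.eye(4)))                                              # True: inverse as tabulated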

Null space and range of finite-dimensional linear mappings

Given A : R^n → R^m, find the dimension of and a basis for the range. Given A, find the row-equivalent matrix ref(A) in row-echelon form. The dimension of the range is the rank of A. A basis for the range consists of those columns of A that correspond to columns in ref(A) that have leading non-zero entries.

Given A : R^n → R^m, find the dimension of and a basis for the null space. Given A, find the row-equivalent matrix ref(A) in row-echelon form. The number of free variables equals the dimension of the null space, and to find a basis for the null space, solve Au = 0, which is equivalent to solving ref(A)u = 0, which can be solved using backward substitution.

Given A : U → V, if Au = v and Au₀ = 0_V, then what is A(u + u₀)? By the properties of linearity, A(u + u₀) = Au + Au₀ = v + 0_V = v.

Given A : R^n → R^m, argue that the dimension of the null space plus the dimension of the range always equals n. Given A, find the row-equivalent matrix ref(A) that is in echelon form. Every column in ref(A) that has a leading non-zero entry adds one dimension to the range, and every column in ref(A) that does not have a leading non-zero entry adds another free variable, and thus adds one dimension to the null space.
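
A quick numerical check of the rank-nullity statement (the example matrix is illustrative; SciPy's null_space returns an orthonormal basis of the null space):

    import numpy as np
    from scipy.linalg import null_space

    A = np.array([[1.0, 2.0, 3.0],
                  [2.0, 4.0, 6.0],
                  [1.0, 0.0, 1.0]])

    rank = np.linalg.matrix_rank(A)
    N = null_space(A)                          # columns form a basis for the null space
    print(rank, N.shape[1])                    # 2 1
    print(rank + N.shape[1] == A.shape[1])     # True: dim(range) + dim(null space) = n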

One-to-one and onto

When is a linear mapping one-to-one? When every vector in the range has a unique pre-image.

When is a linear mapping onto? When every vector in the co-domain has at least one pre-image.

When is a linear mapping one-to-one and onto? When every vector in the co-domain has a unique pre-image.

Given A : R^n → R^m, what are tests for whether A is one-to-one or onto? If ref(A) has no free variables, A is one-to-one. If there is one or more free variables, A is not one-to-one; it is many-to-one. If rank(A) = m, the mapping is onto. If rank(A) < m, the image of U is a subspace of V not equal to V.

Given A : R^n → R^m, are there cases when A cannot be one-to-one or onto? If n > m, A can never be one-to-one, but it may be onto if rank(A) = m. If n = m, A is either both one-to-one and onto, or neither; it cannot be one but not the other. If n < m, A can never be onto, but it may be one-to-one if rank(A) = n.
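
These rank tests are easy to express in NumPy; a small sketch with a made-up 3 × 2 example (so n < m):

    import numpy as np

    def is_one_to_one(A):
        return np.linalg.matrix_rank(A) == A.shape[1]   # no free variables

    def is_onto(A):
        return np.linalg.matrix_rank(A) == A.shape[0]   # rank equals the co-domain dimension

    A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    print(is_one_to_one(A), is_onto(A))                 # True False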

Matrices

What are the diagonal entries of an m × n matrix? The diagonal entries of a matrix A are all entries a_{i,i} where i = 1, ..., min{m, n}.

When is a matrix upper triangular? When is it lower triangular? When is it diagonal? A matrix is upper triangular if all entries below the diagonal are zero. A matrix is lower triangular if all entries to the right of the diagonal are zero. A matrix is diagonal if all the entries off of the diagonal are zero. Diagonal matrices are the only matrices that are simultaneously both lower and upper triangular.

If A is a permutation matrix, describe its properties. A matrix is a permutation matrix if and only if every row has exactly one 1 and each column has exactly one 1 and all other entries are 0.

If A is a permutation matrix, what is the result of Au for a vector u? If a_{i,j} = 1, this moves the j-th entry of u to the i-th entry.

If A is a permutation matrix, what is its inverse? The inverse of a permutation matrix is its transpose.
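
A short check of these permutation-matrix facts (the particular permutation chosen is arbitrary):

    import numpy as np

    P = np.eye(3)[[2, 0, 1]]                 # rows of the identity, reordered
    u = np.array([10.0, 20.0, 30.0])

    print(P @ u)                             # [30. 10. 20.]: the entries of u are permuted
    print(np.allclose(P.T @ P, np.eye(3)))   # True: the inverse of P is its transpose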

Given A : R^m → R^m and its PLU decomposition A = PLU, how do you solve Au = v for a given target vector v ∈ R^m?

A = PLU, so we are solving PLUu = v. Multiply both sides by the inverse (transpose) of P to get PᵀPLUu = Id LUu = LUu = Pᵀv. Now (LU)u = L(Uu), so this is equivalent to solving L(Uu) = Pᵀv. As u is unknown, so is Uu, so let us represent the unknown Uu by y; that is, y = Uu. Thus, we have the system of linear equations represented by Ly = Pᵀv. The augmented matrix of this system of linear equations is (L | Pᵀv), and as L is lower triangular, we may use forward substitution to solve for y. Now that we have y, we are solving the system of linear equations represented by Uu = y, so the augmented matrix of this system of linear equations is (U | y). As U is upper triangular, we may use backward substitution to find u.
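
SciPy exposes exactly these pieces; a sketch of the forward/backward substitution recipe above (the example system is made up):

    import numpy as np
    from scipy.linalg import lu, solve_triangular

    A = np.array([[2.0, 1.0, 1.0],
                  [4.0, 3.0, 3.0],
                  [8.0, 7.0, 9.0]])
    v = np.array([4.0, 10.0, 24.0])

    P, L, U = lu(A)                                   # A = P L U, with P a permutation matrix
    y = solve_triangular(L, P.T @ v, lower=True)      # forward substitution:  L y = P^T v
    u = solve_triangular(U, y, lower=False)           # backward substitution: U u = y
    print(np.allclose(A @ u, v))                      # True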

The determinant, the trace and the inverse

Given A, B : R^n → R^n, argue that det(BA) = det(B) det(A). Given a region R with a finite and non-zero volume vol(R), A(R) is the region comprised of the images of each vector in R, and by definition, vol(A(R)) = |det(A)| vol(R). Next, if B(A(R)) is the region comprised of the images of each vector in A(R), then vol(B(A(R))) = |det(B)| vol(A(R)) = |det(B)| |det(A)| vol(R). But B(A(R)) = (BA)(R), and therefore det(BA) = det(B) det(A).

Given A : R^n → R^n where A is either upper triangular, lower triangular or diagonal, find the determinant of A. Multiply the diagonal entries of A.

Given A : R^n → R^n, find the determinant of A. If n = 2 or 3, we may use the short-cuts we learned in class. Otherwise, for n > 3, given A, find the PLU decomposition of A. Record the number p of row swaps that were required to produce P and multiply the determinant of U by (−1)^p.

Given A : R^n → R^n, find the trace of A, denoted tr(A). The trace of A is the sum of the diagonal entries of A.

Given A : R^n → R^n, approximate det(Id + hA). For sufficiently small h, det(Id + hA) ≈ 1 + h tr(A).
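
Both the PLU route to the determinant and the trace approximation are easy to sanity-check numerically (random example matrix; det(P) is used as (−1)^p, the parity of the row swaps):

    import numpy as np
    from scipy.linalg import lu

    A = np.random.default_rng(0).normal(size=(5, 5))

    P, L, U = lu(A)                          # L has a unit diagonal, so det(L) = 1
    sign = np.linalg.det(P)                  # +1 or -1, i.e. (-1)^p
    print(np.allclose(sign * np.prod(np.diag(U)), np.linalg.det(A)))   # True

    h = 1e-6
    print(np.linalg.det(np.eye(5) + h * A))  # these two agree to about h^2 ...
    print(1 + h * np.trace(A))               # ... as the approximation predicts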

Find and approximate the determinants of
A = [[1.2, 0.1], [−0.2, 0.9]]  and  B = [[1.2, 0.1, 0.3], [−0.2, 0.9, 0.2], [0.2, 0.1, 1.1]].

det(A) = 1.2·0.9 − 0.1·(−0.2) = 1.1; and A = [[1, 0], [0, 1]] + 0.1·[[2, 1], [−2, −1]], so det(A) ≈ 1 + 0.1·1 = 1.1.

det(B) = 1.13; and B = [[1, 0, 0], [0, 1, 0], [0, 0, 1]] + 0.1·[[2, 1, 3], [−2, −1, 2], [2, 1, 1]], so det(B) ≈ 1 + 0.1·2 = 1.2.

Show that Id⁻¹ = Id. Id·Id = Id, so Id⁻¹ = Id.

Show that if A is invertible, then (A⁻¹)⁻¹ = A. By the properties of operator composition and inverses, A⁻¹A = AA⁻¹ = Id, so A is an inverse of A⁻¹, and therefore (A⁻¹)⁻¹ = A.

Show that if A and B are invertible, then (BA)⁻¹ = A⁻¹B⁻¹. Using the properties of the inverse and matrix composition, (A⁻¹B⁻¹)(BA) = A⁻¹(B⁻¹B)A = A⁻¹ Id A = A⁻¹A = Id, and therefore (BA)⁻¹ = A⁻¹B⁻¹.

Show that if A is invertible and α ≠ 0, then (αA)⁻¹ = α⁻¹A⁻¹. (α⁻¹A⁻¹)(αA) = α⁻¹α A⁻¹A = 1·Id = Id, and therefore (αA)⁻¹ = α⁻¹A⁻¹.
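
Checking the two worked determinants and one of the inverse identities in NumPy (A and B are as reconstructed above; the extra matrix used to test (BA)⁻¹ = A⁻¹B⁻¹ is arbitrary):

    import numpy as np

    A = np.array([[1.2, 0.1], [-0.2, 0.9]])
    B3 = np.array([[1.2, 0.1, 0.3], [-0.2, 0.9, 0.2], [0.2, 0.1, 1.1]])

    print(np.linalg.det(A))                  # 1.1  (approximation: 1 + 0.1 * 1 = 1.1)
    print(np.linalg.det(B3))                 # 1.13 (approximation: 1 + 0.1 * 2 = 1.2)

    B = np.array([[0.5, 1.0], [0.0, 2.0]])
    print(np.allclose(np.linalg.inv(B @ A),
                      np.linalg.inv(A) @ np.linalg.inv(B)))   # True: (BA)^-1 = A^-1 B^-1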

Adjoints of linear mappings

Recall that the adjoint of a linear mapping A : U → V is that mapping A* : V → U such that ⟨Au, v⟩ = ⟨u, A*v⟩ for all u ∈ U and all v ∈ V. For A : R^n → R^m, the adjoint is the transpose, denoted Aᵀ. For A : C^n → C^m, the adjoint is the conjugate transpose.

Given A : U → V, show that (A*)* = A. By the properties of the adjoint, ⟨(A*)*u, v⟩ = ⟨u, A*v⟩ = ⟨Au, v⟩ for all u and v, and therefore (A*)* = A.

If A₁, A₂ : U → V, show that (A₁ + A₂)* = A₁* + A₂*. By the properties of the adjoint and composition of linear mappings, ⟨u, (A₁ + A₂)*v⟩ = ⟨(A₁ + A₂)u, v⟩ = ⟨A₁u + A₂u, v⟩ = ⟨A₁u, v⟩ + ⟨A₂u, v⟩ = ⟨u, A₁*v⟩ + ⟨u, A₂*v⟩ = ⟨u, (A₁* + A₂*)v⟩. Thus (A₁ + A₂)* = A₁* + A₂*.

Given A : U → V, show that (αA)* = conj(α)A*. By the properties of the adjoint and scalar multiplication, ⟨u, (αA)*v⟩ = ⟨(αA)u, v⟩ = α⟨Au, v⟩ = α⟨u, A*v⟩ = ⟨u, conj(α)A*v⟩. Thus (αA)* = conj(α)A*.
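
The defining identity ⟨Au, v⟩ = ⟨u, A*v⟩ can be verified numerically for a random complex matrix (np.vdot conjugates its first argument, so ⟨x, y⟩ is written as np.vdot(y, x) here):

    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.normal(size=(3, 2)) + 1j * rng.normal(size=(3, 2))   # A : C^2 -> C^3
    u = rng.normal(size=2) + 1j * rng.normal(size=2)
    v = rng.normal(size=3) + 1j * rng.normal(size=3)

    A_adj = A.conj().T                                            # the conjugate transpose
    print(np.allclose(np.vdot(v, A @ u), np.vdot(A_adj @ v, u)))  # True: <Au, v> = <u, A*v>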

Given A : U → V and B : V → W, show that (BA)* = A*B*. By the properties of the adjoint and composition of linear mappings, ⟨u, (BA)*w⟩ = ⟨(BA)u, w⟩ = ⟨B(Au), w⟩ = ⟨Au, B*w⟩ = ⟨u, A*(B*w)⟩ = ⟨u, (A*B*)w⟩. Thus (BA)* = A*B*.

Given A : U → U where A is invertible, show that (A⁻¹)* = (A*)⁻¹. By the properties of the adjoint and composition of linear mappings, ⟨u₁, u₂⟩ = ⟨Id u₁, u₂⟩ = ⟨A⁻¹Au₁, u₂⟩ = ⟨Au₁, (A⁻¹)*u₂⟩ = ⟨u₁, A*(A⁻¹)*u₂⟩, but as this is true for all u₁ and u₂, thus A*(A⁻¹)* = Id, so (A⁻¹)* = (A*)⁻¹.

For A : U → V, a vector v is orthogonal to all vectors in the range of A if and only if v is in the null space of A*. v is orthogonal to all vectors in the range of A ⟺ ⟨Au, v⟩ = 0 for all u ∈ U ⟺ ⟨u, A*v⟩ = 0 for all u ∈ U ⟺ A*v = 0 ⟺ v is in the null space of A*.

Find the linear combination of vectors u₁, ..., uₙ ∈ R^m that best approximates a vector v ∈ R^m. Let U = [u₁ ⋯ uₙ]. If rank(U) = rank(U | v), there is either a unique solution or infinitely many solutions, based on whether ref(U) has zero or more than zero free variables, respectively. Otherwise, if rank(U) ≠ rank(U | v), we must find the least-squares solution by solving U*U c = U*v for the coefficient vector c. There is either a unique solution or infinitely many solutions based on whether ref(U*U) has zero or more than zero free variables, respectively.
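
The normal-equations recipe, and the orthogonality of the residual to the range of U, in a few lines (random example data):

    import numpy as np

    rng = np.random.default_rng(1)
    U = rng.normal(size=(6, 3))                     # columns u_1, ..., u_3
    v = rng.normal(size=6)                          # generally not in the span of the columns

    c = np.linalg.solve(U.T @ U, U.T @ v)           # least-squares coefficients: U^T U c = U^T v
    r = v - U @ c                                   # the residual
    print(np.allclose(U.T @ r, 0))                  # True: r is in the null space of U^T
    print(np.allclose(c, np.linalg.lstsq(U, v, rcond=None)[0]))   # matches NumPy's least squares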

Self-adjoint and skew-adjoint linear operators

A linear operator A : U → U is self-adjoint if A* = A. If A : R^n → R^n, we say that A is symmetric; if A : C^n → C^n, we say that A is conjugate symmetric.

A linear operator A : U → U is skew-adjoint if A* = −A. If A : R^n → R^n, we say that A is skew-symmetric; if A : C^n → C^n, we say that A is conjugate skew-symmetric.

Given a linear operator A : U → U, show that A + A* is self-adjoint. By the properties of the adjoint and operator addition, ⟨(A + A*)u₁, u₂⟩ = ⟨Au₁, u₂⟩ + ⟨A*u₁, u₂⟩ = ⟨u₁, A*u₂⟩ + ⟨u₁, Au₂⟩ = ⟨u₁, (A* + A)u₂⟩ = ⟨u₁, (A + A*)u₂⟩, and therefore (A + A*)* = A + A*, so it is self-adjoint.

Given a linear operator A : U → U, show that A − A* is skew-adjoint. By the properties of the adjoint and operator addition, ⟨(A − A*)u₁, u₂⟩ = ⟨Au₁, u₂⟩ − ⟨A*u₁, u₂⟩ = ⟨u₁, A*u₂⟩ − ⟨u₁, Au₂⟩ = ⟨u₁, (A* − A)u₂⟩ = −⟨u₁, (A − A*)u₂⟩, and therefore (A − A*)* = −(A − A*), so it is skew-adjoint.

Given a linear operator A : U → U, show that AA* + A*A is self-adjoint. By the properties of the adjoint and operator addition, ⟨(AA* + A*A)u₁, u₂⟩ = ⟨AA*u₁, u₂⟩ + ⟨A*Au₁, u₂⟩ = ⟨u₁, AA*u₂⟩ + ⟨u₁, A*Au₂⟩ = ⟨u₁, (AA* + A*A)u₂⟩, and therefore (AA* + A*A)* = AA* + A*A, so it is self-adjoint.

Show that if A is self-adjoint, then αA is self-adjoint if and only if α is real. ⟨u, (αA)*v⟩ = ⟨(αA)u, v⟩ = α⟨Au, v⟩ = α⟨u, A*v⟩ = α⟨u, Av⟩ = ⟨u, conj(α)Av⟩, and (αA)* = αA if and only if conj(α) = α, which is true if and only if α is real.

Show that if A is skew-adjoint, then αA is skew-adjoint if and only if α is real. ⟨u, (αA)*v⟩ = ⟨(αA)u, v⟩ = α⟨Au, v⟩ = α⟨u, A*v⟩ = −α⟨u, Av⟩ = ⟨u, −conj(α)Av⟩, and (αA)* = −(αA) if and only if conj(α) = α, which is true if and only if α is real.

Given A : U → U, show that A is the sum of a self-adjoint and a skew-adjoint linear mapping. Because
A = ½(A + A) = ½(A + A* + A − A*) = ½(A + A*) + ½(A − A*),
and ½(A + A*) is self-adjoint and ½(A − A*) is skew-adjoint.
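
The decomposition into self-adjoint and skew-adjoint parts is one line each in NumPy (random complex example):

    import numpy as np

    rng = np.random.default_rng(3)
    A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

    H = 0.5 * (A + A.conj().T)               # self-adjoint part
    S = 0.5 * (A - A.conj().T)               # skew-adjoint part
    print(np.allclose(H, H.conj().T))        # True: H* = H
    print(np.allclose(S, -S.conj().T))       # True: S* = -S
    print(np.allclose(H + S, A))             # True: A is their sum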

Isometric linear operators

Recall that a linear operator A : U → U (with U = R^n or C^n) is isometric if and only if ‖Au‖ = ‖u‖ for all vectors u ∈ U. If U = R^n, then A is a matrix whose columns form an orthonormal set, and the matrix is called orthogonal. If U = C^n, then A is a matrix whose columns form an orthonormal set, and the matrix is called unitary.

Show that every isometric linear operator is invertible. If A : V → V is isometric, then ‖Av‖ = ‖v‖ for all vectors v, but if A were not invertible, there would exist a non-zero v such that Av = 0. If this were true, ‖Av‖ = ‖0‖ = 0 ≠ ‖v‖, as v was assumed to be non-zero. Thus, the matrix is invertible.

Show that A : U → U is isometric if and only if A⁻¹ = A*. By the properties of isometric linear operators and the adjoint, A is isometric ⟺ ⟨Au, Au⟩ = ⟨u, u⟩ ⟺ ⟨u, A*Au⟩ = ⟨u, u⟩ ⟺ A*A = Id ⟺ A⁻¹ = A*.

If A : U → U is isometric, show that αA is isometric if and only if |α| = 1. Because A is isometric, ‖Au‖ = ‖u‖, and thus, by the properties of the norm, ‖(αA)u‖ = |α| ‖Au‖ = |α| ‖u‖, which equals ‖u‖ if and only if |α| = 1. Corollary: If the field associated with U is the reals, −A is isometric if and only if A is isometric.

If A : C^n → C^n is isometric, show that the rows of A also form an orthonormal set. As AA* = A*A = Id, the second equality says that the columns of A form an orthonormal set, but the first says that the conjugates of the rows of A form an orthonormal set, and if the conjugates of the rows of A form an orthonormal set, then so do the rows themselves.
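
Orthogonal matrices are easy to produce (for example from a QR factorization) and check against these properties; a small sketch:

    import numpy as np

    rng = np.random.default_rng(4)
    Q, _ = np.linalg.qr(rng.normal(size=(4, 4)))    # Q has orthonormal columns

    u = rng.normal(size=4)
    print(np.isclose(np.linalg.norm(Q @ u), np.linalg.norm(u)))   # True: isometric
    print(np.allclose(Q.T @ Q, np.eye(4)))                        # True: Q^-1 = Q^T (= Q* over R)
    print(np.allclose(Q @ Q.T, np.eye(4)))                        # True: the rows are orthonormal too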

Argue that a permutation matrix is isometric. AᵀA = Id if A is interpreted as a real matrix, and if A is interpreted as a complex matrix, as all the entries are real, A*A = Id, so in either case the inverse is the adjoint, in which case it is isometric (orthogonal for real matrices and unitary for complex matrices).

Eigenvalues and eigenvectors

Given A : R^n → R^n, find the dimension of and a basis for the eigenspace corresponding to a given and known eigenvalue λ. Given A and λ, find the row-equivalent matrix ref(λId − A). The number of free variables equals the dimension of the eigenspace, and to find a basis for the eigenspace, solve (λId − A)u = 0, which is equivalent to solving ref(λId − A)u = 0, which can be solved using backward substitution.

Given a matrix, find the eigenvalues. If a matrix A : R^n → R^n is upper triangular, lower triangular or diagonal, the diagonal entries of the matrix are the eigenvalues. If A is none of these, there is an algorithm beyond the scope of this course, the QR algorithm, that will find eigenvalues in a numerically stable manner. For two and three dimensions, det(λId − A) is always a polynomial in λ with a leading term λⁿ, and thus is of degree n. Such a polynomial must have n complex roots, and these roots are the eigenvalues.

Show that if A = A* has an eigenvalue, that eigenvalue is real. Suppose Au = λu for a non-zero vector u. In this case, λ⟨u, u⟩ = ⟨λu, u⟩ = ⟨Au, u⟩ = ⟨u, A*u⟩ = ⟨u, Au⟩ = ⟨u, λu⟩ = conj(λ)⟨u, u⟩, and therefore λ‖u‖² = conj(λ)‖u‖², and as both norms are real and non-zero, so must λ be.

Show that if A is isometric and has an eigenvalue λ, then |λ| = 1. Suppose Au = λu for a non-zero vector u. By the properties of a norm and isometric linear operators, ‖u‖ = ‖Au‖ = ‖λu‖ = |λ| ‖u‖, and therefore |λ| = 1.
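
The eigenspace recipe and the real-eigenvalue claim can both be checked numerically (the matrices are illustrative examples):

    import numpy as np
    from scipy.linalg import null_space

    A = np.array([[3.0, 1.0, 0.0],
                  [0.0, 3.0, 0.0],
                  [0.0, 0.0, 2.0]])
    E = null_space(3.0 * np.eye(3) - A)      # basis for the eigenspace of lambda = 3
    print(E.shape[1])                        # 1: one free variable, a one-dimensional eigenspace

    H = np.array([[2.0, 1j], [-1j, 2.0]])    # H = H*, so its eigenvalues must be real
    print(np.linalg.eigvalsh(H))             # [1. 3.]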

A linear operator A : R^n → R^n is not invertible (that is, it is singular) if and only if 0 is an eigenvalue of A. A is not invertible if and only if it is not one-to-one, if and only if the null space is not just {0}, if and only if there is a non-zero vector u such that Au = 0 = 0·u, if and only if 0 is an eigenvalue of A.

Given A = [[1, −1], [1, 1]], show that A has no real eigenvectors. If Au = λu, then [[1, −1], [1, 1]]u = λu. Therefore, [[1 − λ, −1], [1, 1 − λ]]u = 0. The augmented matrix corresponding to this system of linear equations is
[[1 − λ, −1, 0], [1, 1 − λ, 0]] ~ [[1, 1 − λ, 0], [1 − λ, −1, 0]] ~ [[1, 1 − λ, 0], [0, −1 − (1 − λ)², 0]].
Because (1 − λ)² + 1 ≥ 1, it is always true that the only solution to this is u = 0. Thus, the only vector that is mapped to a scalar multiple of itself under multiplication by A is the zero vector, and thus A has no real eigenvalues and no real eigenvectors.

Given A = [[3, 2], [0, 3]], show that A has only one eigenvector corresponding to the eigenvalue 3. As
3·Id − A = [[0, −2], [0, 0]],
there is one free variable u₁, so the dimension of the eigenspace is 1, and a basis for this eigenspace is found by solving this system, namely −2u₂ = 0, so u₂ = 0, so v = (1, 0) is the single eigenvector corresponding to the eigenvalue 3.
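
NumPy agrees with both examples (the first matrix has complex eigenvalues; the second has a repeated eigenvalue with a one-dimensional eigenspace):

    import numpy as np

    A = np.array([[1.0, -1.0], [1.0, 1.0]])
    print(np.linalg.eigvals(A))              # [1.+1.j 1.-1.j]: no real eigenvalues

    B = np.array([[3.0, 2.0], [0.0, 3.0]])
    w, V = np.linalg.eig(B)
    print(w)                                 # [3. 3.]
    print(V)                                 # both columns are (numerically) multiples of (1, 0)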

A linear operator A : R^n → R^n can under certain circumstances be written as A = VDV⁻¹ where V is a matrix composed of the eigenvectors of A. Describe how A = VDV⁻¹ operates.

If the matrix A has n linearly independent eigenvectors, those eigenvectors form a basis of R^n. Consequently, we may write any vector u as u = Va. To find a, we may either solve this system of linear equations, or we can find the inverse of V to get V⁻¹u = a. The entry a_k is the coefficient of the k-th eigenvector v_k. Multiplying Da multiplies the k-th entry by λ_k, so the k-th entry of DV⁻¹u is λ_k a_k. Multiplying this vector by the matrix V calculates the linear combination of eigenvectors Σ_{k=1}^{n} λ_k a_k v_k, which is the result of multiplying Au.

Under which circumstances can we write A = VDV⁻¹ as A = VDV*? A symmetric matrix A : R^n → R^n has real eigenvalues and orthogonal eigenvectors. We can therefore normalize these eigenvectors and produce an orthogonal matrix V, so A = VDVᵀ. We say that a symmetric matrix is orthogonally diagonalizable, as V is an orthogonal matrix. A normal matrix A : C^n → C^n has complex eigenvalues and orthogonal eigenvectors. We can therefore normalize these eigenvectors and produce a unitary matrix V, so A = VDV*. We say that a normal matrix is unitarily diagonalizable, as V is a unitary matrix.

If we can write A = VDV*, how can we easily calculate A^1000? By the properties of isometric linear operators (square matrices) and operator composition (matrix-matrix multiplication), we have
A^1000 = (VDV*)(VDV*)⋯(VDV*)   [1000 factors]
       = VD(V*V)D(V*V)⋯(V*V)DV*   [999 factors of V*V]
       = VD Id D Id ⋯ Id DV*
       = VD^1000 V*.
As D is a diagonal matrix with entries λ_k, D^1000 is a diagonal matrix with entries λ_k^1000, an operation that can be performed on a computer with a single function call.
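
A sketch of the A^1000 computation for a symmetric matrix (the eigenvalues and eigenvectors of the example are chosen arbitrarily, and kept at most 1 in magnitude so the power does not overflow):

    import numpy as np

    rng = np.random.default_rng(5)
    Q, _ = np.linalg.qr(rng.normal(size=(4, 4)))     # orthonormal eigenvectors
    lam = np.array([1.0, 0.99, -0.9, 0.5])           # chosen eigenvalues
    A = Q @ np.diag(lam) @ Q.T                       # symmetric, so A = V D V^T

    w, V = np.linalg.eigh(A)                         # recover the eigendecomposition
    A1000 = V @ np.diag(w ** 1000) @ V.T             # D^1000 is a single elementwise power
    print(np.allclose(A1000, np.linalg.matrix_power(A, 1000)))   # True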

If A : R^n → R^n is not symmetric but still diagonalizable, why may we have issues when using a similar technique to calculate A^1000 = VD^1000 V⁻¹? Because we are calculating the inverse of V, and without more information, this operation may be numerically unstable.
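
A hint of why: for a nearly defective matrix the eigenvector matrix V is nearly singular, so forming V⁻¹ amplifies rounding error (the example matrix is contrived for this purpose):

    import numpy as np

    eps = 1e-8
    A = np.array([[3.0, 2.0], [0.0, 3.0 + eps]])   # nearly defective: eigenvalues 3 and 3 + eps
    w, V = np.linalg.eig(A)
    print(np.linalg.cond(V))                       # on the order of 1e8: V is very ill-conditioned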