Elements of linear algebra


A vector space S is a set of objects (numbers, vectors, functions) on which addition and scalar multiplication are defined, so that every linear combination c_1 v_1 + c_2 v_2 + ... + c_k v_k is also a member of S. The set of all such linear combinations is the span of v_1, v_2, ..., v_k. If the scalars c are real, S is a real vector space; if c ∈ C, a complex vector space.

A set of elements v_1, v_2, ..., v_k is linearly independent if no element is a linear combination of the others. Informally, this says a linearly dependent set is redundant, in the sense that some proper subset would have exactly the same span.
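As a quick numerical illustration (not part of the notes themselves), linear independence can be tested by stacking the vectors as columns of a matrix and checking for full column rank; the sketch below uses NumPy and made-up example vectors.

```python
import numpy as np

def is_linearly_independent(vectors):
    """Return True if the given vectors are linearly independent.

    Stacks the vectors as columns and compares the matrix rank to the
    number of vectors: full column rank is equivalent to independence.
    """
    M = np.column_stack(vectors)
    return np.linalg.matrix_rank(M) == len(vectors)

v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
v3 = v1 + v2   # redundant: the span of {v1, v2, v3} equals the span of {v1, v2}

independent = is_linearly_independent([v1, v2])
dependent = not is_linearly_independent([v1, v2, v3])
```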

Linear operators in finite dimensions

Suppose A is an n × n matrix and v is an n-dimensional vector. The matrix-vector product y = Av can be regarded as a mapping v ↦ Av. This mapping is a linear transformation, or linear operator. Conversely, every linear mapping from R^n to R^n is represented by a matrix-vector product.

Linearity property: the transformation of a linear combination is the linear combination of the transformations. For the matrix A this means

A(c_1 v_1 + c_2 v_2) = c_1 A v_1 + c_2 A v_2.

More generally: if f_1, f_2 are elements of a vector space S, then a linear operator L is a mapping S → S such that L(c_1 f_1 + c_2 f_2) = c_1 L f_1 + c_2 L f_2.
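The linearity property is easy to verify numerically; the following sketch (with an arbitrary random matrix and vectors, chosen here only for illustration) checks it with NumPy.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))   # an arbitrary 3x3 matrix
v1 = rng.standard_normal(3)
v2 = rng.standard_normal(3)
c1, c2 = 2.0, -1.5

# A(c1 v1 + c2 v2) should equal c1 A v1 + c2 A v2
lhs = A @ (c1 * v1 + c2 * v2)
rhs = c1 * (A @ v1) + c2 * (A @ v2)
linear = np.allclose(lhs, rhs)
```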

Eigenvalues and eigenvectors in finite dimensions

Recall that v ≠ 0, λ is an eigenvector-eigenvalue pair of the n × n matrix A if Av = λv.

Usually there are exactly n linearly independent eigenvectors v_1, v_2, ..., v_n corresponding to eigenvalues λ_1, λ_2, ..., λ_n. This is useful since the eigenvectors then form a basis: x = c_1 v_1 + c_2 v_2 + ... + c_n v_n for any x ∈ R^n.

Applying A to the vector x,

Ax = A(c_1 v_1 + c_2 v_2 + ... + c_n v_n)
   = c_1 A v_1 + c_2 A v_2 + ... + c_n A v_n
   = c_1 λ_1 v_1 + c_2 λ_2 v_2 + ... + c_n λ_n v_n.

Thus eigenvectors decompose the action of a linear operator into a linear combination of simple scalings.
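The decomposition above can be checked numerically. The sketch below (with a made-up symmetric matrix, so that an eigenvector basis certainly exists) computes Ax directly and via the eigenvector expansion.

```python
import numpy as np

# A symmetric matrix, so its eigenvectors form an orthonormal basis of R^3
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
eigvals, eigvecs = np.linalg.eigh(A)   # columns of eigvecs are v_1, ..., v_n

x = np.array([1.0, 2.0, 3.0])
c = eigvecs.T @ x                      # expansion coefficients in the eigenbasis

# Ax two ways: directly, and as sum_k c_k * lambda_k * v_k
direct = A @ x
via_eigen = eigvecs @ (eigvals * c)
match = np.allclose(direct, via_eigen)
```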

Inner products

Motivation: attach geometric ideas (length, angle, orthogonality) to vector spaces.

An inner product is a map S × S → R (or S × S → C), written ⟨v_1, v_2⟩, with the properties:

1. Symmetry: ⟨v, u⟩ = ⟨u, v⟩ for every u, v ∈ S. (Complex version: ⟨v, u⟩ is the complex conjugate of ⟨u, v⟩.)
2. Linearity in the first variable: for any vector v and vectors v_1, v_2, ..., v_n,
   ⟨c_1 v_1 + c_2 v_2 + ... + c_n v_n, v⟩ = c_1 ⟨v_1, v⟩ + c_2 ⟨v_2, v⟩ + ... + c_n ⟨v_n, v⟩.
3. Positivity: ⟨v, v⟩ > 0 unless v = 0.

⟨v, v⟩ = ||v||² is the squared length of v, and ⟨v_1, v_2⟩ = 0 means v_1 and v_2 are orthogonal.
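For the familiar dot product on R^n, the three defining properties can be spot-checked numerically; the sketch below uses arbitrary random vectors (illustrative only, not a proof).

```python
import numpy as np

rng = np.random.default_rng(2)
u, v, w = rng.standard_normal((3, 4))   # three arbitrary vectors in R^4
c1, c2 = 1.5, -0.5

# 1. symmetry: <u, v> = <v, u>
symmetry = np.isclose(np.dot(u, v), np.dot(v, u))

# 2. linearity in the first variable
linearity = np.isclose(np.dot(c1 * u + c2 * w, v),
                       c1 * np.dot(u, v) + c2 * np.dot(w, v))

# 3. positivity: <v, v> > 0 for nonzero v
positivity = np.dot(v, v) > 0
```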

Adjoint operators

For a linear operator L, the adjoint L* with respect to a given inner product is the operator satisfying

⟨L v_1, v_2⟩ = ⟨v_1, L* v_2⟩ for all v_1, v_2.

Example: the dot product ⟨v_1, v_2⟩ = v_1 · v_2 is an inner product. Consider the linear operator defined by a matrix, Lv = Av. By the usual rules of matrix multiplication,

⟨A v_1, v_2⟩ = (A v_1) · v_2 = v_2^T A v_1 = (v_2^T A v_1)^T = v_1^T A^T v_2 = v_1 · (A^T v_2) = ⟨v_1, A^T v_2⟩,

so the transpose A^T represents the adjoint operator.

Caution: there are many different inner products, and the adjoint depends on which one is used.
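The identity ⟨Av_1, v_2⟩ = ⟨v_1, A^T v_2⟩ is easy to confirm numerically; the following sketch uses an arbitrary random matrix and vectors for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
v1 = rng.standard_normal(4)
v2 = rng.standard_normal(4)

# <A v1, v2> versus <v1, A^T v2> under the dot product
lhs = np.dot(A @ v1, v2)
rhs = np.dot(v1, A.T @ v2)
adjoint_ok = np.isclose(lhs, rhs)
```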

Self-adjointness

An operator that is its own adjoint is called self-adjoint. Consider a linear operator represented by a matrix A (with the dot product, self-adjointness means A = A^T). If it is self-adjoint, then

1. the eigenvalues of A are real, and
2. eigenvectors of A corresponding to distinct eigenvalues are orthogonal to one another.

Similar statements can be made for eigenvalue problems involving differential operators.
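Both facts can be observed numerically for a concrete symmetric matrix (chosen here purely as an example): the eigenvalues come out real, and the eigenvectors can be taken orthonormal.

```python
import numpy as np

# A symmetric matrix: self-adjoint with respect to the dot product
A = np.array([[4.0, 1.0, 2.0],
              [1.0, 5.0, 0.0],
              [2.0, 0.0, 3.0]])

w, _ = np.linalg.eig(A)              # generic routine: check the eigenvalues are real
eigenvalues_real = np.max(np.abs(np.imag(w))) < 1e-10

_, V = np.linalg.eigh(A)             # symmetric routine: orthonormal eigenvectors
orthonormal = np.allclose(V.T @ V, np.eye(3))
```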

Orthogonal expansions

Let v_1, v_2, ..., v_n be eigenvectors of a self-adjoint matrix A. The linear combination x = c_1 v_1 + c_2 v_2 + ... + c_n v_n is called an orthogonal expansion.

Important question: given x, how do we find c_1, c_2, ..., c_n?

Take the inner product with any particular eigenvector v_k:

⟨x, v_k⟩ = ⟨c_1 v_1 + c_2 v_2 + ... + c_n v_n, v_k⟩
         = c_1 ⟨v_1, v_k⟩ + c_2 ⟨v_2, v_k⟩ + ... + c_n ⟨v_n, v_k⟩
         = c_k ⟨v_k, v_k⟩,

since all the other inner products vanish by orthogonality. Thus

c_k = ⟨x, v_k⟩ / ⟨v_k, v_k⟩.

Remark: if the eigenvectors are normalized, then ⟨v_k, v_k⟩ = 1 and c_k = ⟨x, v_k⟩.
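The coefficient formula can be exercised directly: the sketch below (with a small example matrix and vector) computes the c_k and verifies that the expansion reconstructs x.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])          # symmetric, so eigenvectors are orthogonal
_, V = np.linalg.eigh(A)            # columns v_1, v_2 of V are orthonormal

x = np.array([3.0, -1.0])

# c_k = <x, v_k> / <v_k, v_k>  (the denominator is 1 here, but kept for clarity)
c = np.array([np.dot(x, V[:, k]) / np.dot(V[:, k], V[:, k]) for k in range(2)])

# reconstruct x from its orthogonal expansion
x_reconstructed = sum(c[k] * V[:, k] for k in range(2))
recovered = np.allclose(x, x_reconstructed)
```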

Differential linear operators

A differential operator is a linear operator composed of derivatives, acting on a vector space of functions.

Example 1: L = d²/dx² maps f(x) to f''(x), and satisfies linearity:

d²/dx² (c_1 f_1(x) + c_2 f_2(x)) = c_1 d²f_1/dx² + c_2 d²f_2/dx² = c_1 L f_1 + c_2 L f_2.

Example 2: Lf = (g(x) f'(x))' is linear, since

L(c_1 f_1 + c_2 f_2) = (g(x)[c_1 f_1'(x) + c_2 f_2'(x)])' = c_1 (g(x) f_1'(x))' + c_2 (g(x) f_2'(x))'.

Technical point: the set of functions that constitutes the vector space matters. For example, adjoints, eigenvalues, etc. depend on the boundary conditions that define which space is being used.

Some function space notation: C^k(Ω) is the space of k-times continuously differentiable functions on the domain Ω; C_0^∞(Ω) is the space of infinitely differentiable functions whose values on the boundary of Ω are zero.
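A standard way to make these ideas concrete on a computer (a sketch, not part of the notes) is to discretize d²/dx² by centered finite differences with zero Dirichlet boundary values; the resulting matrix is a linear operator, and its symmetry mirrors the self-adjointness of d²/dx² on that function space.

```python
import numpy as np

def second_derivative_matrix(n, h):
    """Centered-difference approximation of d^2/dx^2 on n interior grid
    points with spacing h, assuming zero (Dirichlet) boundary values."""
    L = np.zeros((n, n))
    for i in range(n):
        L[i, i] = -2.0 / h**2
        if i > 0:
            L[i, i - 1] = 1.0 / h**2
        if i < n - 1:
            L[i, i + 1] = 1.0 / h**2
    return L

n = 50
h = 1.0 / (n + 1)
L = second_derivative_matrix(n, h)

# the discrete operator is symmetric, mirroring self-adjointness of d^2/dx^2
symmetric = np.allclose(L, L.T)

# linearity: L(c1 f1 + c2 f2) = c1 L f1 + c2 L f2 for sampled functions
x = np.linspace(h, 1.0 - h, n)
f1, f2 = np.sin(np.pi * x), x * (1.0 - x)
lin = np.allclose(L @ (2.0 * f1 - 3.0 * f2), 2.0 * (L @ f1) - 3.0 * (L @ f2))
```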

The superposition principle

Every linear differential equation for an unknown f has the form

Lf = 0 (homogeneous equation), or
Lf = g (inhomogeneous equation).

Example: the heat equation u_t = D Δu can be written (∂/∂t − D Δ)u = 0.

Superposition principle for homogeneous equations: if f_1, f_2, ... solve Lf = 0, then so does any linear combination, since L(c_1 f_1 + c_2 f_2 + ...) = c_1 L f_1 + c_2 L f_2 + ... = 0.

Superposition principle for inhomogeneous equations: if f_p is a particular solution, L f_p = g(x), then for any other solution f the difference h = f − f_p solves the homogeneous equation Lh = 0.

Strategy for solving inhomogeneous equations: (1) find a particular solution f_p; (2) find the general solution of Lh = 0; (3) the total solution is f = f_p + h.
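The same structure appears in finite dimensions, where Lf = g becomes a matrix equation. The sketch below (with a made-up singular matrix, so the homogeneous problem has nonzero solutions) checks that particular plus homogeneous is again a solution.

```python
import numpy as np

# Discrete analogue of Lf = g: a matrix equation B f = g with B singular,
# so the homogeneous problem B h = 0 has nonzero solutions.
B = np.array([[1.0, 1.0],
              [2.0, 2.0]])        # rank 1
g = np.array([3.0, 6.0])          # a consistent right-hand side

f_p = np.array([3.0, 0.0])        # a particular solution: B @ f_p = g
h = np.array([1.0, -1.0])         # a homogeneous solution: B @ h = 0

# any f_p + c*h again solves the inhomogeneous equation
total_ok = all(np.allclose(B @ (f_p + c * h), g) for c in (-2.0, 0.0, 5.0))
```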

The superposition principle and boundary conditions

Even for linear, homogeneous equations, not every linear combination of solutions satisfies the boundary or initial conditions!

A side condition is linear and homogeneous if it can be written Bf = 0, where B is a linear operator.

For Dirichlet boundary conditions, B is the restriction operator to the boundary.

For Neumann boundary conditions, B takes a normal derivative before restriction.

Extended superposition principle: if f_1, f_2, ... satisfy linear, homogeneous boundary conditions, then so does any linear combination c_1 f_1 + c_2 f_2 + ...
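As a small illustration of the extended principle (an example of my own choosing, not from the notes): sin(kπx) satisfies homogeneous Dirichlet conditions on [0, 1], and so does any linear combination.

```python
import numpy as np

# Two functions vanishing at both endpoints of [0, 1] (homogeneous Dirichlet),
# sampled on a grid; a linear combination also vanishes at the endpoints.
x = np.linspace(0.0, 1.0, 101)
f1 = np.sin(np.pi * x)
f2 = np.sin(2.0 * np.pi * x)

combo = 3.0 * f1 - 2.0 * f2
bc_ok = np.isclose(combo[0], 0.0) and np.isclose(combo[-1], 0.0)
```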