Linear Algebra Massoud Malek

Inner Product and Normed Space

In all that follows, the $n \times n$ identity matrix is denoted by $I_n$, the $n \times n$ zero matrix by $Z_n$, and the zero vector by $\theta_n$.

An inner product is a generalization of the dot product. In a vector space, it is a way to multiply vectors together, with the result of this multiplication being a scalar. More precisely, for a real vector space, an inner product $\langle \cdot\,, \cdot \rangle$ satisfies the following four properties. Let $u$, $v$, and $w$ be vectors and $\alpha$ be a scalar; then:

Linearity: $\langle u + v, w \rangle = \langle u, w \rangle + \langle v, w \rangle$ and $\langle \alpha v, w \rangle = \alpha \langle v, w \rangle$.

Symmetry: $\langle v, w \rangle = \langle w, v \rangle$.

Positive-definiteness: $\langle v, v \rangle \ge 0$, with equality if and only if $v = \theta$.

A vector space together with an inner product on it is called an inner product space. When given a complex vector space, the symmetry property above is usually replaced by

Conjugate symmetry: $\langle v, w \rangle = \overline{\langle w, v \rangle}$, where the bar denotes complex conjugation.

With this property, the inner product is called a Hermitian inner product, and a complex vector space with a Hermitian inner product is called a Hermitian inner product space.

An inner product space is thus a vector space with the additional inner product structure. An inner product naturally induces an associated norm,
$$\|u\| = \sqrt{\langle u, u \rangle},$$
so an inner product space is also a normed vector space.

Examples of Inner Product Spaces

(a) The Euclidean inner product on $\mathbb{F}^n$, where $\mathbb{F}$ is the field of real or complex numbers, is defined by
$$\langle (u_1, u_2, \ldots, u_n), (v_1, v_2, \ldots, v_n) \rangle = u_1 \overline{v_1} + u_2 \overline{v_2} + \cdots + u_n \overline{v_n},$$
where the conjugation bars may be dropped when $\mathbb{F} = \mathbb{R}$.

(b) An inner product can be defined on the vector space of smooth real-valued functions on the interval $[-1, 1]$ by
$$\langle f(x), g(x) \rangle = \int_{-1}^{1} f(x)\, g(x)\, dx.$$
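These axioms are easy to check numerically. The following minimal MATLAB sketch, with arbitrarily chosen test vectors, verifies linearity, symmetry, and positive-definiteness for the Euclidean inner product of example (a), together with the induced norm:

```matlab
% Numerical check of the inner product axioms for the Euclidean inner
% product on R^n; u, v, w and the scalar a are arbitrary test choices.
u = [1; 2; -1];  v = [3; 0; 4];  w = [-2; 1; 5];  a = 7;

ip = @(x, y) y' * x;              % Euclidean inner product <x, y>

% Linearity in the first argument
disp(ip(u + v, w) - (ip(u, w) + ip(v, w)))   % 0 up to round-off
disp(ip(a * v, w) - a * ip(v, w))            % 0 up to round-off

% Symmetry and positive-definiteness
disp(ip(v, w) - ip(w, v))                    % 0
disp(ip(v, v) >= 0)                          % logical 1

% The induced norm ||u|| = sqrt(<u, u>) agrees with MATLAB's norm(u)
disp(sqrt(ip(u, u)) - norm(u))               % 0 up to round-off
```

The anonymous function ip uses the conjugate transpose, so the same sketch also covers the Hermitian case for complex vectors.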

(c) An inner product can be defined on the set of real polynomials by
$$\langle p(x), q(x) \rangle = \int_{0}^{\infty} p(x)\, q(x)\, e^{-x}\, dx.$$

Examples of Vector Norms

Here are the three most commonly used vector norms:

(a) 1-norm: $\|v\|_1 = \sum_i |v_i|$;

(b) 2-norm or Euclidean norm: $\|v\|_2 = \sqrt{\sum_i |v_i|^2}$;

(c) $\infty$-norm: $\|v\|_\infty = \max_i |v_i|$.

Also note that if $\|\cdot\|$ is a norm and $M$ is any nonsingular square matrix, then $v \mapsto \|M v\|$ is also a norm. The case where $M$ is diagonal is particularly common in practice.

Note. Any Hermitian positive definite matrix $H$ induces an inner product $\langle u, v \rangle_H = u^* H v$, and thus a norm $\|u\|_H = \sqrt{\langle u, u \rangle_H}$.

In MATLAB, the 1-norm, 2-norm, and $\infty$-norm are invoked by the statements norm(u, 1), norm(u, 2), and norm(u, inf), respectively. The 2-norm is the default: the statement norm(u) is interpreted as the 2-norm of u.

Theorem. If $\|\cdot\|$ is a norm induced by an inner product, then

1. $\|u\| = 0$ if and only if $u = \theta$; otherwise $\|u\| > 0$;
2. $\|\lambda u\| = |\lambda|\, \|u\|$;
3. (Cauchy-Schwarz inequality) $|\langle u, v \rangle| \le \|u\|\, \|v\|$;
4. (Triangle inequality) $\|u + v\| \le \|u\| + \|v\|$.

Orthogonality

In an inner product space $V$, vectors $u$ and $v$ are orthogonal if $\langle u, v \rangle = 0$. A subset $S$ of $V$ is orthogonal if any two distinct vectors of $S$ are orthogonal. A vector $u$ is a unit vector if $\|u\| = 1$. Finally, a subset $S$ of $V$ is orthonormal if $S$ is orthogonal and consists entirely of unit vectors. If $v \ne \theta$, then the vector $u = \dfrac{v}{\|v\|}$ is a unit vector; we say that $v$ has been normalized.

Note. The zero vector $\theta$ is orthogonal to every vector in $V$, and $\theta$ is the only vector in $V$ that is orthogonal to itself.

Pythagorean Theorem. Suppose $u$ and $v$ are orthogonal vectors in $V$. Then
$$\|u + v\|^2 = \|u\|^2 + \|v\|^2.$$

Proof. Since $\langle u, v \rangle = \langle v, u \rangle = 0$, we have
$$\|u + v\|^2 = \langle u + v, u + v \rangle = \langle u, u \rangle + \langle u, v \rangle + \langle v, u \rangle + \langle v, v \rangle = \|u\|^2 + \|v\|^2,$$
as desired.
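A short MATLAB sketch, with an arbitrary test vector, compares each of the three vector norms computed from its definition with the built-in norm statements quoted above, and then checks the Pythagorean theorem on a pair of orthogonal vectors:

```matlab
% The three vector norms from their definitions vs. MATLAB's norm.
v = [3; -4; 12];                           % arbitrary test vector

disp([sum(abs(v)),          norm(v, 1)])   % 1-norm
disp([sqrt(sum(abs(v).^2)), norm(v, 2)])   % 2-norm (also norm(v))
disp([max(abs(v)),          norm(v, inf)]) % infinity-norm

% Pythagorean theorem: <u, w> = 0, so ||u + w||^2 = ||u||^2 + ||w||^2
u = [1; 1; 0];  w = [1; -1; 0];            % orthogonal pair
disp(norm(u + w)^2 - (norm(u)^2 + norm(w)^2))   % 0 up to round-off
```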

The Cauchy-Schwarz inequality states that for all vectors $u$ and $v$ of an inner product space,
$$|\langle u, v \rangle|^2 \le \langle u, u \rangle \cdot \langle v, v \rangle,$$
where $\langle \cdot\,, \cdot \rangle$ is the inner product. Equivalently, by taking the square root of both sides and referring to the norms of the vectors, the inequality is written as
$$|\langle u, v \rangle| \le \|u\|\, \|v\|.$$
Moreover, the two sides are equal if and only if $u$ and $v$ are linearly dependent.

Proof. Let $u, v$ be arbitrary vectors in a vector space $V$ over $\mathbb{F}$ with an inner product, where $\mathbb{F}$ is the field of real or complex numbers. If $\langle u, v \rangle = 0$, the theorem holds trivially. If not, then $u \ne \theta$ and $v \ne \theta$. For any $\lambda \in \mathbb{F}$, we have
$$0 \le \|u - \lambda v\|^2 = \langle u - \lambda v, u - \lambda v \rangle = \langle u, u \rangle - \bar{\lambda} \langle u, v \rangle - \lambda \langle v, u \rangle + \lambda \bar{\lambda} \langle v, v \rangle.$$
Choose $\lambda = \dfrac{\langle u, v \rangle}{\langle v, v \rangle}$; then
$$0 \le \langle u, u \rangle - \bar{\lambda} \langle u, v \rangle - \lambda \langle v, u \rangle + \lambda \bar{\lambda} \langle v, v \rangle = \langle u, u \rangle - \frac{|\langle u, v \rangle|^2}{\langle v, v \rangle} = \|u\|^2 - \frac{|\langle u, v \rangle|^2}{\|v\|^2}.$$
It follows that $|\langle u, v \rangle|^2 \le \langle u, u \rangle \langle v, v \rangle$, or $|\langle u, v \rangle| \le \|u\|\, \|v\|$.

Examples of the Cauchy-Schwarz Inequality

(a) If $u_1, u_2, \ldots, u_n, v_1, v_2, \ldots, v_n$ are real numbers, then
$$(u_1 v_1 + u_2 v_2 + \cdots + u_n v_n)^2 \le (u_1^2 + u_2^2 + \cdots + u_n^2)(v_1^2 + v_2^2 + \cdots + v_n^2).$$

(b) If $f(x)$ and $g(x)$ are smooth real-valued functions, then
$$\left( \int_{-1}^{1} f(x)\, g(x)\, dx \right)^2 \le \left( \int_{-1}^{1} f(x)^2\, dx \right) \left( \int_{-1}^{1} g(x)^2\, dx \right).$$

Triangle Inequality. For any vectors $u$ and $v$,
$$\|u + v\| \le \|u\| + \|v\|.$$

Proof. We have
$$\|u + v\|^2 = \langle u + v, u + v \rangle = \langle u, u \rangle + \langle u, v \rangle + \langle v, u \rangle + \langle v, v \rangle = \|u\|^2 + 2\, \mathrm{Re}\, \langle u, v \rangle + \|v\|^2$$
$$\le \|u\|^2 + 2\, |\langle u, v \rangle| + \|v\|^2 \le \|u\|^2 + 2\, \|u\|\, \|v\| + \|v\|^2 = (\|u\| + \|v\|)^2.$$
Taking square roots gives the result.
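Both inequalities, and the equality case of Cauchy-Schwarz, can be illustrated numerically. A minimal MATLAB sketch with randomly chosen vectors (the seed is fixed only so the run is reproducible):

```matlab
% Numerical illustration of the Cauchy-Schwarz and triangle inequalities.
rng(1);                                  % reproducible random vectors
u = randn(5, 1);  v = randn(5, 1);

disp(abs(u' * v) <= norm(u) * norm(v))   % Cauchy-Schwarz, logical 1
disp(norm(u + v) <= norm(u) + norm(v))   % triangle inequality, logical 1

% Equality in Cauchy-Schwarz holds exactly when u, w are linearly dependent
w = -3 * u;
disp(abs(u' * w) - norm(u) * norm(w))    % 0 up to round-off
```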

Gram-Schmidt Process

It is often simpler to work with an orthogonal basis. The Gram-Schmidt process is a method for obtaining orthonormal vectors from a linearly independent set of vectors in an inner product space, most commonly the Euclidean space $\mathbb{R}^n$. It takes a finite, linearly independent set $S = \{v_1, v_2, v_3, \ldots, v_k\}$ and generates an orthogonal set $S_G = \{u_1, u_2, u_3, \ldots, u_k\}$ and an orthonormal basis $S_N = \{e_1, e_2, e_3, \ldots, e_k\}$ that spans the same subspace of $\mathbb{R}^n$ as the set $S$.

We define the projection operator by
$$\mathrm{proj}_u(v) = \frac{\langle v, u \rangle}{\langle u, u \rangle}\, u,$$
where $\langle v, u \rangle$ denotes the inner product of the vectors $v$ and $u$. This operator projects the vector $v$ orthogonally onto the line spanned by the vector $u$. The Gram-Schmidt process then works as follows:

$$u_1 = v_1, \qquad e_1 = \frac{u_1}{\|u_1\|}$$
$$u_2 = v_2 - \mathrm{proj}_{u_1}(v_2), \qquad e_2 = \frac{u_2}{\|u_2\|}$$
$$u_3 = v_3 - \mathrm{proj}_{u_1}(v_3) - \mathrm{proj}_{u_2}(v_3), \qquad e_3 = \frac{u_3}{\|u_3\|}$$
$$u_4 = v_4 - \mathrm{proj}_{u_1}(v_4) - \mathrm{proj}_{u_2}(v_4) - \mathrm{proj}_{u_3}(v_4), \qquad e_4 = \frac{u_4}{\|u_4\|}$$
$$\vdots$$
$$u_k = v_k - \sum_{j=1}^{k-1} \mathrm{proj}_{u_j}(v_k), \qquad e_k = \frac{u_k}{\|u_k\|}$$

The sequence $u_1, u_2, \ldots, u_k$ is the required system of orthogonal vectors, and the normalized vectors $e_1, e_2, \ldots, e_k$ form an orthonormal set. The calculation of the sequence $u_1, u_2, \ldots, u_k$ is known as Gram-Schmidt orthogonalization, while the calculation of the sequence $e_1, e_2, \ldots, e_k$ is known as Gram-Schmidt orthonormalization, as the vectors are normalized.

Example. Consider the basis
$$B = \left\{ v_1 = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix},\ v_2 = \begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix},\ v_3 = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} \right\}.$$
Use Gram-Schmidt to find the orthogonal basis $B_G = \{u_1, u_2, u_3\}$ and the orthonormal basis $B_N = \{e_1, e_2, e_3\}$ associated with $B$.

$$u_1 = v_1 = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}; \qquad e_1 = \frac{u_1}{\|u_1\|} = \frac{1}{\sqrt{3}} \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}$$

$$u_2 = v_2 - \frac{\langle v_2, u_1 \rangle}{\langle u_1, u_1 \rangle}\, u_1 = \begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix} - \frac{2}{3} \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} = \frac{1}{3} \begin{pmatrix} -2 \\ 1 \\ 1 \end{pmatrix}; \qquad e_2 = \frac{u_2}{\|u_2\|} = \frac{1}{\sqrt{6}} \begin{pmatrix} -2 \\ 1 \\ 1 \end{pmatrix}$$

$$u_3 = v_3 - \frac{\langle v_3, u_1 \rangle}{\langle u_1, u_1 \rangle}\, u_1 - \frac{\langle v_3, u_2 \rangle}{\langle u_2, u_2 \rangle}\, u_2 = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} - \frac{1}{3} \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} - \frac{1}{2} \cdot \frac{1}{3} \begin{pmatrix} -2 \\ 1 \\ 1 \end{pmatrix} = \frac{1}{2} \begin{pmatrix} 0 \\ -1 \\ 1 \end{pmatrix}; \qquad e_3 = \frac{u_3}{\|u_3\|} = \frac{1}{\sqrt{2}} \begin{pmatrix} 0 \\ -1 \\ 1 \end{pmatrix}$$

Hence
$$B_G = \left\{ \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix},\ \frac{1}{3} \begin{pmatrix} -2 \\ 1 \\ 1 \end{pmatrix},\ \frac{1}{2} \begin{pmatrix} 0 \\ -1 \\ 1 \end{pmatrix} \right\} \quad \text{and} \quad B_N = \left\{ \frac{1}{\sqrt{3}} \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix},\ \frac{1}{\sqrt{6}} \begin{pmatrix} -2 \\ 1 \\ 1 \end{pmatrix},\ \frac{1}{\sqrt{2}} \begin{pmatrix} 0 \\ -1 \\ 1 \end{pmatrix} \right\}.$$
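The recursion above translates directly into code. Below is a short MATLAB sketch of classical Gram-Schmidt (the variable names are ours), applied to the basis of the worked example; the columns of E should reproduce $B_N$ up to round-off:

```matlab
% Classical Gram-Schmidt: the columns of V are v_1,...,v_k; the columns
% of U and E are the orthogonal and orthonormal vectors, respectively.
V = [1 0 0;
     1 1 0;
     1 1 1];                      % columns: v1=(1,1,1)', v2=(0,1,1)', v3=(0,0,1)'
[n, k] = size(V);
U = zeros(n, k);  E = zeros(n, k);
for i = 1:k
    U(:, i) = V(:, i);
    for j = 1:i-1                 % subtract proj_{u_j}(v_i) for each j < i
        U(:, i) = U(:, i) - (V(:, i)' * U(:, j)) / (U(:, j)' * U(:, j)) * U(:, j);
    end
    E(:, i) = U(:, i) / norm(U(:, i));   % normalize
end
disp(U)                           % orthogonal basis B_G
disp(E)                           % orthonormal basis B_N
disp(E' * E)                      % identity matrix up to round-off
```

The final check E' * E confirms orthonormality, since the $(i,j)$ entry of $E^\top E$ is $\langle e_j, e_i \rangle$.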

Matrix Norms

A matrix norm is an extension of the notion of a vector norm to matrices: it is a vector norm on $\mathbb{F}^{m \times n}$. That is, if $\|A\|$ denotes the norm of the matrix $A$, then

1. $\|A\| \ge 0$;
2. $\|A\| = 0$ if and only if $A = Z$ (the zero matrix);
3. $\|\alpha A\| = |\alpha|\, \|A\|$ for all $\alpha \in \mathbb{F}$ and $A \in \mathbb{F}^{m \times n}$;
4. $\|A + B\| \le \|A\| + \|B\|$ for all $A, B \in \mathbb{F}^{m \times n}$.

Additionally, in the case of square matrices (thus $m = n$), some (but not all) matrix norms satisfy the following condition, which is related to the fact that matrices are more than just vectors:

5. $\|A B\| \le \|A\|\, \|B\|$ for all matrices $A$ and $B$ in $\mathbb{F}^{n \times n}$.

A matrix norm that satisfies this additional property is called a sub-multiplicative norm.

Operator Norm

If vector norms on $\mathbb{F}^m$ and $\mathbb{F}^n$ are given ($\mathbb{F}$ is the field of real or complex numbers), then one defines the corresponding operator norm, or induced norm, on the space of $m \times n$ matrices as the following supremum:
$$\|A\| = \sup\{ \|Ax\| : x \in \mathbb{F}^n,\ \|x\| = 1 \} = \sup\left\{ \frac{\|Ax\|}{\|x\|} : x \in \mathbb{F}^n,\ x \ne \theta \right\}.$$

The operator norm corresponding to the $p$-norm for vectors is
$$\|A\|_p = \sup_{x \ne 0} \frac{\|Ax\|_p}{\|x\|_p}.$$

Note. All operator norms are sub-multiplicative. Here are some example matrix norms:

Column norm or 1-norm: $\|A\|_1 = \max_{1 \le j \le n} \sum_{i=1}^{m} |a_{ij}|$, which is simply the maximum absolute column sum of the matrix.

Row norm or infinity-norm: $\|A\|_\infty = \max_{1 \le i \le m} \sum_{j=1}^{n} |a_{ij}|$, which is simply the maximum absolute row sum of the matrix.

The 2-norm induced by the Euclidean vector norm is $\|A\|_2 = \sqrt{\lambda_{\max}}$, where $\lambda_{\max}$ is the largest eigenvalue of the matrix $A^* A$.

The Frobenius norm, defined as
$$\|A\|_F = \sqrt{ \sum_{i=1}^{m} \sum_{j=1}^{n} |a_{ij}|^2 } = \sqrt{ \mathrm{trace}\,(A^* A) },$$
is not an induced norm, but it is a sub-multiplicative norm.
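The three characterizations of the induced norms are easy to confirm numerically. A minimal MATLAB sketch, with an arbitrary test matrix, compares each formula with the corresponding built-in norm statement:

```matlab
% Induced matrix norms computed from the characterizations above,
% compared with MATLAB's norm; A is an arbitrary 2x3 test matrix.
A = [1 -2  3;
     4  0 -1];

disp([max(sum(abs(A), 1)),     norm(A, 1)])    % max absolute column sum
disp([max(sum(abs(A), 2)),     norm(A, inf)])  % max absolute row sum
disp([sqrt(max(eig(A' * A))),  norm(A, 2)])    % sqrt of largest eigenvalue of A*A
```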

In MATLAB, the Frobenius norm is invoked by the statement norm(A, 'fro').

The max norm $\|A\|_{\max} = \max_{i, j} |a_{ij}|$ is neither an induced norm nor a sub-multiplicative norm.

The largest modulus of an eigenvalue of a square matrix $A$, denoted by $\rho(A)$ and called the spectral radius of $A$, never exceeds its operator norm:
$$\rho(A) \le \|A\|_p.$$

Given two normed vector spaces $V$ and $W$ (over the same base field, either the real numbers or the complex numbers), a linear map $A : V \to W$ is continuous if and only if there exists a real number $\lambda$ such that
$$\|A v\| \le \lambda\, \|v\| \quad \text{for all } v \in V.$$

Condition Number

The condition number associated with the linear equation $A x = b$ gives a bound on how inaccurate the solution $x$ will be after approximation. Note that this is before the effects of round-off error are taken into account; conditioning is a property of the matrix, not of the algorithm or the floating-point accuracy of the computer used to solve the corresponding system.

Let $x_0$ be the exact solution of the linear equation $A x = b$ and let $\tilde{x}$ be the calculated solution, so the error in the solution is $e = \tilde{x} - x_0$, and the relative error in the solution is
$$\frac{\|e\|}{\|x_0\|} = \frac{\|\tilde{x} - x_0\|}{\|x_0\|}.$$
Generally, we cannot evaluate the error or the relative error (we do not know $x_0$), so a possible way to check the accuracy is to test it back in the system $A x = b$:
$$\text{residual: } r = A x_0 - A \tilde{x} = b - \tilde{b}, \qquad \text{relative residual: } \frac{\|A x_0 - A \tilde{x}\|}{\|A x_0\|} = \frac{\|b - \tilde{b}\|}{\|b\|} = \frac{\|r\|}{\|b\|},$$
where $\tilde{b} = A \tilde{x}$.

A matrix $A$ is said to be ill-conditioned if relatively small changes in the input (in the matrix $A$) can cause large changes in the output (the solution of $A x = b$), i.e., the solution is not very accurate if the input is rounded. Otherwise it is well-conditioned.

The condition number of an invertible matrix $A$ is defined to be
$$\kappa(A) = \|A\|\, \|A^{-1}\|.$$
This quantity is always greater than or equal to 1. The condition number is defined more precisely as the maximum ratio of the relative error in $x$ to the relative error in $b$. If $A$ is nonsingular, then by knowing the eigenvalues of $A^* A$ ($\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n > 0$), we can compute the condition number using the 2-norm:
$$\kappa_2(A) = \|A\|_2\, \|A^{-1}\|_2 = \sqrt{\frac{\lambda_1}{\lambda_n}}.$$
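A small MATLAB experiment makes these quantities concrete. The Hilbert matrix is our choice here as a standard example of an ill-conditioned matrix (it is not taken from the notes); since the exact solution is chosen in advance, both the relative error and the relative residual can be computed:

```matlab
% Relative error, relative residual, and condition number for an
% ill-conditioned system; hilb(6) is the 6x6 Hilbert matrix.
A  = hilb(6);
x0 = ones(6, 1);                  % chosen exact solution
b  = A * x0;

xt = A \ b;                       % calculated solution

rel_err = norm(xt - x0) / norm(x0);   % relative error (x0 is known here)
r       = b - A * xt;                 % residual
rel_res = norm(r) / norm(b);          % relative residual

disp([rel_err, rel_res, cond(A, 2)])  % cond(A, 2) = kappa_2(A)
```

Typically the relative residual is tiny even though the relative error is several orders of magnitude larger, which is exactly the gap that $\kappa(A)$ bounds.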

The smallest nonzero eigenvalue $\lambda_n$ of $A^* A$ is a measure of how close the matrix is to being singular: if $\lambda_n$ is small, then $\kappa_2(A)$ is large. Thus, the closer the matrix is to being singular, the more ill-conditioned it is.

Consistent Norms. A matrix norm $\|\cdot\|_{ab}$ on $\mathbb{F}^{m \times n}$ is called consistent with a vector norm $\|\cdot\|_a$ on $\mathbb{F}^n$ and a vector norm $\|\cdot\|_b$ on $\mathbb{F}^m$ if
$$\|A u\|_b \le \|A\|_{ab}\, \|u\|_a \quad \text{for all } A \in \mathbb{F}^{m \times n},\ u \in \mathbb{F}^n.$$
All induced norms are consistent by definition.

Compatible Norms. A matrix norm $\|\cdot\|_b$ on $\mathbb{F}^{n \times n}$ is called compatible with a vector norm $\|\cdot\|_a$ on $\mathbb{F}^n$ if
$$\|A x\|_a \le \|A\|_b\, \|x\|_a \quad \text{for all } A \in \mathbb{F}^{n \times n},\ x \in \mathbb{F}^n.$$
All induced norms are compatible by definition.

Equivalence of Norms. Two matrix norms $\|\cdot\|_\alpha$ and $\|\cdot\|_\beta$ are said to be equivalent if for all matrices $A \in \mathbb{F}^{m \times n}$ there exist some positive numbers $r$ and $s$ such that
$$r\, \|A\|_\alpha \le \|A\|_\beta \le s\, \|A\|_\alpha.$$

Examples of Norm Equivalence. For a matrix $A \in \mathbb{R}^{m \times n}$ of rank $r$, the following inequalities hold:
$$\|A\|_2 \le \|A\|_F \le \sqrt{r}\, \|A\|_2$$
$$\|A\|_F \le \|A\|_* \le \sqrt{r}\, \|A\|_F$$
$$\|A\|_{\max} \le \|A\|_2 \le \sqrt{mn}\, \|A\|_{\max}$$
$$\frac{1}{\sqrt{n}}\, \|A\|_\infty \le \|A\|_2 \le \sqrt{m}\, \|A\|_\infty$$
$$\frac{1}{\sqrt{m}}\, \|A\|_1 \le \|A\|_2 \le \sqrt{n}\, \|A\|_1$$
Here, $\|A\|_p$ refers to the matrix norm induced by the vector $p$-norm, and $\|A\|_*$ denotes the nuclear norm, the sum of the singular values of $A$.

Another useful inequality between matrix norms is
$$\|A\|_2 \le \sqrt{\|A\|_1\, \|A\|_\infty}.$$
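As a final spot check, the equivalence inequalities above can be tested on a random matrix in MATLAB; every displayed value should be logical 1 (the seed is fixed only for reproducibility):

```matlab
% Spot check of the norm-equivalence inequalities on a random 4x3 matrix.
rng(2);
m = 4;  n = 3;
A = randn(m, n);
r = rank(A);

disp(norm(A, 2) <= norm(A, 'fro') && norm(A, 'fro') <= sqrt(r) * norm(A, 2))
disp(max(abs(A(:))) <= norm(A, 2) && norm(A, 2) <= sqrt(m*n) * max(abs(A(:))))
disp(norm(A, inf)/sqrt(n) <= norm(A, 2) && norm(A, 2) <= sqrt(m) * norm(A, inf))
disp(norm(A, 1)/sqrt(m)   <= norm(A, 2) && norm(A, 2) <= sqrt(n) * norm(A, 1))
disp(norm(A, 2) <= sqrt(norm(A, 1) * norm(A, inf)))
```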