Inner products

Inner products

Definition: An inner product on a real vector space $V$ is an operation (function) that assigns to each pair of vectors $(u, v)$ in $V$ a scalar $\langle u, v \rangle$ satisfying the following axioms:

1. $\langle u, v \rangle = \langle v, u \rangle$ (Symmetry)
2. $\langle u + v, w \rangle = \langle u, w \rangle + \langle v, w \rangle$ (Additivity)
3. $\langle k u, v \rangle = k \langle u, v \rangle$ (Homogeneity)
4. $\langle v, v \rangle \geq 0$, and $\langle v, v \rangle = 0$ iff $v = 0$ (Positivity)

Theorem (basic properties): Given vectors $u, v, w$ in an inner product space $V$, and a scalar $k$, the following properties hold:

- $\langle 0, v \rangle = \langle v, 0 \rangle = 0$
- $\langle u - v, w \rangle = \langle u, w \rangle - \langle v, w \rangle$
- $\langle u, v + w \rangle = \langle u, v \rangle + \langle u, w \rangle$
- $\langle u, v - w \rangle = \langle u, v \rangle - \langle u, w \rangle$
- $\langle u, k v \rangle = k \langle u, v \rangle$
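
To make the axioms concrete, here is a minimal numpy sketch; the weighted pairing on $\mathbb{R}^2$ is an illustrative choice (not from the slides) that satisfies all four axioms because its weights are positive:

```python
import numpy as np

# Illustrative inner product on R^2: <u, v> = 3*u1*v1 + 2*u2*v2.
def inner(u, v):
    weights = np.array([3.0, 2.0])   # positive weights guarantee positivity
    return float(np.sum(weights * u * v))

u, v, z = np.array([1.0, 2.0]), np.array([-1.0, 4.0]), np.array([0.5, 3.0])
k = 2.5
assert np.isclose(inner(u, v), inner(v, u))                     # symmetry
assert np.isclose(inner(u + v, z), inner(u, z) + inner(v, z))   # additivity
assert np.isclose(inner(k * u, v), k * inner(u, v))             # homogeneity
assert inner(u, u) > 0 and inner(np.zeros(2), np.zeros(2)) == 0 # positivity
```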

Norm and distance in an inner product space

Definition: If $V$ is a real inner product space, then we define:

- The norm (or length) of $v$: $\|v\| = \sqrt{\langle v, v \rangle}$
- The distance between $u$ and $v$: $d(u, v) = \|u - v\| = \sqrt{\langle u - v, u - v \rangle}$

Theorem (basic properties): Given vectors $u, v$ in an inner product space $V$, and a scalar $k$, the following properties hold:

- $\|v\| \geq 0$, and $\|v\| = 0$ iff $v = 0$
- $\|k v\| = |k| \, \|v\|$
- $d(u, v) = d(v, u)$
- $d(u, v) \geq 0$, and $d(u, v) = 0$ iff $u = v$
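
As a quick numerical check, this sketch (using the standard dot product on $\mathbb{R}^n$, an illustrative choice) computes the induced norm and distance:

```python
import numpy as np

# Norm and distance induced by the standard dot product on R^n.
def norm(v):
    return np.sqrt(np.dot(v, v))    # ||v|| = sqrt(<v, v>)

def dist(u, v):
    return norm(u - v)              # d(u, v) = ||u - v||

u, v = np.array([3.0, 4.0]), np.array([0.0, 0.0])
print(norm(u))       # 5.0
print(dist(u, v))    # 5.0, and dist(v, u) gives the same value
```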

Angle between vectors

Theorem (Cauchy-Schwarz): If $u$ and $v$ are vectors in an inner product space, then
$|\langle u, v \rangle| \leq \|u\| \, \|v\|$

Definition: The angle between two nonzero vectors $u$ and $v$ in an inner product space is defined as
$\theta = \cos^{-1} \frac{\langle u, v \rangle}{\|u\| \, \|v\|}$

Theorem (the triangle inequality): If $u$, $v$ and $w$ are vectors in an inner product space, then

- $\|u + v\| \leq \|u\| + \|v\|$
- $d(u, v) \leq d(u, w) + d(w, v)$
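
The angle formula is well defined precisely because Cauchy-Schwarz bounds the quotient by 1 in absolute value; the sketch below (standard dot product, illustrative) clips only to guard against floating-point round-off:

```python
import numpy as np

# Angle between two vectors; Cauchy-Schwarz guarantees |c| <= 1,
# so np.clip only protects against round-off error.
def angle(u, v):
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(c, -1.0, 1.0))

u, v = np.array([1.0, 0.0]), np.array([1.0, 1.0])
print(np.degrees(angle(u, v)))   # 45.0
```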

Orthogonality

Definition: Two vectors $u$ and $v$ in an inner product space are called orthogonal (written $u \perp v$) if $\langle u, v \rangle = 0$. Clearly, for nonzero vectors, $u \perp v$ iff the angle between them is $\theta = \pi/2$.

Theorem (the Pythagorean theorem): If $u$ and $v$ are orthogonal vectors in an inner product space, then
$\|u + v\|^2 = \|u\|^2 + \|v\|^2$

Definition: Let $W$ be a subspace of an inner product space $V$. The set of vectors in $V$ which are orthogonal to every vector in $W$ is called the orthogonal complement of $W$, and it is denoted by $W^\perp$.

Theorem: The orthogonal complement has the following properties:

- $W^\perp$ is a subspace of $V$.
- $W \cap W^\perp = \{0\}$.
- If $V$ has finite dimension, then $(W^\perp)^\perp = W$.
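
One numerical way to exhibit $W^\perp$ is via the SVD (a standard numerical technique, not a method from the slides): the null space of a matrix whose rows span $W$ is exactly $W^\perp$. The sketch also checks the Pythagorean theorem on an orthogonal pair:

```python
import numpy as np

# W = span of the rows of M (here, the xy-plane in R^3).
# W-perp is the null space of M; the SVD exposes it.
M = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
_, s, Vt = np.linalg.svd(M)
rank = np.sum(s > 1e-10)
W_perp = Vt[rank:]            # rows spanning W-perp
print(W_perp)                 # [[0, 0, 1]] up to sign: the z-axis

# Pythagorean check for orthogonal u, v:
u, v = np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 2.0])
assert np.isclose(np.dot(u + v, u + v), np.dot(u, u) + np.dot(v, v))
```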

Orthogonal sets, orthonormal sets

Let $(V, \langle \cdot, \cdot \rangle)$ be an inner product space and let $S$ be a set of vectors in $V$.

Definition: The set $S$ is called orthogonal if any two distinct vectors in $S$ are orthogonal. The set $S$ is called orthonormal if it is orthogonal and every vector in $S$ has norm 1.

Theorem: Every orthogonal set of nonzero vectors is linearly independent.

Definition: A set of vectors $S$ is called an orthogonal basis (OGB) for $V$ if $S$ is a basis and an orthogonal set (that is, $S$ is a basis whose vectors are mutually perpendicular). A set of vectors $S$ is called an orthonormal basis (ONB) for $V$ if $S$ is a basis and an orthonormal set (that is, $S$ is a basis whose vectors are mutually perpendicular and have norm 1).

Orthogonal sets, orthonormal sets

Let $(V, \langle \cdot, \cdot \rangle)$ be an inner product space.

Theorem: If $S = \{v_1, v_2, \ldots, v_n\}$ is an orthogonal basis of $V$ and $u$ is any vector in $V$, then
$u = \frac{\langle u, v_1 \rangle}{\|v_1\|^2} v_1 + \frac{\langle u, v_2 \rangle}{\|v_2\|^2} v_2 + \cdots + \frac{\langle u, v_n \rangle}{\|v_n\|^2} v_n$

If $S = \{v_1, v_2, \ldots, v_n\}$ is an orthonormal basis of $V$ and $u$ is any vector in $V$, then
$u = \langle u, v_1 \rangle v_1 + \langle u, v_2 \rangle v_2 + \cdots + \langle u, v_n \rangle v_n$
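
A small sketch of the expansion formula in $\mathbb{R}^2$, with an orthogonal (but not normalized) basis chosen for illustration:

```python
import numpy as np

# Expand u in the orthogonal basis {v1, v2} of R^2.
v1, v2 = np.array([1.0, 1.0]), np.array([1.0, -1.0])
u = np.array([3.0, 5.0])

# Coefficients <u, v_i> / ||v_i||^2 from the theorem above.
coeffs = [np.dot(u, v) / np.dot(v, v) for v in (v1, v2)]
reconstructed = coeffs[0] * v1 + coeffs[1] * v2
print(coeffs)                        # [4.0, -1.0]
assert np.allclose(reconstructed, u)
```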

Projection onto a subspace

Let $(V, \langle \cdot, \cdot \rangle)$ be an inner product space and let $W$ be a finite-dimensional subspace.

Theorem: If $S = \{v_1, v_2, \ldots, v_r\}$ is an orthogonal basis of $W$ and $u$ is any vector in $V$, then
$\mathrm{proj}_W u = \frac{\langle u, v_1 \rangle}{\|v_1\|^2} v_1 + \frac{\langle u, v_2 \rangle}{\|v_2\|^2} v_2 + \cdots + \frac{\langle u, v_r \rangle}{\|v_r\|^2} v_r$

If $S = \{v_1, v_2, \ldots, v_r\}$ is an orthonormal basis of $W$ and $u$ is any vector in $V$, then
$\mathrm{proj}_W u = \langle u, v_1 \rangle v_1 + \langle u, v_2 \rangle v_2 + \cdots + \langle u, v_r \rangle v_r$
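
The same coefficients compute a projection; in this illustrative $\mathbb{R}^3$ sketch, the residual $u - \mathrm{proj}_W u$ comes out orthogonal to $W$, as expected:

```python
import numpy as np

# Project u onto the plane W spanned by the orthogonal basis {v1, v2}.
v1, v2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
u = np.array([2.0, 3.0, 7.0])

proj = sum(np.dot(u, v) / np.dot(v, v) * v for v in (v1, v2))
print(proj)                              # [2. 3. 0.]
# The residual is orthogonal to every basis vector of W:
assert np.isclose(np.dot(u - proj, v1), 0)
assert np.isclose(np.dot(u - proj, v2), 0)
```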

Gram-Schmidt process

Theorem: Every nonzero finite-dimensional inner product space has an orthonormal basis.

Given a basis $\{u_1, u_2, \ldots, u_n\}$, to find an orthogonal basis $\{v_1, v_2, \ldots, v_n\}$ we use the following procedure:

Step 1. $v_1 = u_1$
Step 2. $v_2 = u_2 - \frac{\langle u_2, v_1 \rangle}{\|v_1\|^2} v_1$
Step 3. $v_3 = u_3 - \frac{\langle u_3, v_1 \rangle}{\|v_1\|^2} v_1 - \frac{\langle u_3, v_2 \rangle}{\|v_2\|^2} v_2$
Step 4. $v_4 = u_4 - \frac{\langle u_4, v_1 \rangle}{\|v_1\|^2} v_1 - \frac{\langle u_4, v_2 \rangle}{\|v_2\|^2} v_2 - \frac{\langle u_4, v_3 \rangle}{\|v_3\|^2} v_3$

and so on for $n$ steps, where $n = \dim(V)$. To obtain an orthonormal basis, we simply normalize each vector of the orthogonal basis obtained above.
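
A direct translation of these steps into numpy (a sketch assuming the standard dot product; the function name is mine):

```python
import numpy as np

def gram_schmidt(U):
    """Orthonormalize the rows of U following the steps above:
    subtract from each u_k its projections onto the earlier v's,
    then normalize at the end."""
    V = []
    for u in U:
        v = u - sum(np.dot(u, w) / np.dot(w, w) * w for w in V)
        V.append(v)
    return np.array([v / np.linalg.norm(v) for v in V])

U = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
Q = gram_schmidt(U)
print(np.round(Q @ Q.T, 10))   # identity matrix: rows are orthonormal
```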

Formulation of the least squares problem

Given an inconsistent system $Ax = b$, find a vector $x$ that comes as close as possible to being a solution. In other words: find a vector $x$ that minimizes the distance between $b$ and $Ax$, that is, a vector that minimizes $\|b - Ax\|$ (with respect to the Euclidean inner product). We call such a vector $x$ a least squares solution to the system $Ax = b$. We call $b - Ax$ the corresponding least squares error vector, and $\|b - Ax\|$ the corresponding least squares error.

Theorem: If $x$ is a least squares solution to the inconsistent system $Ax = b$, and if $W$ is the column space of $A$, then $x$ is a solution to the consistent system
$Ax = \mathrm{proj}_W b$

Note: The above theorem is not always practical, because finding the orthogonal projection $\mathrm{proj}_W b$ may take time (e.g., by first running Gram-Schmidt on a basis of $W$).

Solution of the least squares problem

Theorem: For every inconsistent system $Ax = b$, the associated normal system
$A^T A x = A^T b$
is consistent, and its solutions are the least squares solutions of $Ax = b$. Moreover, if $W$ is the column space of $A$ and $x$ is such a least squares solution to $Ax = b$, then
$\mathrm{proj}_W b = Ax$

Theorem: For an inconsistent system $Ax = b$, the following statements are equivalent:

a) There is a unique least squares solution.
b) The columns of $A$ are linearly independent.
c) The matrix $A^T A$ is invertible.

Theorem: If an inconsistent system $Ax = b$ has a unique least squares solution, then it can be computed as
$x = (A^T A)^{-1} A^T b$
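
A worked sketch of the normal equations on illustrative data (three non-collinear points, so the system is inconsistent); np.linalg.solve is used instead of forming $(A^T A)^{-1}$ explicitly, which is numerically preferable:

```python
import numpy as np

# Least squares fit of a line y = c0 + c1*x to three points (x, y) =
# (0, 6), (1, 0), (2, 0); the points are not collinear, so Ax = b
# is inconsistent.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([6.0, 0.0, 0.0])

# Solve the normal system A^T A x = A^T b; the columns of A are
# linearly independent, so the solution is unique.
x = np.linalg.solve(A.T @ A, A.T @ b)
print(x)        # [ 5. -3.]: the fitted line is y = 5 - 3x
print(A @ x)    # the projection of b onto the column space of A
```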

Function approximation

Problem: Given a function $f$ on the interval $[a, b]$ and a subspace $W$ of $C[a, b]$, find the best approximation of $f$ by a function $g$ in $W$. "Best approximation" is meant as minimizing the mean square error, where
$\text{mean square error} = \int_a^b [f(x) - g(x)]^2 \, dx$

If we consider the (standard) inner product on $C[a, b]$, defined by
$\langle f_1, f_2 \rangle = \int_a^b f_1(x) f_2(x) \, dx$
and the corresponding norm, then it is easy to see that
$\text{mean square error} = \langle f - g, f - g \rangle = \|f - g\|^2$

Therefore, the approximation problem can be reformulated as: find a function $g$ in $W$ that minimizes $\|f - g\|^2$.

Solution: The best approximation of $f$ by a function in $W$ is $g = \mathrm{proj}_W f$.
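
A sketch of this recipe with an illustrative choice of $f$ and $W$: approximate $f(x) = x^2$ on $[-1, 1]$ in $W = \mathrm{span}\{1, x\}$, whose basis is orthogonal under the integral inner product (numerical integration via scipy):

```python
import numpy as np
from scipy.integrate import quad

# Integral inner product <g, h> = int_a^b g(x) h(x) dx.
def ip(g, h, a=-1.0, b=1.0):
    return quad(lambda x: g(x) * h(x), a, b)[0]

f = lambda x: x**2
basis = [lambda x: 1.0, lambda x: x]   # orthogonal on [-1, 1]

# Projection coefficients <f, v> / <v, v> for each basis function.
coeffs = [ip(f, v) / ip(v, v) for v in basis]
print(coeffs)   # approx [1/3, 0]: the best approximation is g(x) = 1/3
```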

Fourier series

We want to approximate functions by trigonometric polynomials of a given order. In this case, the subspace $W$ is $T_n$, the set of all trigonometric polynomials of order $n$. By definition,
$T_n = \mathrm{span}\{1, \cos x, \cos 2x, \ldots, \cos nx, \sin x, \sin 2x, \ldots, \sin nx\}$

The trigonometric functions above that span $T_n$ are orthogonal (with respect to the integral inner product on $[0, 2\pi]$), so the set $\{1, \cos x, \cos 2x, \ldots, \cos nx, \sin x, \sin 2x, \ldots, \sin nx\}$ forms an orthogonal basis of $T_n$. Therefore, to compute $\mathrm{proj}_{T_n} f$, we can use the formula on the slide "Projection onto a subspace", and we get:
$f(x) \approx \mathrm{proj}_{T_n} f = \frac{a_0}{2} + [a_1 \cos x + a_2 \cos 2x + \cdots + a_n \cos nx] + [b_1 \sin x + b_2 \sin 2x + \cdots + b_n \sin nx]$
where for $k = 0, 1, \ldots, n$ the numbers $a_k$ and $b_k$ are called the Fourier coefficients of $f$, and they are computed as
$a_k = \frac{1}{\pi} \int_0^{2\pi} f(x) \cos kx \, dx \qquad b_k = \frac{1}{\pi} \int_0^{2\pi} f(x) \sin kx \, dx$
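
A sketch computing the coefficient formulas numerically for the illustrative choice $f(x) = x$, whose Fourier coefficients on $[0, 2\pi]$ have a known closed form ($a_0 = 2\pi$, $a_k = 0$ for $k \geq 1$, $b_k = -2/k$):

```python
import numpy as np
from scipy.integrate import quad

# Fourier coefficients a_k, b_k of f on [0, 2*pi], up to order n.
f = lambda x: x
n = 3

a = [quad(lambda x: f(x) * np.cos(k * x), 0, 2 * np.pi)[0] / np.pi
     for k in range(n + 1)]
b = [quad(lambda x: f(x) * np.sin(k * x), 0, 2 * np.pi)[0] / np.pi
     for k in range(1, n + 1)]
print(np.round(a, 6))   # [6.283185, 0, 0, 0]  (a_0 = 2*pi)
print(np.round(b, 6))   # [-2., -1., -0.666667] (b_k = -2/k)
```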