Length, Angle and the Inner Product

The length (or norm) of a vector $v \in \mathbb{R}^2$ (viewed as connecting the origin to the point $(v_1, v_2)$) is easily determined by the Pythagorean Theorem and is denoted $\|v\|$:
$$\|v\| = \sqrt{v_1^2 + v_2^2}.$$
Two successive applications of this idea give the length of a vector $v \in \mathbb{R}^3$:
$$\|v\| = \sqrt{v_1^2 + v_2^2 + v_3^2}.$$
By analogy then, we define the length of a vector $v \in \mathbb{R}^n$ to be
$$\|v\| = \sqrt{v_1^2 + v_2^2 + \cdots + v_n^2}.$$
Notice that for any scalar $c$, we have $\|cv\| = |c|\,\|v\|$ (why?); in particular, if $v$ is not the zero vector (so that $\|v\| \neq 0$), we can take our constant multiplier to be $c = 1/\|v\|$, whence

$$\left\|\frac{v}{\|v\|}\right\| = \frac{1}{\|v\|}\,\|v\| = 1.$$
That is, the vector $u = \dfrac{v}{\|v\|}$ is a unit vector, one whose length equals 1. The process of scalar multiplying $v$ by the reciprocal of its length to produce a unit vector is called normalizing $v$.

Given two nonzero vectors $u$ and $v$ (in either $\mathbb{R}^2$ or $\mathbb{R}^3$), if we picture these two vectors as positioned at the origin, then the triangle having these vectors as two sides has $u - v$ (or $v - u$) as the third side. It makes sense, therefore, to define the distance between the vectors $u$ and $v$ to be the length of the vector $u - v$.

The Law of Cosines, which generalizes the Pythagorean Theorem, says that any triangle with side lengths $a$, $b$, and $c$ satisfies
$$c^2 = a^2 + b^2 - 2ab\cos\theta,$$
where $\theta$ is the angle between sides $a$ and $b$ (i.e., the angle opposite side $c$). (To prove this, introduce $h$, the height of the altitude on side $c$, and apply the Pythagorean Theorem to the two right triangles made by $h$ and $a$ or by $h$ and $b$.)
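As a quick illustration (a minimal NumPy sketch, not part of the original notes; the vectors are chosen arbitrarily), the length of a vector, the unit vector obtained by normalizing it, and the distance between two vectors all follow directly from these definitions:

```python
import numpy as np

v = np.array([3.0, 4.0])
length = np.sqrt(np.sum(v**2))    # ||v||, same as np.linalg.norm(v)
u = v / length                    # normalizing v produces a unit vector

print(length)                     # 5.0
print(np.linalg.norm(u))          # 1.0

# distance between two vectors = length of their difference
w = np.array([0.0, 1.0])
dist = np.linalg.norm(v - w)      # ||v - w||
```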

This relation allows us to determine the angle $\theta$ made between the two vectors $u$ and $v$, since we obtain
$$\|u - v\|^2 = \|u\|^2 + \|v\|^2 - 2\,\|u\|\,\|v\|\cos\theta,$$
so that
$$\begin{aligned}
2\,\|u\|\,\|v\|\cos\theta &= \|u\|^2 + \|v\|^2 - \|u - v\|^2 \\
&= (u_1^2 + u_2^2) + (v_1^2 + v_2^2) - \big([u_1 - v_1]^2 + [u_2 - v_2]^2\big) \\
&= 2(u_1 v_1 + u_2 v_2)
\end{aligned}$$
(where in $\mathbb{R}^3$ we need an extra component with subscript 3 in each term above). Thus,
$$\|u\|\,\|v\|\cos\theta = u_1 v_1 + u_2 v_2, \qquad\text{or}\qquad \cos\theta = \frac{u_1 v_1 + u_2 v_2}{\|u\|\,\|v\|}.$$
Again, this gives us the means to define by analogy the angle between any pair of nonzero vectors $u$ and $v$ in $\mathbb{R}^n$ to be the value of $\theta$ (between $0$ and $\pi$) that satisfies
$$\cos\theta = \frac{u_1 v_1 + u_2 v_2 + \cdots + u_n v_n}{\|u\|\,\|v\|}.$$

Notice that the formulas we have developed for the length of $v$ and the angle between $u$ and $v$,
$$\|v\| = \sqrt{v_1^2 + v_2^2 + \cdots + v_n^2}, \qquad \cos\theta = \frac{u_1 v_1 + u_2 v_2 + \cdots + u_n v_n}{\|u\|\,\|v\|},$$
can be simplified by making use of the fact that
$$u^T v = \begin{bmatrix} u_1 & u_2 & \cdots & u_n \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix} = u_1 v_1 + u_2 v_2 + \cdots + u_n v_n.$$
We turn this expression into a scalar product of the vectors $u$ and $v$, also called the inner product or dot product, denoting it with a raised dot:
$$u \cdot v = u^T v = u_1 v_1 + u_2 v_2 + \cdots + u_n v_n.$$
We can now more simply write the formulas for length and angle as
$$\|v\| = \sqrt{v \cdot v} \qquad\text{and}\qquad \cos\theta = \frac{u \cdot v}{\|u\|\,\|v\|}.$$
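A short NumPy sketch of these formulas (illustrative only; the vectors below are made up for the example) computes the dot product as $u^T v$ and then recovers the length and angle from it:

```python
import numpy as np

u = np.array([1.0, 3.0, -2.0])
v = np.array([4.0, 0.0, 5.0])

dot = u @ v                        # u . v = u^T v = sum of u_i * v_i
length_v = np.sqrt(v @ v)          # ||v|| = sqrt(v . v)

# angle between u and v, taken in [0, pi]
cos_theta = (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
theta = np.arccos(cos_theta)
```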

Theorem. If $u, v, w \in \mathbb{R}^n$ and $c$ is a scalar, then
(1) $u \cdot v = v \cdot u$ [the inner product is commutative];
(2) $(u + v) \cdot w = u \cdot w + v \cdot w$ [the inner product distributes over vector addition];
(3) $(cu) \cdot v = c(u \cdot v) = u \cdot (cv)$ [the inner product is compatible with scalar multiplication];
(4) $u \cdot u \geq 0$, and more specifically, $u \cdot u = 0$ if and only if $u = 0$.

Proof. (1)-(3) are clear:
$$u \cdot v = u_1 v_1 + u_2 v_2 + \cdots + u_n v_n = v_1 u_1 + v_2 u_2 + \cdots + v_n u_n = v \cdot u$$
$$\begin{aligned}
(u + v) \cdot w &= (u_1 + v_1)w_1 + (u_2 + v_2)w_2 + \cdots + (u_n + v_n)w_n \\
&= (u_1 w_1 + u_2 w_2 + \cdots + u_n w_n) + (v_1 w_1 + v_2 w_2 + \cdots + v_n w_n) \\
&= u \cdot w + v \cdot w
\end{aligned}$$
$$\begin{aligned}
(cu) \cdot v &= (cu_1)v_1 + (cu_2)v_2 + \cdots + (cu_n)v_n = c(u_1 v_1 + u_2 v_2 + \cdots + u_n v_n) = c(u \cdot v) \\
&= u_1(cv_1) + u_2(cv_2) + \cdots + u_n(cv_n) = u \cdot (cv)
\end{aligned}$$

For (4), we simply note that
$$u \cdot u = u_1^2 + u_2^2 + \cdots + u_n^2 \geq 0,$$
where equality can only occur if each of the components of $u$ is 0. //

Corollary. The inner product is linear in the first factor, i.e., if $u_1, u_2, \ldots, u_m, v \in \mathbb{R}^n$ and $c_1, c_2, \ldots, c_m$ are scalars, then
$$(c_1 u_1 + c_2 u_2 + \cdots + c_m u_m) \cdot v = c_1(u_1 \cdot v) + c_2(u_2 \cdot v) + \cdots + c_m(u_m \cdot v).$$
(By commutativity, it follows that the inner product is linear in the second factor as well.) //

In $\mathbb{R}^2$, lines are perpendicular when the angle between them is a right angle. If $u$ and $v$ are nonzero vectors in $\mathbb{R}^2$, then the angle $\theta$ between them is right if and only if
$$0 = \cos\theta = \frac{u \cdot v}{\|u\|\,\|v\|} \iff u \cdot v = 0.$$

We use this notion to generalize the concept of perpendicularity to any Euclidean space: we say that vectors $u, v \in \mathbb{R}^n$ are orthogonal if $u \cdot v = 0$.

Theorem [The Pythagorean Theorem in $\mathbb{R}^n$]. Two vectors $u, v \in \mathbb{R}^n$ are orthogonal if and only if
$$\|u + v\|^2 = \|u\|^2 + \|v\|^2.$$

Proof. As
$$\begin{aligned}
\|u + v\|^2 &= (u + v) \cdot (u + v) = u \cdot (u + v) + v \cdot (u + v) \\
&= u \cdot u + u \cdot v + v \cdot u + v \cdot v = \|u\|^2 + \|v\|^2 + 2\,u \cdot v,
\end{aligned}$$
then $u, v \in \mathbb{R}^n$ are orthogonal $\iff u \cdot v = 0 \iff \|u + v\|^2 = \|u\|^2 + \|v\|^2$. //
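A small numerical check of the definition and the theorem (a sketch, not from the notes; $u$ and $v$ below are chosen so that $u \cdot v = 0$):

```python
import numpy as np

u = np.array([1.0, 2.0, 2.0])
v = np.array([2.0, 1.0, -2.0])

print(u @ v)                                        # 0.0, so u and v are orthogonal

lhs = np.linalg.norm(u + v)**2                      # ||u + v||^2
rhs = np.linalg.norm(u)**2 + np.linalg.norm(v)**2   # ||u||^2 + ||v||^2
print(np.isclose(lhs, rhs))                         # True: the Pythagorean relation holds
```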

Orthogonal Complements

Orthogonality of vectors extends to a property of entire vector spaces. If $W$ is a subspace of $\mathbb{R}^n$, then the set of all vectors $x$ in $\mathbb{R}^n$ which are orthogonal to every vector in $W$ is called the orthogonal complement of $W$ and is denoted $W^\perp$ (often read "$W$ perp"). This may seem to be a very hard property to check: how can one determine whether a vector $x$ is orthogonal to every vector in the vector space $W$ (especially since vector spaces generally contain infinitely many vectors)? Luckily, this is not a serious problem because of the following theorem.

Theorem. A vector $x$ is orthogonal to every vector in the vector space $W$ if and only if it is orthogonal to each vector in a basis for $W$.

Proof. Let $W$ have basis $\mathcal{B} = \{b_1, b_2, \ldots, b_m\}$. Then it is clear that if $x$ is orthogonal to every vector in $W$, it must be orthogonal to every one of the $b$'s. Conversely, if $x$ is orthogonal to every one of the $b$'s, then since every $w \in W$ has a representation of the form $w = c_1 b_1 + \cdots + c_m b_m$ for suitable scalars $c_1, \ldots, c_m$, we have

$$w \cdot x = (c_1 b_1 + \cdots + c_m b_m) \cdot x = c_1(b_1 \cdot x) + \cdots + c_m(b_m \cdot x) = 0,$$
whence $x$ is orthogonal to every vector in $W$. //

More important for the theory are the following properties:

Theorem. If $W$ is a subspace of $\mathbb{R}^n$, then so is $W^\perp$.

Proof. Left as an exercise (#30, p. 383). //

One important situation in which orthogonal complements of vector spaces arise naturally is one with which we are already familiar:

Theorem. Let $A$ be an $m \times n$ matrix. Then $(\operatorname{Row} A)^\perp = \operatorname{Nul} A$ and $(\operatorname{Col} A)^\perp = \operatorname{Nul} A^T$.

Proof. The vector $x \in \mathbb{R}^n$ lies in $\operatorname{Nul} A$ if and only if $Ax = 0$; but this matrix equation is equivalent to the vector equations
$$r_1 \cdot x = 0,\quad r_2 \cdot x = 0,\quad \ldots,\quad r_m \cdot x = 0,$$

where $r_1, r_2, \ldots, r_m$ are the $m$ row vectors of $A$, which span $\operatorname{Row} A$. Since a subset of these vectors forms a basis for $\operatorname{Row} A$, it follows that $x$ lies in $\operatorname{Nul} A$ if and only if $x$ lies in $(\operatorname{Row} A)^\perp$. Similarly, the vector $y \in \mathbb{R}^m$ lies in $\operatorname{Nul} A^T$ if and only if $A^T y = 0 \iff y^T A = 0^T$ (by taking the transpose of both sides); this last matrix equation is equivalent to the vector equations
$$y^T c_1 = 0,\quad y^T c_2 = 0,\quad \ldots,\quad y^T c_n = 0,$$
where $c_1, c_2, \ldots, c_n$ are the $n$ column vectors of $A$, which span $\operatorname{Col} A$. Since a subset of these vectors forms a basis for $\operatorname{Col} A$, it follows that $y$ lies in $\operatorname{Nul} A^T$ if and only if $y$ lies in $(\operatorname{Col} A)^\perp$. //
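A quick numerical illustration of $(\operatorname{Row} A)^\perp = \operatorname{Nul} A$ (a sketch, not part of the original notes; the matrix and the use of the SVD to extract a null-space basis are choices made just for this example): every vector in $\operatorname{Nul} A$ is orthogonal to every row of $A$, and hence lies in $(\operatorname{Row} A)^\perp$.

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 1.0]])

# Null space of A from the SVD: the right singular vectors beyond rank(A) span Nul A.
_, s, Vt = np.linalg.svd(A)
rank = np.sum(s > 1e-10)
N = Vt[rank:].T                  # columns of N span Nul A

# Every row of A is orthogonal to every column of N.
print(np.allclose(A @ N, 0))     # True: Nul A sits inside (Row A)^perp
```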

Orthogonal and Orthonormal Sets

Any finite set of vectors $S = \{u_1, u_2, \ldots, u_k\}$ is called an orthogonal set if every pair of vectors in $S$ is orthogonal to each other, i.e., $u_i \cdot u_j = 0$ whenever $i \neq j$. The most obvious example of an orthogonal set of vectors in $\mathbb{R}^n$ is given by the standard basis $\mathcal{E} = \{e_1, e_2, \ldots, e_n\}$. Indeed, this leads to the following definition: a basis for a vector space is called an orthogonal basis if it is an orthogonal set of vectors.

Orthogonal bases have particularly nice properties. For instance, it is easy to determine the weights needed to express a vector in terms of an orthogonal basis:

Theorem. If $\mathcal{B} = \{u_1, u_2, \ldots, u_k\}$ is an orthogonal basis for a subspace $W$ of $\mathbb{R}^n$, then the (unique) weights that express a vector $w \in W$ in terms of the basis vectors in $\mathcal{B}$, namely the $c$'s in the equation $w = c_1 u_1 + \cdots + c_k u_k$, are given by the formula
$$c_i = \frac{w \cdot u_i}{u_i \cdot u_i} \qquad (i = 1, 2, \ldots, k).$$

Proof. Choose any $i$ between 1 and $k$. Then

$$w \cdot u_i = (c_1 u_1 + \cdots + c_k u_k) \cdot u_i = c_1(u_1 \cdot u_i) + \cdots + c_k(u_k \cdot u_i) = c_i(u_i \cdot u_i),$$
from which the formula follows directly (recognizing that $u_i \cdot u_i \neq 0$ since $u_i$ is a basis vector, hence cannot equal $0$). //
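A small NumPy sketch of the weight formula (an illustration, not from the notes; the orthogonal basis and the vector $w$ below are chosen just for the example):

```python
import numpy as np

# An orthogonal basis {u1, u2} for a plane W in R^3 (note u1 . u2 = 0).
u1 = np.array([1.0, 1.0, 0.0])
u2 = np.array([1.0, -1.0, 0.0])

w = 3 * u1 - 2 * u2                            # some vector in W

c1 = (w @ u1) / (u1 @ u1)                      # weight on u1
c2 = (w @ u2) / (u2 @ u2)                      # weight on u2

print(c1, c2)                                  # 3.0 -2.0
print(np.allclose(w, c1 * u1 + c2 * u2))       # True: the weights reconstruct w
```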