Vectors

Vectors and the scalar multiplication and vector addition operations:
$$x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} \in \mathbb{R}^n, \qquad 2\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} + 3\begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix} = \begin{pmatrix} 2x_1 + 3y_1 \\ 2x_2 + 3y_2 \\ \vdots \\ 2x_n + 3y_n \end{pmatrix}$$
I'll use the two terms vector and point interchangeably: any point in $\mathbb{R}^n$ corresponds to a vector starting from the origin and ending at that point.
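As a quick illustration, here is a minimal NumPy sketch of these componentwise operations (the vectors and scalars are arbitrary example values):

```python
import numpy as np

# Two vectors in R^3 (arbitrary example values)
x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])

# Scalar multiplication and vector addition act componentwise
z = 2 * x + 3 * y
print(z)  # [14. 19. 24.] = (2*1+3*4, 2*2+3*5, 2*3+3*6)
```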

The inner (or dot) product of two vectors is defined to be
$$u^t v = \sum_i u_i v_i = \|u\| \|v\| \cos(\theta),$$
where $\|u\|$ denotes the norm of a vector,
$$\|u\| = \sqrt{u^t u} = \sqrt{\sum_i u_i^2},$$
and $\theta$ is the angle between the two vectors. A unit vector is a vector whose norm is 1. When two vectors are orthogonal, we mean $\cos(\theta) = 0$, and therefore $u^t v = 0$, denoted by $u \perp v$. The Euclidean distance between two vectors $u$ and $v$ is $\|u - v\|$.
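To make these definitions concrete, here is a small NumPy sketch; the two example vectors are chosen so that they happen to be orthogonal:

```python
import numpy as np

u = np.array([1.0, 2.0, 2.0])
v = np.array([2.0, 0.0, -1.0])

dot = u @ v                           # u^t v = sum_i u_i v_i
norm_u = np.sqrt(u @ u)               # ||u|| = sqrt(u^t u) = 3
cos_theta = dot / (norm_u * np.linalg.norm(v))
dist = np.linalg.norm(u - v)          # Euclidean distance ||u - v||

print(dot)        # 0.0, so u is orthogonal to v
print(cos_theta)  # 0.0
```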

Linear Combinations & Spans

A linear combination of vectors $x_1, \ldots, x_p$ is
$$b_1 x_1 + b_2 x_2 + \cdots + b_p x_p, \qquad b_1, \ldots, b_p \in \mathbb{R}.$$
$\mathrm{Span}(x_1, \ldots, x_p)$ denotes the set of all linear combinations of the $p$ vectors $x_1, \ldots, x_p$. Concatenate the $p$ vectors into a matrix $X_{n \times p} = (x_1 \ \cdots \ x_p)$. The column space of $X$ is $C(X) = \mathrm{Span}(x_1, \ldots, x_p)$. Any vector in $C(X)$ can be written as $X_{n \times p} b_{p \times 1}$, where $b = (b_1, \ldots, b_p)^t$.
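In code, a linear combination is just a matrix-vector product. A minimal sketch with two arbitrary vectors in $\mathbb{R}^3$:

```python
import numpy as np

x1 = np.array([1.0, 0.0, 1.0])
x2 = np.array([0.0, 1.0, 1.0])

X = np.column_stack([x1, x2])     # X_{n x p} = (x1 x2), here 3 x 2
b = np.array([2.0, -1.0])

# The linear combination b1*x1 + b2*x2 is the product X b, a vector in C(X)
print(np.allclose(X @ b, 2*x1 - x2))  # True
```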

Linear Subspace

Let $M$ be a collection of vectors from $\mathbb{R}^n$. $M$ is a linear subspace if $M$ is closed under linear combinations; that is, if $u, v \in M$, then $au + bv \in M$ for any $a, b \in \mathbb{R}$. You can imagine a linear subspace as a bag of vectors: for any two vectors in that bag, say $u$ and $v$ (the two vectors could be the same, i.e., you are allowed to create copies of vectors in that bag), any linear combination of them, say $u - 2v$, must also be in that bag. In particular, $u - u = 0$, so $0$ is in every linear subspace (i.e., every linear subspace passes through the origin).

Examples of vector spaces

- $\{0\}$ (the smallest subspace)
- $\mathbb{R}^n$ (the largest one)
- The collection of all points on a line passing through the origin
- The span of $(v_1, \ldots, v_m)$ is a linear subspace
- The column space of a matrix $X_{n \times p}$, $C(X)$. Note that if we change the order of the columns of $X$, the column space $C(X)$ remains the same.
- If $M$ and $N$ are vector spaces, so are $M \cap N$ and $M + N = \{u + v : u \in M, v \in N\}$. But $M \cup N$ may not be.

Replacement Rule

Given $p$ vectors $x_1, \ldots, x_p$, define $z_1 = a_1 x_1 + a_2 x_2 + \cdots + a_p x_p$. If $a_1 \neq 0$, then the following two subspaces are the same:
$$\mathrm{Span}(x_1, x_2, \ldots, x_p) = \mathrm{Span}(z_1, x_2, \ldots, x_p).$$
Similarly, let $\tilde{X}$ be a new matrix which is the same as $X$ except that we replace the $j$th column by a linear combination,
$$\tilde{X}[, j] = a_j X[, j] + \sum_{i \neq j} a_i X[, i].$$
If $a_j \neq 0$, then $C(\tilde{X}) = C(X)$.
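One way to check the replacement rule numerically is to compare ranks: $C(\tilde{X}) = C(X)$ exactly when stacking the two matrices side by side does not increase the rank. A sketch with a random matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))

# Replace column 0 by 2*X[:,0] + 3*X[:,1] - X[:,2]; the coefficient on
# column 0 is 2 != 0, so the column space should be unchanged
X_tilde = X.copy()
X_tilde[:, 0] = 2*X[:, 0] + 3*X[:, 1] - X[:, 2]

r = np.linalg.matrix_rank
print(r(np.hstack([X, X_tilde])) == r(X) == r(X_tilde))  # True
```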

Orthogonality

Vector $\perp$ Vector: $u \perp v$ if $u^t v = 0$.

Vector $\perp$ Subspace: $u \perp M$ if $u$ is orthogonal to every vector in $M$. For example, if $u$ is orthogonal to each column of a matrix $X$, then we have $u \perp C(X)$.

Subspace $\perp$ Subspace: similarly we can define $M \perp N$ if every vector from $M$ is orthogonal to every vector from $N$.

Define the orthogonal complement of a subspace $M$ as
$$M^{\perp} = \{x \in \mathbb{R}^n : x \perp M\}.$$
It is easy to show that $M^{\perp}$ is a subspace. Clearly $M \perp M^{\perp}$.
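The claim that $u \perp C(X)$ reduces to checking $u$ against each column of $X$ can be verified directly. A minimal sketch where $C(X)$ is the xy-plane in $\mathbb{R}^3$:

```python
import numpy as np

X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])       # C(X) is the xy-plane in R^3
u = np.array([0.0, 0.0, 5.0])    # a vector along the z-axis

# u is orthogonal to C(X) iff u is orthogonal to every column of X,
# i.e. X^t u = 0
print(np.allclose(X.T @ u, 0))   # True
```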

Linear Independence

A set of vectors $v_1, \ldots, v_m$ is said to be linearly independent if
$$c_1 v_1 + \cdots + c_m v_m = 0 \iff c_1 = \cdots = c_m = 0.$$
Otherwise they are linearly dependent. In other words, if a set of vectors is linearly independent, then none of them can be expressed as a linear combination of the others; if they are linearly dependent, then there exists one vector, say $v_2$, which can be written as a linear combination of $v_1, v_3, \ldots, v_m$.
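Numerically, a convenient test is that the vectors are linearly independent iff the matrix having them as columns has rank equal to the number of vectors. A sketch with a deliberately dependent set:

```python
import numpy as np

v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = v1 + 2*v2                   # v3 is a combination of v1 and v2

V = np.column_stack([v1, v2, v3])
# Independent iff rank(V) equals the number of columns
print(np.linalg.matrix_rank(V) == V.shape[1])  # False: dependent
```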

Linear Independence and Bases

A set of vectors $\{u_1, \ldots, u_m\}$ is a basis for a subspace $M$ if

1. $\mathrm{Span}(u_1, \ldots, u_m) = M$, and
2. $u_1, \ldots, u_m$ are linearly independent.

That is, a basis is a set of vectors that spans a linear subspace $M$ without redundancy. The basis is an orthonormal basis (ONB) if, in addition, the $u_j$'s are unit vectors and orthogonal to each other.

If $\{u_1, \ldots, u_m\}$ is a basis for a subspace $M$, then any vector in $M$ can be uniquely represented as a linear combination of the $u_i$'s. That is, if we can write a vector $v$ as
$$v = c_1 u_1 + c_2 u_2 + \cdots + c_m u_m$$
and also
$$v = a_1 u_1 + a_2 u_2 + \cdots + a_m u_m,$$
then $c_i = a_i$ for all $i = 1, \ldots, m$.

Bases are not unique. That is, a linear subspace $M$ has more than one basis, but the number of vectors in each basis is the same, which is the rank/dimension of $M$. If $M$ has dimension $p$, then any set of more than $p$ vectors from $M$ is linearly dependent, and any set of $p$ linearly independent vectors in $M$ forms a basis for $M$.
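To illustrate the unique representation, we can recover the coefficients of a vector in $M$ by solving against the basis matrix. A small sketch with a hypothetical two-dimensional subspace of $\mathbb{R}^3$:

```python
import numpy as np

# Columns of U form a basis for a 2-dimensional subspace M of R^3
U = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
v = U @ np.array([3.0, -1.0])    # a vector in M with coefficients (3, -1)

# Recover the (unique) coefficients c with U c = v
c, *_ = np.linalg.lstsq(U, v, rcond=None)
print(c)                         # [ 3. -1.]
```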

Gram-Schmidt Procedure

Given a basis $\{x_1, \ldots, x_p\}$, we can use the Gram-Schmidt algorithm to obtain an ONB:
$$u_1 = x_1, \qquad e_1 = \frac{u_1}{\|u_1\|}$$
$$u_2 = x_2 - (x_2^t e_1) e_1, \qquad e_2 = \frac{u_2}{\|u_2\|}$$
$$u_{k+1} = x_{k+1} - \sum_{j=1}^{k} (x_{k+1}^t e_j) e_j, \qquad e_{k+1} = \frac{u_{k+1}}{\|u_{k+1}\|}$$
The resulting ONB is $\{e_1, \ldots, e_p\}$.
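A direct NumPy implementation of the procedure above, as a sketch assuming the columns of X are linearly independent:

```python
import numpy as np

def gram_schmidt(X):
    """Orthonormalize the columns of X, assumed linearly independent."""
    n, p = X.shape
    E = np.zeros((n, p))
    for k in range(p):
        u = X[:, k].copy()
        for j in range(k):
            # subtract the projection of x_{k+1} onto e_j
            u -= (X[:, k] @ E[:, j]) * E[:, j]
        E[:, k] = u / np.linalg.norm(u)   # normalize to a unit vector
    return E

X = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])
E = gram_schmidt(X)
print(np.allclose(E.T @ E, np.eye(2)))    # True: columns form an ONB
```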

OLS Solution

Consider a linear model
$$y_i = x_{i1} \beta_1 + \cdots + x_{ip} \beta_p + \mathrm{err}_i, \qquad i = 1, \ldots, n.$$
Using the LS principle, we aim to find $\beta = (\beta_1, \ldots, \beta_p)^t$ which minimizes
$$\sum_{i=1}^{n} (y_i - x_{i1} \beta_1 - \cdots - x_{ip} \beta_p)^2.$$
In matrix form, we can write the linear model as
$$y_{n \times 1} = X_{n \times p} \beta_{p \times 1} + e,$$
and solve
$$\min_{\beta \in \mathbb{R}^p} \|y - X\beta\|^2. \tag{1}$$
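In practice the minimization (1) can be handed to a least-squares solver. A minimal sketch on simulated data (the true coefficients and noise level are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 50, 3
X = rng.standard_normal((n, p))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + 0.1 * rng.standard_normal(n)

# Solve min_beta ||y - X beta||^2
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta_hat)   # close to [1.0, -2.0, 0.5]
```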

The LS optimization
$$\min_{\beta \in \mathbb{R}^p} \|y - X\beta\|^2$$
is equivalent to finding a vector $z^*$ in $C(X)$ such that
$$\|y - z^*\|^2 = \min_{z \in C(X)} \|y - z\|^2.$$
Once we have solved for $z^*$, we then go back to find its representation $\hat{\beta}$.
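The geometric picture can be checked numerically: $z^* = X\hat{\beta}$ is the orthogonal projection of $y$ onto $C(X)$, so the residual $y - z^*$ is orthogonal to every column of $X$. A sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((10, 2))
y = rng.standard_normal(10)

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
z_star = X @ beta_hat                   # the closest point to y in C(X)

# The residual y - z* is orthogonal to C(X): X^t (y - z*) = 0
print(np.allclose(X.T @ (y - z_star), 0))  # True
```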