
Linear Independence
Stephen Boyd
EE103, Stanford University
October 9, 2017

Outline
- Linear independence
- Basis
- Orthonormal vectors
- Gram-Schmidt algorithm

Linear dependence
- a set of n-vectors $\{a_1, \ldots, a_k\}$ (with $k \geq 1$) is linearly dependent if $\beta_1 a_1 + \cdots + \beta_k a_k = 0$ holds for some $\beta_1, \ldots, \beta_k$ that are not all zero
- equivalent to: at least one $a_i$ is a linear combination of the others
- we say "$a_1, \ldots, a_k$ are linearly dependent"
- $\{a_1\}$ is linearly dependent only if $a_1 = 0$
- $\{a_1, a_2\}$ is linearly dependent only if one $a_i$ is a multiple of the other
- for more than two vectors, there is no simple-to-state condition

Example
- the vectors $a_1 = (0.2, -7, 8.6)$, $a_2 = (-0.1, 2, -1)$, $a_3 = (0, -1, 2.2)$ are linearly dependent, since $a_1 + 2a_2 - 3a_3 = 0$
- we can express any of them as a linear combination of the other two, e.g., $a_2 = (-1/2)a_1 + (3/2)a_3$
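A quick numerical check of this dependence relation (a NumPy sketch, not part of the original slides; the vectors are the ones above):

```python
import numpy as np

# the three vectors from the example above
a1 = np.array([0.2, -7.0, 8.6])
a2 = np.array([-0.1, 2.0, -1.0])
a3 = np.array([0.0, -1.0, 2.2])

# the dependence relation a1 + 2 a2 - 3 a3 = 0 (up to roundoff)
print(np.allclose(a1 + 2 * a2 - 3 * a3, 0))   # True

# a2 expressed as a combination of a1 and a3
print(np.allclose(a2, -0.5 * a1 + 1.5 * a3))  # True
```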

Linear independence
- a set of n-vectors $\{a_1, \ldots, a_k\}$ (with $k \geq 1$) is linearly independent if it is not linearly dependent, i.e., $\beta_1 a_1 + \cdots + \beta_k a_k = 0$ holds only when $\beta_1 = \cdots = \beta_k = 0$
- we say "$a_1, \ldots, a_k$ are linearly independent"
- equivalent to: no $a_i$ is a linear combination of the others
- example: the unit n-vectors $e_1, \ldots, e_n$ are linearly independent

Linear combinations of linearly independent vectors
- suppose $x$ is a linear combination of linearly independent vectors $a_1, \ldots, a_k$: $x = \beta_1 a_1 + \cdots + \beta_k a_k$
- the coefficients $\beta_1, \ldots, \beta_k$ are unique, i.e., if $x = \gamma_1 a_1 + \cdots + \gamma_k a_k$ then $\beta_i = \gamma_i$ for $i = 1, \ldots, k$
- this means that (in principle) we can deduce the coefficients from $x$
- to see why, note that $(\beta_1 - \gamma_1)a_1 + \cdots + (\beta_k - \gamma_k)a_k = 0$, and so (by independence) $\beta_1 - \gamma_1 = \cdots = \beta_k - \gamma_k = 0$
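To illustrate the uniqueness numerically: stacking independent vectors as the columns of a matrix $A$, the coefficients can be recovered from $x$ by solving $A\beta = x$. A small NumPy sketch (the basis vectors and coefficients are made up for the example):

```python
import numpy as np

# three linearly independent 3-vectors, stacked as columns (made-up example)
A = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

beta = np.array([2.0, -1.0, 3.0])
x = A @ beta                      # x = 2 a_1 - a_2 + 3 a_3

# since the a_i are independent, the coefficients are uniquely
# determined by x, and can be recovered by solving A beta = x
print(np.allclose(np.linalg.solve(A, x), beta))  # True
```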


Independence-dimension inequality
- a linearly independent set of n-vectors can have at most $n$ elements
- put another way: any set of $n+1$ or more n-vectors is linearly dependent
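A numerical illustration of the inequality (a sketch; it uses numpy.linalg.matrix_rank, which counts independent columns and is not introduced in these slides): any three 2-vectors must be linearly dependent.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))   # three random 2-vectors as columns

# the number of independent columns can be at most n = 2,
# so the three columns are linearly dependent
print(np.linalg.matrix_rank(A))   # 2
```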

Basis
- a set of $n$ linearly independent n-vectors $a_1, \ldots, a_n$ is called a basis
- any n-vector $b$ can be expressed as a linear combination of them: $b = \alpha_1 a_1 + \cdots + \alpha_n a_n$ for some $\alpha_1, \ldots, \alpha_n$, and these coefficients are unique
- the formula above is called the expansion of $b$ in the $a_1, \ldots, a_n$ basis
- example: $e_1, \ldots, e_n$ is a basis; the expansion is $b = b_1 e_1 + \cdots + b_n e_n$
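Computing the expansion coefficients amounts to solving a set of linear equations, since $b = \alpha_1 a_1 + \cdots + \alpha_n a_n$ reads $A\alpha = b$ with the $a_i$ as columns of $A$. A sketch (the basis is chosen for illustration):

```python
import numpy as np

# a (non-orthonormal) basis of R^3, chosen for illustration, as columns
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])

b = np.array([2.0, 3.0, 5.0])

# expansion coefficients alpha_1, ..., alpha_n satisfy A alpha = b
alpha = np.linalg.solve(A, b)
print(np.allclose(A @ alpha, b))  # True: b = alpha_1 a_1 + ... + alpha_n a_n
```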


Orthonormal vectors
- a set of n-vectors $a_1, \ldots, a_k$ is (mutually) orthogonal if $a_i \perp a_j$ for $i \neq j$
- they are normalized if $\|a_i\| = 1$ for $i = 1, \ldots, k$
- they are orthonormal if both hold
- this can be expressed using inner products as $a_i^T a_j = \begin{cases} 1 & i = j \\ 0 & i \neq j \end{cases}$
- orthonormal sets of vectors are linearly independent
- by the independence-dimension inequality, we must have $k \leq n$
- when $k = n$, $a_1, \ldots, a_n$ are an orthonormal basis

Examples of orthonormal bases
- the standard unit n-vectors $e_1, \ldots, e_n$
- the 3-vectors $(0, 0, -1)$, $\frac{1}{\sqrt{2}}(1, 1, 0)$, $\frac{1}{\sqrt{2}}(1, -1, 0)$
- the 2-vectors shown below

[figure: two orthonormal 2-vectors in the plane]
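Orthonormality of a set of vectors stacked as the columns of a matrix $Q$ is equivalent to $Q^T Q = I$; a quick check of the 3-vector example above (a NumPy sketch):

```python
import numpy as np

s = 1 / np.sqrt(2)
Q = np.column_stack([[0.0, 0.0, -1.0],  # the three example vectors
                     [s, s, 0.0],       # as columns of Q
                     [s, -s, 0.0]])

# orthonormal columns <=> Q^T Q = I
print(np.allclose(Q.T @ Q, np.eye(3)))  # True
```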

Orthonormal expansion
- if $a_1, \ldots, a_n$ is an orthonormal basis, we have for any n-vector $x$: $x = (a_1^T x)a_1 + \cdots + (a_n^T x)a_n$
- this is called the orthonormal expansion of $x$ (in the orthonormal basis)
- to verify the formula, take the inner product of both sides with $a_i$
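A numerical check of the expansion formula, using the orthonormal basis from the previous slide (a sketch; the vector $x$ is arbitrary):

```python
import numpy as np

s = 1 / np.sqrt(2)
basis = [np.array([0.0, 0.0, -1.0]),
         np.array([s, s, 0.0]),
         np.array([s, -s, 0.0])]

x = np.array([1.0, 2.0, 3.0])        # any 3-vector

# x = (a_1^T x) a_1 + ... + (a_n^T x) a_n
x_expanded = sum((ai @ x) * ai for ai in basis)
print(np.allclose(x_expanded, x))    # True
```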


Gram-Schmidt (orthogonalization) algorithm
- an algorithm to check whether $a_1, \ldots, a_k$ are linearly independent
- we'll see later that it has many other uses

Gram-Schmidt algorithm
given n-vectors $a_1, \ldots, a_k$

for $i = 1, \ldots, k$:
1. Orthogonalization: $\tilde{q}_i = a_i - (q_1^T a_i)q_1 - \cdots - (q_{i-1}^T a_i)q_{i-1}$
2. Test for dependence: if $\tilde{q}_i = 0$, quit
3. Normalization: $q_i = \tilde{q}_i / \|\tilde{q}_i\|$

- if G-S does not stop early (in step 2), $a_1, \ldots, a_k$ are linearly independent
- if G-S stops early in iteration $i = j$, then $a_j$ is a linear combination of $a_1, \ldots, a_{j-1}$ (so $a_1, \ldots, a_k$ are linearly dependent)

[figure: Gram-Schmidt applied to two 2-vectors $a_1$, $a_2$: first $\tilde{q}_1 = a_1$ and $q_1 = \tilde{q}_1/\|\tilde{q}_1\|$, then $\tilde{q}_2 = a_2 - (q_1^T a_2)q_1$ and $q_2 = \tilde{q}_2/\|\tilde{q}_2\|$]
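A direct Python transcription of the algorithm (a sketch; in floating-point arithmetic the exact test $\tilde{q}_i = 0$ is replaced by a small tolerance, a choice that goes beyond the slides):

```python
import numpy as np

def gram_schmidt(a, tol=1e-10):
    """Run Gram-Schmidt on a list of n-vectors a_1, ..., a_k.

    Returns the orthonormal vectors q_1, ..., q_k, or the q's found
    so far if a dependent vector is encountered (early termination).
    """
    q = []
    for i, ai in enumerate(a):
        # 1. orthogonalization
        qt = ai - sum((qj @ ai) * qj for qj in q)
        # 2. test for dependence (tolerance in place of an exact zero test)
        if np.linalg.norm(qt) <= tol:
            print(f"a_{i + 1} is a linear combination of a_1, ..., a_{i}")
            return q
        # 3. normalization
        q.append(qt / np.linalg.norm(qt))
    return q

# independent inputs: G-S runs to completion and the q_i are orthonormal
rng = np.random.default_rng(1)
q = gram_schmidt([rng.standard_normal(4) for _ in range(3)])
Q = np.column_stack(q)
print(np.allclose(Q.T @ Q, np.eye(3)))  # True
```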

Analysis
let's show by induction that $q_1, \ldots, q_i$ are orthonormal
- the claim holds trivially for $i = 1$; assume it's true for $i - 1$
- the orthogonalization step ensures that $\tilde{q}_i \perp q_1, \ldots, \tilde{q}_i \perp q_{i-1}$
- to see this, take the inner product of both sides with $q_j$, $j < i$:
$q_j^T \tilde{q}_i = q_j^T a_i - (q_1^T a_i)(q_j^T q_1) - \cdots - (q_{i-1}^T a_i)(q_j^T q_{i-1}) = q_j^T a_i - q_j^T a_i = 0$
(by the induction hypothesis, the only surviving term is $(q_j^T a_i)(q_j^T q_j) = q_j^T a_i$)
- so $\tilde{q}_i \perp q_1, \ldots, \tilde{q}_i \perp q_{i-1}$, and the normalization step ensures that $\|q_i\| = 1$

Analysis
assuming G-S has not terminated before iteration $i$:
- $a_i$ is a linear combination of $q_1, \ldots, q_i$: $a_i = \|\tilde{q}_i\| q_i + (q_1^T a_i)q_1 + \cdots + (q_{i-1}^T a_i)q_{i-1}$
- $q_i$ is a linear combination of $a_1, \ldots, a_i$: by induction on $i$, $q_i = (1/\|\tilde{q}_i\|)\left(a_i - (q_1^T a_i)q_1 - \cdots - (q_{i-1}^T a_i)q_{i-1}\right)$ and (by the induction hypothesis) each of $q_1, \ldots, q_{i-1}$ is a linear combination of $a_1, \ldots, a_{i-1}$

Early termination
- suppose G-S terminates in iteration $j$ (i.e., $\tilde{q}_j = 0$ in step 2)
- then $a_j$ is a linear combination of $q_1, \ldots, q_{j-1}$: $a_j = (q_1^T a_j)q_1 + \cdots + (q_{j-1}^T a_j)q_{j-1}$
- and each of $q_1, \ldots, q_{j-1}$ is a linear combination of $a_1, \ldots, a_{j-1}$
- so $a_j$ is a linear combination of $a_1, \ldots, a_{j-1}$
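Running the sketch above on the dependent vectors from the earlier example shows the early termination (this assumes the gram_schmidt function defined after the algorithm slide is in scope):

```python
import numpy as np
# assumes gram_schmidt from the sketch above is defined

a1 = np.array([0.2, -7.0, 8.6])
a2 = np.array([-0.1, 2.0, -1.0])
a3 = np.array([0.0, -1.0, 2.2])  # recall a1 + 2 a2 - 3 a3 = 0

q = gram_schmidt([a1, a2, a3])   # prints: a_3 is a linear combination of a_1, ..., a_2
print(len(q))                    # 2: only q_1 and q_2 were produced
```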

Complexity of Gram-Schmidt algorithm
- step 1 of iteration $i$ requires $i-1$ inner products $q_1^T a_i, \ldots, q_{i-1}^T a_i$, which costs $(i-1)(2n-1)$ flops
- $2n(i-1)$ flops to compute $\tilde{q}_i$
- $3n$ flops to compute $\|\tilde{q}_i\|$ and $q_i$
- total is $\sum_{i=1}^{k} \left((4n-1)(i-1) + 3n\right) = (4n-1)\frac{k(k-1)}{2} + 3nk \approx 2nk^2$, using $\sum_{i=1}^{k}(i-1) = k(k-1)/2$
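For concreteness, the exact flop count versus the $2nk^2$ approximation (a small check, not from the slides):

```python
def gs_flops(k, n):
    # exact count from the slide: (4n-1) k(k-1)/2 + 3nk
    return (4 * n - 1) * k * (k - 1) // 2 + 3 * n * k

n, k = 1000, 100
print(gs_flops(k, n))  # 20095050, close to 2 * n * k**2 = 20000000
```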