Vector Spaces and Subspaces

Vector Spaces and Subspaces

- Vector Space V
- Subspaces S of Vector Space V
- The Subspace Criterion
- Subspaces are Working Sets
- The Kernel Theorem
- Not a Subspace Theorem
- Independence and Dependence in Abstract Spaces
- Independence Test for Two Vectors v1, v2
- An Illustration
- Geometry and Independence
- Rank Test and Determinant Test for Fixed Vectors
- Sampling Test and Wronskian Test for Functions
- Independence of Atoms

Vector Space V

It is a data set V plus a toolkit of eight (8) algebraic properties. The data set consists of packages of data items, called vectors, denoted X, Y below.

Closure: The operations X + Y and kX are defined and result in a new vector which is also in the set V.

Addition:
X + Y = Y + X (commutative)
X + (Y + Z) = (X + Y) + Z (associative)
Vector 0 is defined and 0 + X = X (zero)
Vector -X is defined and X + (-X) = 0 (negative)

Scalar multiplication:
k(X + Y) = kX + kY (distributive I)
(k1 + k2)X = k1 X + k2 X (distributive II)
k1(k2 X) = (k1 k2)X (distributive III)
1X = X (identity)

Figure 1. A vector space is a data set, operations + and scalar multiply, and the 8-property toolkit.

Definition of Subspace

A subspace S of a vector space V is a nonvoid subset of V which, under the operations + and scalar multiply of V, forms a vector space in its own right.

Subspace Criterion. Let S be a subset of V such that
1. Vector 0 is in S.
2. If X and Y are in S, then X + Y is in S.
3. If X is in S, then cX is in S.
Then S is a subspace of V.

Items 2 and 3 can be summarized as: all linear combinations of vectors in S are again in S. In proofs using the criterion, items 2 and 3 may be replaced by: c1 X + c2 Y is in S.

Subspaces are Working Sets

We call a subspace S of a vector space V a working set, because the purpose of identifying a subspace is to shrink the original data set V into a smaller data set S, customized for the application under study.

A Key Example. Let V = R^3 and let S be the plane of action of a planar kinematics experiment, a slot car on a track. The data set D for the experiment is all 3-vectors v in V collected by a data recorder. Detected by GPS and recorded by computer is the 3D position of the slot car, which would be planar, except for bumps in the track. The original data set D will be transformed into a new data set D1 that lies entirely in a plane. Plane geometry computations then proceed with the toolkit for V, on the smaller planar data set D1.

How to create D1? The ideal plane of action S is computed as a homogeneous equation (like 2x + 3y + 1000z = 0), the equation of a plane. Least squares is applied to data set D to find an optimal equation for S. Altered data set D1 is created from D (and D discarded) to artificially satisfy the plane equation. Then D1 is contained in S (D obviously was not). The smaller storage set S replaces the larger storage set V. The customized smaller set S is the working set for the kinematics problem.
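
The plane-fitting step can be sketched in a few lines of Python. This is not part of the original notes: the data set D below is synthetic, and the optimal plane is found by a total least-squares fit (smallest right singular vector of the data matrix), one reasonable reading of the least-squares step described above.

```python
import numpy as np

# Hypothetical recorded data D: 3D positions of the slot car (one row per sample).
# The track is nearly planar; small "bumps" perturb the third coordinate.
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(200, 2))
z = 0.5 * xy[:, 0] - 0.25 * xy[:, 1] + 0.01 * rng.standard_normal(200)
D = np.column_stack([xy, z])

# Fit the ideal plane of action S: n . v = 0 with |n| = 1.  The best-fit normal n
# is the right singular vector of D with smallest singular value.
_, _, Vt = np.linalg.svd(D, full_matrices=False)
n = Vt[-1]

# Create the altered data set D1 by projecting every point of D onto S.
D1 = D - np.outer(D @ n, n)

print("plane normal:", n)
print("max |n . v| over D1:", np.max(np.abs(D1 @ n)))   # ~ 0: D1 lies in S
```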

The Kernel Theorem

Theorem 1 (Kernel Theorem). Let V be one of the vector spaces R^n and let A be an m × n matrix. Define a smaller set S of data items in V by the kernel equation

    S = { x : x in V, A x = 0 }.

Then S is a subspace of V. In particular, operations of addition and scalar multiplication applied to data items in S give answers back in S, and the 8-property toolkit applies to data items in S.

Proof: Zero is in S because A 0 = 0 for any matrix A. To apply the subspace criterion, we verify that z = c1 x + c2 y belongs to S, provided x and y are in S. The details:

    A z = A(c1 x + c2 y)
        = A(c1 x) + A(c2 y)
        = c1 A x + c2 A y
        = c1 0 + c2 0        because A x = A y = 0, due to x, y in S
        = 0.

Therefore, A z = 0, and z is in S. The proof is complete.
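
A quick numerical check of the theorem, using SymPy (the matrix A below is an arbitrary illustration, not taken from the notes): compute a basis of the kernel S, form a linear combination of kernel vectors, and confirm that it again satisfies A z = 0, exactly as the proof computes.

```python
from sympy import Matrix, Rational

# An illustrative matrix A (chosen for the example); S = { x : A x = 0 }.
A = Matrix([[1, 2, -1, 3],
            [2, 4,  0, 2]])

kernel_basis = A.nullspace()          # basis vectors of the kernel S

# Take a linear combination z = c1*x + c2*y of two kernel vectors and
# verify A z = 0, so z is again in S.
x, y = kernel_basis[0], kernel_basis[1]
z = Rational(3) * x - Rational(5, 2) * y
print(A * z)                          # the zero vector
```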

The Kernel Theorem in Plain Language

The Kernel Theorem says that a subspace criterion proof can be avoided by checking that the data set S, a subset of a vector space R^n, is completely described by a system of homogeneous linear algebraic equations. Applying the Kernel Theorem replaces a formal proof, because the conclusion is that S is a subspace of R^n. Details, if any, amount to showing that the defining relations for S can be expressed as a system of homogeneous linear algebraic equations. One way to do this is to write the relations in matrix form A x = 0.

Example. Let S be the data set in R^3 given as the intersection of the two planes x + y + z = 0, x + 2y - z = 0. Then S is a subspace of R^3 by the Kernel Theorem. The reason: the linear homogeneous algebraic equations x + y + z = 0, x + 2y - z = 0 completely describe S. By the Kernel Theorem, S is a subspace of R^3. Presentation details might, alternatively, define the matrix

    A = [ 1  1  1 ]
        [ 1  2 -1 ]
        [ 0  0  0 ]

and verify that S is the kernel of A.
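
For this two-plane example, the kernel of the presentation matrix A can be computed directly; a minimal SymPy sketch (not part of the original notes):

```python
from sympy import Matrix

# S = intersection of the planes x + y + z = 0 and x + 2y - z = 0, i.e. A x = 0.
A = Matrix([[1, 1, 1],
            [1, 2, -1],
            [0, 0, 0]])

print(A.nullspace())   # a single basis vector: S is a line through the origin in R^3
```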

Not a Subspace Theorem

Theorem 2 (Testing S not a Subspace). Let V be an abstract vector space and assume S is a subset of V. Then S is not a subspace of V provided one of the following holds.
(1) The vector 0 is not in S.
(2) Some x in S has its negative -x not in S.
(3) Vector x + y is not in S for some x and y in S.

Proof: The theorem is justified from the Subspace Criterion.
1. The criterion requires 0 is in S.
2. The criterion demands cx is in S for all scalars c and all vectors x in S (in particular for c = -1).
3. According to the subspace criterion, the sum of two vectors in S must be in S.

Definition of Independence and Dependence

A list of vectors v1, ..., vk in a vector space V is said to be independent provided every linear combination of these vectors is uniquely represented. Dependent means not independent.

Unique representation. An equation

    a1 v1 + ... + ak vk = b1 v1 + ... + bk vk

implies matching coefficients: a1 = b1, ..., ak = bk.

Independence Test. Form the system in the unknowns c1, ..., ck

    c1 v1 + ... + ck vk = 0.

Solve for the unknowns [how to do this depends on V]. Then the vectors are independent if and only if the unique solution is c1 = c2 = ... = ck = 0.
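
In R^n the Independence Test becomes a homogeneous matrix equation. A short SymPy sketch follows (the vectors v1, v2, v3 are made up for illustration and chosen dependent on purpose, so a nontrivial solution appears):

```python
from sympy import Matrix

# Test v1, v2, v3 in R^4 for independence: solve c1 v1 + c2 v2 + c3 v3 = 0.
v1 = Matrix([1, 0, 2, -1])
v2 = Matrix([0, 1, 1, 1])
v3 = Matrix([2, -3, 1, -5])      # equals 2*v1 - 3*v2, so the list is dependent

A = Matrix.hstack(v1, v2, v3)    # columns are the vectors
solutions = A.nullspace()        # nonzero solutions of A c = 0, if any

# Empty nullspace  <=>  only c1 = c2 = c3 = 0  <=>  independent.
if not solutions:
    print("independent: only the trivial solution c = 0")
else:
    print("dependent, nontrivial c =", solutions[0].T)
```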

Independence Test for Two Vectors v1, v2

In an abstract vector space V, form the equation

    c1 v1 + c2 v2 = 0.

Solve this equation for the two constants c1, c2. Then v1, v2 are independent in V if and only if the system has the unique solution c1 = c2 = 0. There is no algorithm for how to do this, because it depends on the vector space V and sometimes on detailed information obtained from bursting the data packages v1, v2. If V is some R^n, then combo-swap-mult sequences apply.

Geometry and Independence

One fixed vector is independent if and only if it is nonzero. Two fixed vectors are independent if and only if they form the edges of a parallelogram of positive area. Three fixed vectors are independent if and only if they are the edges of a parallelepiped of positive volume.

In an abstract vector space V, one vector [one data package] is independent if and only if it is a nonzero vector. Two vectors [two data packages] are independent if and only if one is not a scalar multiple of the other. There is no simple test for three vectors.

Illustration. Vectors v1 = cos x and v2 = sin x are two data packages [graphs] in the vector space V of continuous functions. They are independent because one graph is not a scalar multiple of the other graph.
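
The area and volume statements can be checked numerically; a sketch (the particular vectors are illustrative, not from the notes):

```python
import numpy as np

# Two vectors in R^3: independent iff the parallelogram they span has positive area.
u = np.array([1.0, 2.0, 0.0])
v = np.array([3.0, 1.0, 0.0])
area = np.linalg.norm(np.cross(u, v))
print("parallelogram area:", area)            # > 0, so u, v are independent

# Three vectors in R^3: independent iff the parallelepiped has positive volume.
w = np.array([0.0, 0.0, 4.0])
volume = abs(np.linalg.det(np.column_stack([u, v, w])))
print("parallelepiped volume:", volume)       # > 0, so u, v, w are independent
```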

An Illustration of the Independence Test

Two column vectors are tested for independence by forming the system of equations c1 v1 + c2 v2 = 0, e.g.,

    c1 [ 1 ] + c2 [ 2 ] = [ 0 ]
       [ 1 ]      [ 1 ]   [ 0 ].

This is a homogeneous system A c = 0 with

    A = [ 1  2 ],    c = [ c1 ]
        [ 1  1 ]         [ c2 ].

The system A c = 0 can be solved for c by combo-swap-mult methods. Because rref(A) = I, then c1 = c2 = 0, which verifies independence.

If the system A c = 0 is square, then det(A) ≠ 0 applies to test independence. Determinants cannot be used directly when the system is not square, e.g., consider the homogeneous system

    c1 [ 1 ] + c2 [ 2 ] = [ 0 ]
       [ 1 ]      [ 1 ]   [ 0 ]
       [ 0 ]      [ 0 ]   [ 0 ].

It has vector-matrix form A c = 0 with a 3 × 2 matrix A, for which det(A) is undefined.
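
The same two systems, checked by machine; a SymPy sketch (not part of the original notes) that matches the conclusion that determinants apply only in the square case:

```python
from sympy import Matrix

# Square case: columns (1,1) and (2,1); the determinant is available.
A = Matrix([[1, 2],
            [1, 1]])
print(A.rref()[0])      # the identity matrix, so c1 = c2 = 0: independent
print(A.det())          # -1, nonzero: the same conclusion

# Non-square case: columns (1,1,0) and (2,1,0); det is undefined, so use rank.
B = Matrix([[1, 2],
            [1, 1],
            [0, 0]])
print(B.rank())         # 2 = number of columns: independent
```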

Rank Test

In the vector space R^n, the independence test leads to a system of n linear homogeneous equations in k variables c1, ..., ck. The test requires solving a matrix equation A c = 0. The signal for independence is zero free variables, or nullity zero, or equivalently, maximal rank. To justify the various statements, we use the relation nullity(A) + rank(A) = k, where k is the column dimension of A.

Theorem 3 (Rank-Nullity Test). Let v1, ..., vk be k column vectors in R^n and let A be the augmented matrix of these vectors. The vectors are independent if rank(A) = k and dependent if rank(A) < k. The conditions are equivalent to nullity(A) = 0 and nullity(A) > 0, respectively.
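
The Rank-Nullity Test packaged as a small helper; a sketch in Python (the function name and the example vectors are illustrative, not from the notes):

```python
import numpy as np

def independent_by_rank(vectors):
    """Rank-Nullity Test: stack the vectors as columns of A and compare
    rank(A) with k, the number of vectors.  rank(A) == k  <=>  nullity(A) == 0."""
    A = np.column_stack(vectors)
    k = A.shape[1]
    return np.linalg.matrix_rank(A) == k

# Examples in R^3.
print(independent_by_rank([(1, 0, 1), (0, 1, 1), (1, 1, 2)]))  # False: third = first + second
print(independent_by_rank([(1, 0, 1), (0, 1, 1), (0, 0, 1)]))  # True
```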

Determinant Test

In the unusual case when the system arising in the independence test can be expressed as A c = 0 and A is square, then det(A) = 0 detects dependence, and det(A) ≠ 0 detects independence. The reasoning is based upon the adjugate formula A^(-1) = adj(A)/det(A), valid exactly when det(A) ≠ 0.

Theorem 4 (Determinant Test). Let A be a square augmented matrix of column vectors. The column vectors are independent if det(A) ≠ 0 and dependent if det(A) = 0.

Sampling Test

Let functions f1, ..., fn be given and let x1, ..., xn be distinct x-sample values. Define

    A = [ f1(x1)  f2(x1)  ...  fn(x1) ]
        [ f1(x2)  f2(x2)  ...  fn(x2) ]
        [  ...     ...    ...   ...   ]
        [ f1(xn)  f2(xn)  ...  fn(xn) ].

Then det(A) ≠ 0 implies f1, ..., fn are independent functions.

Proof. We'll do the proof for n = 2. Details are similar for general n. Assume c1 f1 + c2 f2 = 0. Then for all x, c1 f1(x) + c2 f2(x) = 0. Choose x = x1 and x = x2 in this relation to get A c = 0, where c has components c1, c2. If det(A) ≠ 0, then A^(-1) exists, and this in turn implies c = A^(-1) A c = 0. We conclude f1, f2 are independent.
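
A sketch of the Sampling Test for n = 2 in Python (the functions and the sample points are chosen for illustration):

```python
import numpy as np

# Sampling Test for f1 = cos x, f2 = sin x at two distinct sample points.
f = [np.cos, np.sin]
samples = [0.0, np.pi / 4]

# A[i][j] = fj(xi), as in the displayed matrix above.
A = np.array([[fj(xi) for fj in f] for xi in samples])
print(np.linalg.det(A))   # nonzero, so cos x and sin x are independent functions
```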

Wronskian Test

Let functions f1, ..., fn be given and let x0 be a given point. Define

    W = [ f1(x0)         f2(x0)         ...  fn(x0)        ]
        [ f1'(x0)        f2'(x0)        ...  fn'(x0)       ]
        [  ...            ...           ...   ...          ]
        [ f1^(n-1)(x0)   f2^(n-1)(x0)   ...  fn^(n-1)(x0)  ].

Then det(W) ≠ 0 implies f1, ..., fn are independent functions. The matrix W is called the Wronskian matrix of f1, ..., fn and det(W) is called the Wronskian determinant.

Proof. We'll do the proof for n = 2. Details are similar for general n. Assume c1 f1 + c2 f2 = 0. Then for all x, c1 f1(x) + c2 f2(x) = 0 and c1 f1'(x) + c2 f2'(x) = 0. Choose x = x0 in these relations to get W c = 0, where c has components c1, c2. If det(W) ≠ 0, then W^(-1) exists, and this in turn implies c = W^(-1) W c = 0. We conclude f1, f2 are independent.
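
The Wronskian matrix can be assembled by repeated differentiation; a SymPy sketch (the helper name wronskian_matrix and the exponential examples are illustrative assumptions, not from the notes):

```python
from sympy import symbols, Matrix, exp

x = symbols('x')

def wronskian_matrix(funcs, x0):
    """Row i holds the i-th derivative of each function, evaluated at x0."""
    n = len(funcs)
    return Matrix(n, n, lambda i, j: funcs[j].diff(x, i).subs(x, x0))

W = wronskian_matrix([exp(x), exp(2*x), exp(3*x)], 0)
print(W)
print(W.det())   # 2, nonzero: e^x, e^(2x), e^(3x) are independent functions
```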

Atoms

Definition. A base atom is one of the functions

    1,  e^(ax),  cos bx,  sin bx,  e^(ax) cos bx,  e^(ax) sin bx,

where b > 0. Define an atom for integers n ≥ 0 by the formula

    atom = x^n (base atom).

The powers 1, x, x^2, ... are atoms (multiply the base atom 1 by x^n). Multiples of these powers by cos bx, sin bx are also atoms. Finally, multiplying all these atoms by e^(ax) expands and completes the list of atoms. Alternatively, an atom is a function with coefficient 1 obtained as the real or imaginary part of the complex expression x^n e^(ax) (cos bx + i sin bx).

Illustration

We show the powers 1, x, x^2, x^3 are independent atoms by applying the Wronskian Test at a point x0:

    W = [ 1   x0    x0^2    x0^3   ]
        [ 0   1     2 x0    3 x0^2 ]
        [ 0   0     2       6 x0   ]
        [ 0   0     0       6      ].

Then det(W) = 12 ≠ 0 implies the functions 1, x, x^2, x^3 are linearly independent.
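
The same computation done symbolically; a SymPy sketch (not part of the notes) showing the determinant comes out 12 no matter which base point x0 is used:

```python
from sympy import symbols, Matrix, S

x, x0 = symbols('x x0')
funcs = [S(1), x, x**2, x**3]

# Wronskian matrix at the symbolic point x0: row i holds the i-th derivatives at x0.
W = Matrix(4, 4, lambda i, j: funcs[j].diff(x, i).subs(x, x0))
print(W)
print(W.det())   # 12, independent of the choice of x0
```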

Subsets of Independent Sets are Independent

Suppose v1, v2, v3 make an independent set and consider the subset v1, v2. If

    c1 v1 + c2 v2 = 0,

then also

    c1 v1 + c2 v2 + c3 v3 = 0,

where c3 = 0. Independence of the larger set implies c1 = c2 = c3 = 0, in particular c1 = c2 = 0, and then v1, v2 are independent.

Theorem 5 (Subsets and Independence). A non-void subset of an independent set is also independent. Non-void subsets of dependent sets can be independent or dependent.

Atoms and Independence

Theorem 6 (Independence of Atoms). Any list of distinct atoms is linearly independent.

Unique Representation. The theorem is used to extract equations from relations involving atoms. For instance,

    (c1 - c2) cos x + (c1 + c3) sin x + c1 + c2 = 2 cos x + 5

implies

    c1 - c2 = 2,   c1 + c3 = 0,   c1 + c2 = 5.
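
The coefficient matching can be automated; a SymPy sketch of this particular relation (the use of coeff/subs to collect on the atoms cos x, sin x, 1 is an illustrative choice):

```python
from sympy import symbols, cos, sin, solve

x, c1, c2, c3 = symbols('x c1 c2 c3')

# Move everything to one side of the relation above.
expr = (c1 - c2)*cos(x) + (c1 + c3)*sin(x) + (c1 + c2) - (2*cos(x) + 5)

# Because the atoms cos x, sin x, 1 are independent, each coefficient must vanish.
eqs = [expr.coeff(cos(x)),                   # c1 - c2 - 2
       expr.coeff(sin(x)),                   # c1 + c3
       expr.subs({cos(x): 0, sin(x): 0})]    # c1 + c2 - 5
print(solve(eqs, [c1, c2, c3]))              # {c1: 7/2, c2: 3/2, c3: -7/2}
```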

Atoms and Differential Equations

It is known that solutions of linear constant-coefficient differential equations of order n, and also systems of linear differential equations with constant coefficients, have a general solution which is a linear combination of atoms.

The harmonic oscillator y'' + b^2 y = 0 has general solution y(x) = c1 cos bx + c2 sin bx. This is a linear combination of the two atoms cos bx, sin bx.

The third-order equation y''' + y' = 0 has general solution y(x) = c1 cos x + c2 sin x + c3. The solution is a linear combination of the independent atoms cos x, sin x, 1.

The linear dynamical system x'(t) = y(t), y'(t) = -x(t) has general solution x(t) = c1 cos t + c2 sin t, y(t) = -c1 sin t + c2 cos t, each of which is a linear combination of the independent atoms cos t, sin t.
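
SymPy's dsolve reproduces these linear combinations of atoms; a sketch (with b = 2 chosen for concreteness in the harmonic oscillator):

```python
from sympy import symbols, Function, Eq, dsolve

x = symbols('x')
y = Function('y')

# Harmonic oscillator y'' + 4 y = 0  (b = 2): atoms cos 2x, sin 2x.
print(dsolve(Eq(y(x).diff(x, 2) + 4*y(x), 0), y(x)))

# Third-order equation y''' + y' = 0: atoms cos x, sin x, 1.
print(dsolve(Eq(y(x).diff(x, 3) + y(x).diff(x), 0), y(x)))
```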