MODULE 8 Topics: Null space, range, column space, row space and rank of a matrix


Definition: Let $L : V_1 \to V_2$ be a linear operator. The null space $N(L)$ of $L$ is the subspace of $V_1$ defined by
$$N(L) = \{x \in V_1 : Lx = 0\}.$$
Note: The null space of $L$ is sometimes called the kernel of $L$.

Examples:

i) If $Lx = Ax$ with
$$A = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix},$$
then $N(A) = \operatorname{span}\{(1, -1)\} \subset \mathbb{R}^2$.

ii) If $L$ is defined by $(Lf)(t) = f''(t)$ for $f \in C^2[a, b]$, then $N(L) = \operatorname{span}\{1, t\}$.

iii) If $L : C^0[-1, 1] \to \mathbb{R}$ is defined by $Lf = \int_{-1}^{1} f(s)\,ds$, then $N(L)$ contains the subspace of all odd continuous functions on $[-1, 1]$, plus many other functions such as $f(t) = t^2 - 1/3$.

We shall now restrict ourselves to $m \times n$ real matrices. We note that always $0 \in N(A)$. If this is the only vector in $N(A)$, i.e., if $N(A) = \{0\}$, then the null space is the trivial null space with dimension 0. We also know from
$$Ax = \sum_{j=1}^{n} x_j A_j$$
that $R(A) = \operatorname{span}\{A_1, \dots, A_n\} \subset \mathbb{R}^m$. The range of $A$ is often called the column space of $A$, and the dimension of this space is called the rank of $A$, i.e.,
$$r(A) = \operatorname{rank}(A) = \dim R(A) = \dim(\text{column space of } A).$$
We note that $r(A) \le \min\{m, n\}$.

Example: Let $x$ and $y$ be two nonzero column vectors in $\mathbb{R}^n$. Then the $n \times n$ matrix
$$x y^T = (y_1 x \;\; y_2 x \;\; \cdots \;\; y_n x)$$
is a matrix with rank 1, since every column is a multiple of $x$.
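These facts are easy to check numerically. The following is a minimal sketch, assuming NumPy is available; the vectors $x$ and $y$ below are arbitrary illustrative choices, not from the text.

```python
import numpy as np

# Example i): A = [[1, 1], [1, 1]] sends (1, -1) to zero,
# so N(A) = span{(1, -1)} and rank(A) = 1.
A = np.array([[1.0, 1.0],
              [1.0, 1.0]])
print(A @ np.array([1.0, -1.0]))     # [0. 0.]
print(np.linalg.matrix_rank(A))      # 1

# Outer product x y^T: every column y_j * x is a multiple of x,
# so the rank is 1 for any nonzero x and y.
x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, -1.0, 2.0])
print(np.linalg.matrix_rank(np.outer(x, y)))   # 1
```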

Theorem: Let $A$ be an $m \times n$ matrix. Then
$$\dim N(A) + \operatorname{rank}(A) = n.$$

Proof: Let $\{y_1, \dots, y_r\}$ be a basis of $R(A)$, and let $\{x_1, \dots, x_r\}$ be vectors which satisfy $Ax_j = y_j$ for $j = 1, \dots, r$. Let $\{z_1, \dots, z_p\}$ be a basis of $N(A)$. Then the vectors $\{x_1, \dots, x_r, z_1, \dots, z_p\}$ are linearly independent, because if
$$\sum_{j=1}^{r} \alpha_j x_j + \sum_{j=1}^{p} \beta_j z_j = 0$$
then
$$A\Big(\sum_{j=1}^{r} \alpha_j x_j + \sum_{j=1}^{p} \beta_j z_j\Big) = \sum_{j=1}^{r} \alpha_j y_j = 0,$$
which implies that $\alpha_1 = \alpha_2 = \cdots = \alpha_r = 0$. But then the linear independence of the $\{z_j\}$ implies that the $\{\beta_j\}$ also must vanish. Finally, let $x$ be arbitrary in $\mathbb{R}^n$. Then $Ax = \sum_{j=1}^{r} \gamma_j y_j$ for some $\{\gamma_j\}$. This implies that $A\big(x - \sum_{j=1}^{r} \gamma_j x_j\big) = 0$, so that $x - \sum_{j=1}^{r} \gamma_j x_j \in N(A)$, i.e.,
$$x - \sum_{j=1}^{r} \gamma_j x_j = \sum_{j=1}^{p} \beta_j z_j.$$
Hence the linearly independent vectors $\{x_1, \dots, x_r, z_1, \dots, z_p\}$ span $\mathbb{R}^n$, and
$$r + p = \operatorname{rank}(A) + \dim N(A) = n.$$

It follows immediately that if $A$ is an $m \times n$ matrix and $m < n$, then $\dim N(A) \ge 1$ because $\operatorname{rank}(A) \le \min\{m, n\}$. In particular, this implies that $Ax = 0$ has a non-zero solution, so that such a matrix cannot have an inverse.
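As a quick numerical sanity check of the theorem, here is a sketch assuming NumPy and SciPy are available; the random matrix is purely illustrative. `scipy.linalg.null_space` returns an orthonormal basis of $N(A)$, so the number of its columns is $\dim N(A)$.

```python
import numpy as np
from scipy.linalg import null_space

# A random 3 x 5 matrix: m < n, so the theorem forces dim N(A) >= 2.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))

rank = np.linalg.matrix_rank(A)
nullity = null_space(A).shape[1]   # columns form an orthonormal basis of N(A)

print(rank, nullity, rank + nullity)   # 3 2 5: rank + nullity = n
```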

So far we have looked at the columns of $A$ as $n$ column vectors in $\mathbb{R}^m$. Likewise, the $m$ rows of $A$ define a set of $m$ vectors in $\mathbb{R}^n$. What can we say about the number of linearly independent rows of $A$? We recall from the homework of Module 2 that if $\langle x, y \rangle$ denotes the dot product, then
$$\langle Ax, y \rangle = \langle x, A^T y \rangle$$
for $x \in \mathbb{R}^n$ and $y \in \mathbb{R}^m$.

Next, let $\{y_1, y_2, \dots, y_r\}$ be a basis of $R(A)$ and apply the Gram-Schmidt orthogonalization process to the vectors $\{y_1, y_2, \dots, y_r, \hat e_1, \hat e_2, \dots, \hat e_m\}$; then the first $r$ orthogonal vectors will be a basis of $R(A)$, and the remaining $m - r$ vectors $\{Y_1, Y_2, \dots, Y_{m-r}\}$ will be orthogonal to $R(A)$. Since $A^T Y_j \in \mathbb{R}^n$, it follows from
$$\langle A^T Y_j, A^T Y_j \rangle = \langle Y_j, A(A^T Y_j) \rangle = 0$$
(the last equality holds because $A(A^T Y_j) \in R(A)$ and $Y_j$ is orthogonal to $R(A)$) that $A^T Y_j = 0$, so that $\dim N(A^T) \ge m - r$. Finally, we observe that if $Ax \ne 0$, then $\langle A^T(Ax), x \rangle = \langle Ax, Ax \rangle > 0$, so that $Ax$ cannot belong to $N(A^T)$. Hence $\dim N(A^T) = m - r$, so that
$$\operatorname{rank}(A^T) = \text{number of linearly independent rows of } A = m - (m - r) = r.$$
In other words, an $m \times n$ matrix has as many independent rows as independent columns.

Finally, we observe that if we add to any row of $A$ a linear combination of the remaining rows, we do not change the number of independent rows. Hence we can apply Gaussian elimination to the rows of $A$ and read off the number of independent rows of $A$ from the final form of $A$, where all elements below the diagonal are zero.
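The equality of row rank and column rank is likewise easy to observe numerically; a minimal sketch, again assuming NumPy, with an arbitrary rank-deficient test matrix:

```python
import numpy as np

# Build a 6 x 5 matrix of rank (at most) 3 as a product of random factors.
rng = np.random.default_rng(1)
A = rng.standard_normal((6, 3)) @ rng.standard_normal((3, 5))

# Column rank of A equals row rank of A, i.e. the column rank of A^T.
print(np.linalg.matrix_rank(A))     # 3
print(np.linalg.matrix_rank(A.T))   # 3, the same
```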

Implications for the solution of the linear system $Ax = b$, where $A$ is an $m \times n$ matrix:

1) We shall assume that $b \in R(A)$.

i) If the columns of $A$ are linearly independent, then $Ax = b$ has a unique solution regardless of the size of the system. In this case the inverse mapping exists for every element $y \in R(A)$.

ii) If the columns of $A$ are linearly dependent, then $\dim N(A) \ge 1$ and there are infinitely many solutions. One can then constrain the solution by asking, for example, for the minimum norm solution.

iii) If $m \ge n$ the columns of $A$ may or may not be linearly dependent. If $m < n$ then the columns of $A$ must be linearly dependent.

iv) If $\operatorname{rank}(A) = m$ then $b \in R(A)$.

2) Regardless of the size of the system, if $b \notin R(A)$ there cannot be a solution. If $b \notin R(A)$ then Gaussian elimination will lead to inconsistent equations.

Two points of view for finding an approximate solution of $Ax = b$ when $b \notin R(A)$:

I. The Least Squares Solution: When the system $Ax = b$ is inconsistent, then for any $x \in \mathbb{R}^n$ the residual, defined as $r(x) = b - Ax$, cannot be zero. In this case it is common to try to minimize the residual (in some sense) over all $x \in \mathbb{R}^n$ (or possibly over some specially chosen set of admissible $x \in \mathbb{R}^n$). We shall consider here only the case of minimizing a norm of the residual which is obtained from an inner product. This means we need to find the minimum of the function $f$ defined by
$$f(x) = \langle r(x), r(x) \rangle = \langle b - Ax, b - Ax \rangle.$$
Let us assume now that we are dealing with real valued vectors. Then $f$ is a function of the $n$ real variables $x_1, \dots, x_n$, and calculus tells us that a necessary condition for the minimum is that
$$\nabla f(x) = 0.$$
We find that
$$\frac{\partial f}{\partial x_j} = -\langle A_j, b - Ax \rangle - \langle b - Ax, A_j \rangle = 0.$$
Since in a real vector space the inner product is symmetric, it follows that $x$ must be a solution of
$$\langle A_j, Ax \rangle = \langle A_j, b \rangle \quad \text{for } j = 1, \dots, n.$$
If the inner product is the dot product on $\mathbb{R}^m$, then these $n$ equations can be written in matrix form as
$$A^T A x = A^T b.$$
If the $n \times n$ matrix $A^T A$ has rank $n$, then $\dim N(A^T A) = 0$ and $(A^T A)^{-1}$ exists, so that
$$x = (A^T A)^{-1} A^T b.$$
This is the least squares solution of $Ax = b$ in Euclidean $n$-space. If $A$, and hence $A^T$, is square and has rank $n$, then $A^T$ is invertible and $x$ solves $Ax = b$.
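A minimal numerical illustration of the normal equations, assuming NumPy; the $4 \times 2$ system below is an arbitrary inconsistent example, not from the text. Solving $A^T A x = A^T b$ directly agrees with the library least-squares routine:

```python
import numpy as np

# An inconsistent 4 x 2 system: four equations, two unknowns.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0, 2.0, 4.0])

# Normal equations A^T A x = A^T b; A has rank 2, so A^T A is invertible.
x = np.linalg.solve(A.T @ A, A.T @ b)

# The library routine minimizes ||b - Ax|| and agrees with x.
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x, x_ls)
```

In practice one usually solves the least squares problem via a QR factorization of $A$ rather than by forming $A^T A$, since forming $A^T A$ squares the condition number; the normal equations are shown here because they are what the derivation above produces.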

II. We know that we can solve $Ax = \hat b$ for any $\hat b \in R(A)$, since Gaussian elimination will give the answer. One may now pose the problem: find the solution $x$ of $Ax = \hat b$, where $\hat b$ is the vector in $R(A)$ which is closest in norm to $b$. As we saw in Module 4, the vector $\hat b$ is the orthogonal projection of $b$ onto $\operatorname{span}\{A_1, \dots, A_n\}$. Thus
$$\hat b = \sum_{j=1}^{n} \alpha_j A_j = A\alpha,$$
where $\alpha$ is computed from
$$\mathcal{A}\alpha = d \quad \text{with } \mathcal{A}_{ij} = \langle A_j, A_i \rangle \text{ and } d_i = \langle b, A_i \rangle.$$
It follows that $\mathcal{A}$ and $d$ can be written in matrix notation as
$$\mathcal{A} = A^T A, \qquad d = A^T b,$$
so that by inspection the solution of
$$Ax = \hat b = A\alpha = A(A^T A)^{-1} A^T b$$
is
$$x = (A^T A)^{-1} A^T b,$$
provided $A$ has rank $n$. Hence the least squares solution is the exact solution of the closest linear system for which there is an exact solution.
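The second point of view can be checked with the same kind of sketch, again assuming NumPy and reusing the illustrative system from above: the projection $\hat b = A(A^T A)^{-1} A^T b$ leaves a residual orthogonal to every column of $A$, and solving the consistent system $Ax = \hat b$ recovers the least squares solution exactly.

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0, 2.0, 4.0])

# Orthogonal projection of b onto R(A): b_hat = A (A^T A)^{-1} A^T b.
x = np.linalg.solve(A.T @ A, A.T @ b)
b_hat = A @ x

# The residual b - b_hat is orthogonal to every column of A.
print(A.T @ (b - b_hat))           # ~[0. 0.] up to rounding

# Solving Ax = b_hat (a consistent system) recovers the least squares x.
x2, *_ = np.linalg.lstsq(A, b_hat, rcond=None)
print(np.allclose(x, x2))          # True
```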

Module 8 - Homework

1) Let $V_1 = \{u : u \in C^0[-1, 1]\}$ and $V_2 = C^0[-1, 1]$, and define
$$(Lu)(t) = \int_{-1}^{t} s\,u(s)\,ds.$$
Show that $L$ is linear and find $N(L)$. Show that the range of $L$ is not all of $V_2$.

2) Let
$$A = \begin{pmatrix} 1 & 5 & 9 & 13 & 6 \\ 2 & 6 & 10 & 14 & 8 \\ 3 & 7 & 11 & 15 & 10 \\ 4 & 8 & 12 & 16 & 12 \end{pmatrix}.$$
What is the rank of $A$? Find an orthogonal (with respect to the dot product) basis of the null space and range of $A$.

3) Let $A$ be an $m \times n$ matrix. Assume that its columns are linearly independent.

i) Show that in this case $n \le m$.

ii) Show that one can find an $n \times m$ matrix $B$ such that $BA = I_n$, where $I_n$ is the $n \times n$ identity matrix.

4) Suppose the cost $C(t)$ of a process grows quadratically with time, i.e.,
$$C(t) = a_0 + a_1 t + a_2 t^2.$$
Company records contain the following data:

time taken    measured cost
    .1            .911
    .2            .84
    .3            .788
    .4            .76
    .5            .747
    .6            .77

What would be your estimate of the cost of the process if it takes one unit of time?