Math 2270-004 Week 7 notes

We will not necessarily finish the material from a given day's notes on that day. We may also add or subtract some material as the week progresses, but these notes represent an in-depth outline of what we plan to cover. These notes cover material in sections 4.1-4.3.

Tues Feb 20: 4.1 Vector spaces and subspaces.

Announcements:

Warm-up Exercise:

We dealt with vector equations - linear combination equations - a lot in Chapters 1-2. There are other objects, for example functions, that one can add and scalar multiply. The useful framework that includes these other examples is the abstract concept of a vector space. There is a body of results that depends only on the axioms below, and not on the particular vector space in which one is working. So rather than redo those results in different-seeming contexts, we understand them once abstractly, in order to apply them in many places. And we can visualize the abstractions by relying on our solid foundation in ℝⁿ.

Definition: A vector space is a nonempty set V of objects, called vectors, on which are defined two operations, called addition and scalar multiplication, so that the ten axioms listed below hold. These axioms must hold for all vectors u, v, w in V, and for all scalars c, d.

1. The sum of u and v, denoted by u + v, is (also) in V. (closure under addition)
2. u + v = v + u (commutative property of addition)
3. (u + v) + w = u + (v + w) (associative property of addition)
4. There is a zero vector 0 in V so that u + 0 = u. (additive identity)
5. For each u ∈ V there is a vector -u ∈ V so that u + (-u) = 0. (additive inverses)
6. The scalar multiple of u by c, denoted by c u, is (also) in V. (closure under scalar multiplication)
7. c(u + v) = c u + c v (scalar multiplication distributes over vector addition)
8. (c + d) u = c u + d u (scalar addition distributes over scalar multiplication of vectors)
9. c(d u) = (c d) u (associative property of scalar multiplication)
10. 1 u = u (multiplicative identity)

The following three algebra rules follow from the first 10, and are also useful:

11) 0 u = 0.
12) c 0 = 0.
13) -u = (-1) u.
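With vectors in ℝ² represented as Python tuples, several of the axioms can be spot-checked numerically. This is a sketch for illustration only - a numerical spot-check at a few sample vectors is not a proof that the axioms hold:

```python
# Spot-check several vector space axioms for R^2, with vectors as tuples.
# A numerical check at sample points illustrates the axioms; it is not a proof.

def add(u, v):
    return tuple(ui + vi for ui, vi in zip(u, v))

def scale(c, u):
    return tuple(c * ui for ui in u)

u, v, w = (1.0, 2.0), (3.0, -1.0), (0.5, 4.0)
c, d = 2.0, -3.0

assert add(u, v) == add(v, u)                                # axiom 2
assert add(add(u, v), w) == add(u, add(v, w))                # axiom 3
assert add(u, (0.0, 0.0)) == u                               # axiom 4
assert add(u, scale(-1, u)) == (0.0, 0.0)                    # axioms 5 and 13
assert scale(c, add(u, v)) == add(scale(c, u), scale(c, v))  # axiom 7
assert scale(c + d, u) == add(scale(c, u), scale(d, u))      # axiom 8
assert scale(c, scale(d, u)) == scale(c * d, u)              # axiom 9
assert scale(1, u) == u                                      # axiom 10
print("all axiom spot-checks passed")
```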

Example 1: ℝⁿ, n ≥ 1 a positive integer, with vector addition and scalar multiplication defined component-wise, and with the zero vector being the vector which has every entry equal to zero. Then the properties above just reduce to properties of real number algebra.

Example 2: The space of m × n matrices (with m, n fixed), with matrix addition and scalar multiplication defined component-wise, and with the zero vector being the matrix which has every entry equal to zero.

Example 3: The set S of doubly-infinite sequences of numbers. (Think of these as discrete-time signals.) So an element of S can be written as

    {yₖ} = ..., y₋₂, y₋₁, y₀, y₁, y₂, y₃, ...

If {zₖ} is another element of S, then the sum {yₖ} + {zₖ} is the sequence {yₖ + zₖ}, and the scalar multiple c{yₖ} is the sequence {c yₖ}. Checking the vector space axioms is the same as it would be in ℝⁿ, since addition and scalar multiplication are done entry by entry. (There are just infinitely many entries.) One can visualize such a discrete-time signal in terms of its graph over its integer "domain". How could you visualize "vector" addition and scalar multiplication in this case?
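The entry-by-entry operations of Example 3 can be sketched in code, representing a signal that has finitely many nonzero entries as a dict from the index k to yₖ (an assumed representation chosen for illustration; missing keys are implicitly zero):

```python
# A doubly-infinite signal with finitely many nonzero entries, stored as a
# dict {k: y_k}.  Addition and scalar multiplication are entry by entry,
# just as in R^n.

def sig_add(y, z):
    keys = set(y) | set(z)
    s = {k: y.get(k, 0) + z.get(k, 0) for k in keys}
    return {k: v for k, v in s.items() if v != 0}   # drop entries that cancel

def sig_scale(c, y):
    return {k: c * v for k, v in y.items() if c * v != 0}

y = {-1: 2, 0: 1, 3: 5}     # ..., y_-1 = 2, y_0 = 1, y_3 = 5, zeros elsewhere
z = {0: -1, 3: -5}

print(sig_add(y, z))        # -> {-1: 2}: the entries at k = 0 and k = 3 cancel
print(sig_scale(3, y))      # -> {-1: 6, 0: 3, 3: 15}
```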

Example 4: Let V be the set of all real-valued functions defined on a domain D on the real line. (In practice D is usually an interval or the entire real line.) If f and g are in V, then we define their sum and scalar multiples "component-wise" too, where each t ∈ D gives a component:

    (f + g)(t) := f(t) + g(t)
    (c f)(t) := c f(t).

So for example, if the domain is the entire real line, f is defined by the rule f(t) = 1 + 2 sin(3t), and g is defined by the rule g(t) = 3 eᵗ, then the function f + g is defined by the rule

    (f + g)(t) = 1 + 2 sin(3t) + 3 eᵗ

and 7f is the function defined by the rule

    (7f)(t) = 7(1 + 2 sin(3t)) = 7 + 14 sin(3t).

Why do the vector space axioms hold? What is the zero vector in this vector space? What is the additive inverse function? One can visualize functions in terms of their graphs - just like for the discrete-time signals - and then the graphs of the sum of two functions or of a scalar multiple of one function are constructed as you'd expect:
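The pointwise operations of Example 4 translate directly into code; f and g below are the functions from the example:

```python
import math

# Functions on a common domain form a vector space under pointwise operations:
# (f + g)(t) = f(t) + g(t) and (c f)(t) = c * f(t).

def fn_add(f, g):
    return lambda t: f(t) + g(t)

def fn_scale(c, f):
    return lambda t: c * f(t)

f = lambda t: 1 + 2 * math.sin(3 * t)   # f(t) = 1 + 2 sin(3t)
g = lambda t: 3 * math.exp(t)           # g(t) = 3 e^t

h = fn_add(f, g)                        # (f + g)(t) = 1 + 2 sin(3t) + 3 e^t
seven_f = fn_scale(7, f)                # (7 f)(t) = 7 + 14 sin(3t)

t = 0.5
assert abs(h(t) - (f(t) + g(t))) < 1e-12
assert abs(seven_f(t) - 7 * f(t)) < 1e-12

zero = lambda t: 0.0                    # the zero vector is the zero function
assert fn_add(f, zero)(t) == f(t)
```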

Definition: A subspace of a vector space V is a subset H of V which is itself a vector space with respect to the addition and scalar multiplication in V. As soon as one verifies a), b), c) below for H, it will be a subspace, because H will "inherit" the other axioms just by being contained in V.

a) The zero vector of V is in H.
b) H is closed under vector addition, i.e. if u ∈ H and v ∈ H, then u + v ∈ H.
c) H is closed under scalar multiplication, i.e. if u ∈ H and c ∈ ℝ, then c u ∈ H.

Just to double check that the other properties get inherited:

Definition: A vector space is a nonempty set V of objects, called vectors, on which are defined two operations, called addition and scalar multiplication, so that the ten axioms listed below hold. These axioms must hold for all vectors u, v, w in V, and for all scalars c, d.

1. The sum of u and v, denoted by u + v, is (also) in V. (closure under addition)
2. u + v = v + u (commutative property of addition)
3. (u + v) + w = u + (v + w) (associative property of addition)
4. There is a zero vector 0 in V so that u + 0 = u. (additive identity)
5. For each u ∈ V there is a vector -u ∈ V so that u + (-u) = 0. (additive inverses)
6. The scalar multiple of u by c, denoted by c u, is (also) in V. (closure under scalar multiplication)
7. c(u + v) = c u + c v (scalar multiplication distributes over vector addition)
8. (c + d) u = c u + d u (scalar addition distributes over scalar multiplication of vectors)
9. c(d u) = (c d) u (associative property of scalar multiplication)
10. 1 u = u (multiplicative identity)

The following three algebra rules follow from the first 10, and are also useful:

11) 0 u = 0.
12) c 0 = 0.
13) -u = (-1) u.
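As a concrete illustration of criteria a), b), c), here is a numerical spot-check that the plane x + y + z = 0 in ℝ³ (an assumed example, not one from the notes) satisfies all three:

```python
# Spot-check the subspace criteria a), b), c) for
# H = { (x, y, z) in R^3 : x + y + z = 0 }, a plane through the origin.

def in_H(v):
    return abs(sum(v)) < 1e-12

def add(u, v):
    return tuple(ui + vi for ui, vi in zip(u, v))

def scale(c, u):
    return tuple(c * ui for ui in u)

u = (1.0, -3.0, 2.0)
v = (4.0, 1.0, -5.0)

assert in_H((0.0, 0.0, 0.0))        # a) the zero vector is in H
assert in_H(u) and in_H(v)
assert in_H(add(u, v))              # b) closed under vector addition
assert in_H(scale(-2.5, u))         # c) closed under scalar multiplication
```

By contrast, a plane that misses the origin fails criterion a) immediately, which is why it is not a subspace.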

Big Exercise: The vector space ℝⁿ has subspaces! But there aren't very many kinds, it turns out. (Even though there are countless kinds of subsets of ℝⁿ.) Let's find all the possible kinds of subspaces of ℝ³, using our expertise with matrix reduced row echelon form.

Wed Feb 21: 4.1-4.2 Vector spaces and subspaces; null spaces, column spaces, and the connections to linear transformations.

Announcements:

Warm-up Exercise:

We've been discussing the abstract notions of vector spaces and subspaces, with some specific examples to help us with our intuition. Today we continue that discussion. We'll continue to use exactly the same language we used in Chapters 1-2... except now it's for general vector spaces. Let V be a vector space. (Do you recall that definition, at least roughly speaking?)

Definition: If we have a collection of p vectors v₁, v₂, ..., vₚ in V, then any vector v ∈ V that can be expressed as a sum of scalar multiples of these vectors is called a linear combination of them. In other words, if we can write

    v = c₁v₁ + c₂v₂ + ... + cₚvₚ,

then v is a linear combination of v₁, v₂, ..., vₚ. The scalars c₁, c₂, ..., cₚ are called the linear combination coefficients or weights.

Definition: The span of a collection of vectors, written as span{v₁, v₂, ..., vₚ}, is the collection of all linear combinations of those vectors.

Definition:
a) An indexed set of vectors {v₁, v₂, ..., vₚ} in V is said to be linearly independent if no one of the vectors is a linear combination of (some of) the other vectors. The concise way to say this is that the only way 0 can be expressed as a linear combination of these vectors,

    c₁v₁ + c₂v₂ + ... + cₚvₚ = 0,

is for all of the weights to be zero: c₁ = c₂ = ... = cₚ = 0.

b) An indexed set of vectors {v₁, v₂, ..., vₚ} is said to be linearly dependent if at least one of these vectors is a linear combination of (some of) the other vectors. The concise way to say this is that there is some way to write 0 as a linear combination of these vectors,

    c₁v₁ + c₂v₂ + ... + cₚvₚ = 0,

where not all of the cⱼ = 0. (We call such an equation a linear dependency. Note that if we have any such linear dependency, then any vⱼ with cⱼ ≠ 0 is a linear combination of the remaining vₖ with k ≠ j. We say that such a vⱼ is linearly dependent on the remaining vₖ.)
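In ℝⁿ the homogeneous equation c₁v₁ + ... + cₚvₚ = 0 can be tested by row reduction: the vectors are linearly independent exactly when every column of the matrix [v₁ ... vₚ] is a pivot column, i.e. when the rank equals p. A minimal sketch (the `rank` routine is a bare-bones Gaussian elimination written for illustration):

```python
# Test linear independence of vectors in R^n by row reducing the matrix
# whose columns are the vectors: independent exactly when rank equals
# the number of vectors.

def rank(rows):
    m = [row[:] for row in rows]
    r = 0
    for col in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if abs(m[i][col]) > 1e-12), None)
        if piv is None:
            continue                    # no pivot in this column
        m[r], m[piv] = m[piv], m[r]     # swap the pivot row up
        m[r] = [x / m[r][col] for x in m[r]]
        for i in range(len(m)):
            if i != r:
                m[i] = [a - m[i][col] * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def independent(vectors):
    # vectors are given as lists; build the matrix with them as columns
    rows = [[v[i] for v in vectors] for i in range(len(vectors[0]))]
    return rank(rows) == len(vectors)

assert independent([[1, 0, 2], [0, 1, 0]])        # two independent vectors
assert not independent([[1, 0, 2], [2, 0, 4]])    # v2 = 2 v1: dependent
```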

And from yesterday:

Definition: A subspace of a vector space V is a subset H of V which is itself a vector space with respect to the addition and scalar multiplication in V. As soon as one verifies a), b), c) below for H, it will be a subspace.

a) The zero vector of V is in H.
b) H is closed under vector addition, i.e. if u ∈ H and v ∈ H, then u + v ∈ H.
c) H is closed under scalar multiplication, i.e. if u ∈ H and c ∈ ℝ, then c u ∈ H.

Theorem (spans are subspaces): Let V be a vector space, and let {v₁, v₂, ..., vₙ} be a set of vectors in V. Then H = span{v₁, v₂, ..., vₙ} is a subspace of V.

Proof: We need to check that for H = span{v₁, v₂, ..., vₙ}:

a) The zero vector of V is in H.
b) H is closed under vector addition, i.e. if u ∈ H and v ∈ H, then u + v ∈ H.
c) H is closed under scalar multiplication, i.e. if u ∈ H and c ∈ ℝ, then c u ∈ H.

Remark: Using minimal spanning sets was how we were able to characterize all possible subspaces of ℝ³ yesterday (or today, if we didn't finish on Tuesday). Can you characterize all possible subspaces of ℝⁿ in this way?

Example: Let Pₙ be the space of polynomials of degree at most n,

    Pₙ = { p(t) = a₀ + a₁t + a₂t² + ... + aₙtⁿ such that a₀, a₁, ..., aₙ ∈ ℝ }.

Note that Pₙ is the span of the n + 1 functions p₀(t) = 1, p₁(t) = t, p₂(t) = t², ..., pₙ(t) = tⁿ. Although we often consider Pₙ as a vector space on its own, we can also consider it to be a subspace of the much larger vector space V of all functions from ℝ to ℝ.

Exercise 1: Abbreviating the functions by their formulas, we have P₃ = span{1, t, t², t³}. Are the functions in the set {1, t, t², t³} linearly independent or linearly dependent?
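One line of reasoning for Exercise 1 can be checked by machine: if c₀·1 + c₁t + c₂t² + c₃t³ = 0 for every t, then in particular it vanishes at t = 0, 1, -1, 2 (our own choice of sample points), and the resulting 4 × 4 Vandermonde system has nonzero determinant, forcing c₀ = c₁ = c₂ = c₃ = 0. A sketch:

```python
# If a polynomial dependency c0 + c1 t + c2 t^2 + c3 t^3 = 0 held for all t,
# it would hold at t = 0, 1, -1, 2.  The Vandermonde matrix of those points
# has nonzero determinant, so only the trivial weights work: the functions
# 1, t, t^2, t^3 are linearly independent.

def det(m):
    # cofactor expansion along the first row
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

points = [0, 1, -1, 2]
vandermonde = [[t ** k for k in range(4)] for t in points]
assert det(vandermonde) != 0
print(det(vandermonde))       # -> 12
```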

4.2 Null spaces, column spaces, and linear transformations from ℝⁿ to ℝᵐ.

Definition: Let A be an m × n matrix, expressed in column form as A = [a₁ a₂ a₃ ... aₙ]. The column space of A, written as Col A, is the span of the columns:

    Col A = span{a₁, a₂, a₃, ..., aₙ}.

Equivalently, since

    A x = x₁a₁ + x₂a₂ + ... + xₙaₙ,

we see that Col A is also the range of the linear transformation T : ℝⁿ → ℝᵐ given by T(x) = A x, i.e.

    Col A = { b ∈ ℝᵐ such that b = A x for some x ∈ ℝⁿ }.

Theorem: By the "spans are subspaces" theorem, Col A is always a subspace of ℝᵐ.

Exercise 2a) Consider

    A = [ 1  2  0  1  1
          2  4  0  2  2
          3  6  4  1  7 ].

By the Theorem, Col A is a subspace of ℝ³. Which is it: {0}, a line thru the origin, a plane thru the origin, or all of ℝ³? Hint:

    [ 1  2  0  1  1                 [ 1  2  0  1  1
      2  4  0  2  2    reduces to     0  0  1  1  1
      3  6  4  1  7 ]                 0  0  0  0  0 ].

2b) Is there a more efficient way to express Col A as a span that doesn't require all five column vectors?
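For Exercise 2, the pivot positions can be found by row reducing A in code; the pivot columns of the reduced form tell us which original columns of A to keep as a spanning set. A sketch with a minimal RREF routine (written for illustration):

```python
# Row reduce A from Exercise 2a and record the pivot columns.  The
# corresponding *original* columns of A span Col A.

def rref_pivots(rows):
    m = [[float(x) for x in row] for row in rows]
    pivots, r = [], 0
    for col in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if abs(m[i][col]) > 1e-12), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        m[r] = [x / m[r][col] for x in m[r]]
        for i in range(len(m)):
            if i != r:
                m[i] = [a - m[i][col] * b for a, b in zip(m[i], m[r])]
        pivots.append(col)
        r += 1
    return m, pivots

A = [[1, 2, 0, 1, 1],
     [2, 4, 0, 2, 2],
     [3, 6, 4, 1, 7]]

R, pivots = rref_pivots(A)
print(pivots)    # -> [0, 2]: the 1st and 3rd columns of A are pivot columns
```

So Col A is spanned by the two pivot columns of A, which is the efficiency asked about in 2b).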

Definition: If a set of vectors {v₁, v₂, ..., vₙ} in a vector space V is linearly independent and also spans V, then the collection is called a basis for V.

Exercise 3: Exhibit a basis for Col A in Exercise 2.

Exercise 4: Exhibit a basis for P₃ in Exercise 1.

Fri Feb 23: 4.2-4.3 Null spaces and column spaces; kernel and range of linear transformations as subspaces. Linearly independent sets and bases for vector spaces.

Announcements:

Warm-up Exercise:

We've seen that one (explicit) way that subspaces arise is as the span of a specified collection of vectors. The primary (implicit) way that subspaces are described is related to the following:

Definition: The null space of an m × n matrix A is the set of x ∈ ℝⁿ for which A x = 0. We denote this set by Nul A. Equivalently, in terms of the associated linear transformation T : ℝⁿ → ℝᵐ given by T(x) = A x, Nul A is the set of points in the domain which are transformed into the zero vector in the codomain.

Theorem: Let A be an m × n matrix. Then Nul A is a subspace of ℝⁿ.

Proof: We need to check that for H = Nul A:

a) The zero vector of ℝⁿ is in H.
b) H is closed under vector addition, i.e. if u ∈ H and v ∈ H, then u + v ∈ H.
c) H is closed under scalar multiplication, i.e. if u ∈ H and c ∈ ℝ, then c u ∈ H.
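The proof outline can be illustrated numerically: if M u = 0 and M v = 0, then M(u + v) = 0 and M(c u) = 0, by linearity of matrix-vector multiplication. (The small matrix M below is an assumed example, not the A from these notes.)

```python
# Illustrate the subspace criteria for H = Nul M numerically.

def matvec(M, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in M]

M = [[1, 2, 3],
     [2, 4, 6]]          # Nul M is the plane x1 + 2 x2 + 3 x3 = 0

u = [-2.0, 1.0, 0.0]     # M u = 0
v = [-3.0, 0.0, 1.0]     # M v = 0
is_zero = lambda y: all(abs(yi) < 1e-12 for yi in y)

assert is_zero(matvec(M, u)) and is_zero(matvec(M, v))
assert is_zero(matvec(M, [0.0, 0.0, 0.0]))          # a) zero vector in Nul M
w = [ui + vi for ui, vi in zip(u, v)]
assert is_zero(matvec(M, w))                        # b) closed under addition
assert is_zero(matvec(M, [5 * ui for ui in u]))     # c) closed under scaling
```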

Exercise 1a) For the same matrix A as in Exercise 2 from Wednesday's notes, express the vectors in Nul A explicitly, using the methods of Chapters 1-2. Notice these are vectors in the domain of the associated linear transformation T : ℝ⁵ → ℝ³ given by T(x) = A x, so Nul A is a subspace of ℝ⁵.

    A = [ 1  2  0  1  1                 [ 1  2  0  1  1
          2  4  0  2  2    reduces to     0  0  1  1  1
          3  6  4  1  7 ]                 0  0  0  0  0 ].

1b) Exhibit a basis for Nul A.
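The standard recipe for 1b) - one basis vector per free variable - can be carried out in code, working from A exactly as printed above; each candidate basis vector is verified to satisfy A v = 0 before being printed:

```python
# Compute a basis for Nul A by row reducing, then reading off one basis
# vector per free column, and check A v = 0 for each one.

def rref(rows):
    m = [[float(x) for x in r] for r in rows]
    pivots, r = [], 0
    for col in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if abs(m[i][col]) > 1e-12), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        m[r] = [x / m[r][col] for x in m[r]]
        for i in range(len(m)):
            if i != r:
                m[i] = [a - m[i][col] * b for a, b in zip(m[i], m[r])]
        pivots.append(col)
        r += 1
    return m, pivots

def null_basis(rows):
    m, pivots = rref(rows)
    n = len(rows[0])
    basis = []
    for f in (c for c in range(n) if c not in pivots):
        v = [0.0] * n
        v[f] = 1.0                          # set this free variable to 1
        for r, p in enumerate(pivots):
            v[p] = -m[r][f] + 0.0           # + 0.0 avoids printing -0.0
        basis.append(v)
    return basis

A = [[1, 2, 0, 1, 1],
     [2, 4, 0, 2, 2],
     [3, 6, 4, 1, 7]]

for v in null_basis(A):
    assert all(abs(sum(a * x for a, x in zip(row, v))) < 1e-9 for row in A)
    print(v)
```

Since A has three free columns, the basis has three vectors, so Nul A is a 3-dimensional subspace of ℝ⁵.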

The ideas of null space and column space generalize to arbitrary linear transformations between vector spaces - with slightly more general terminology.

Definition: Let V and W be vector spaces. A function T : V → W is called a linear transformation if for each x ∈ V there is a unique vector T(x) ∈ W, and so that

    (i) T(u + v) = T(u) + T(v) for all u, v ∈ V
    (ii) T(c u) = c T(u) for all u ∈ V, c ∈ ℝ.

Definition: The kernel (or null space) of T is defined to be { u ∈ V : T(u) = 0 }.

Definition: The range of T is { w ∈ W : w = T(v) for some v ∈ V }.

Theorem: Let T : V → W be a linear transformation. Then the kernel of T is a subspace of V, and the range of T is a subspace of W.

Remark: The theorem generalizes our earlier ones about Nul A and Col A, for matrix transformations T : ℝⁿ → ℝᵐ, T(x) = A x.

Exercise 2: Let V be the vector space of real-valued functions f defined on an interval [a, b] with the property that they are differentiable and that their derivatives are continuous functions on [a, b]. Let W be the vector space C[a, b] of all continuous functions on the interval [a, b]. Let D : V → W be the derivative transformation D(f) = f′.

2a) What Calculus differentiation rules tell you that D is a linear transformation?
2b) What subspace is the kernel of D?
2c) What is the range of D?
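For 2a) and 2b), a sketch: representing a polynomial by its coefficient list (an assumed stand-in for the more general functions in V), the sum and constant-multiple rules of differentiation become exactly the two linearity properties (i) and (ii), and the constant functions land in the kernel:

```python
# The derivative D is linear.  Represent a polynomial a0 + a1 t + ... + an t^n
# by its coefficient list [a0, a1, ..., an]; then D maps it to
# [a1, 2*a2, ..., n*an].

def deriv(p):
    return [k * p[k] for k in range(1, len(p))] or [0]

def p_add(p, q):
    n = max(len(p), len(q))
    return [(p[k] if k < len(p) else 0) + (q[k] if k < len(q) else 0)
            for k in range(n)]

def p_scale(c, p):
    return [c * a for a in p]

p = [1, 0, 3]        # p(t) = 1 + 3 t^2
q = [2, 5]           # q(t) = 2 + 5 t

# linearity: D(p + q) = D(p) + D(q) and D(c p) = c D(p)
assert deriv(p_add(p, q)) == p_add(deriv(p), deriv(q))
assert deriv(p_scale(4, p)) == p_scale(4, deriv(p))
# constants are in the kernel of D: D(c) = 0
assert deriv([7]) == [0]
```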