Vector Space Concepts

Vector Space Concepts
ECE 174: Introduction to Linear & Nonlinear Optimization
Ken Kreutz-Delgado, ECE Department, UC San Diego
Fall 2016

Vector Space Theory

- What are vectors and linear vector spaces?
- What are norms and normed linear vector spaces? (Theory of Banach spaces.)
- What are inner products and inner product vector spaces? (Theory of Hilbert spaces.)
- What are linear operators and the geometry induced by linear operators? (The four fundamental subspaces associated with a linear operator.)
- What is a linear inverse problem? (Well-posed and ill-posed inverse problems.)
- How does one solve a linear inverse problem?
  - Minimum norm solution and weighted least squares solution.
  - Projection theorem in Hilbert spaces (orthogonality principle).
  - Generalized inverses (pseudo-inverse, QR-factorization, SVD).

Heuristic Concept of a Linear Vector Space

Many important physical, engineering, biological, sociological, economic, and scientific quantities, which we call vectors, have the following conceptual properties:

- There exists a natural or conventional zero point or origin, the zero vector 0.
- Vectors can be added in a commutative and associative manner to produce other vectors:

    z = x + y = y + x, x, y, z vectors (commutativity)
    x + y + z := (x + y) + z = x + (y + z), x, y, z vectors (associativity)

- Vectors can be scaled by multiplication by scalars to produce other vectors:

    z = αx = xα, x, z vectors, α a scalar (scalar multiplication of x by α)

- The scalars can be members of any fixed field (such as the field of rational functions). We will work only with the fields of real and complex numbers.
- Each vector x has an additive inverse −x = (−1)x:

    x + (−1)x = x + (−x) = 0.

Formal Concept of a Linear Vector Space

A vector space X is a set of vectors x ∈ X over a field F of scalars. If the scalars are the field of real numbers, we have a real vector space; if the scalars are the field of complex numbers, we have a complex vector space.

- Any vector x ∈ X can be multiplied by an arbitrary scalar α to form αx = xα ∈ X. This is called scalar multiplication. Note that we must have closure of scalar multiplication: the new vector formed via scalar multiplication must also be in X.
- Any two vectors x, y ∈ X can be added to form x + y ∈ X, where the operation + of vector addition is associative and commutative. Note that we must have closure of vector addition.
- The vector space X must contain an additive identity (the zero vector 0) and, for every vector x, an additive inverse −x.

In this course we primarily consider finite-dimensional vector spaces (dim X = n < ∞) and mostly give results appropriate for this restriction.

Linear Vector Spaces Cont.

Any vector x in an n-dimensional vector space can be represented (with respect to an appropriate basis; see below) as an n-tuple (an n × 1 column vector) over the field of scalars:

    x = (ξ_1, ..., ξ_n)^T ∈ X = F^n, where F^n = R^n or C^n.

We refer to this as a canonical representation of a finite-dimensional vector. We often (but not always) assume that vectors in an n-dimensional vector space are canonically represented by n × 1 column vectors.

Linear Vector Spaces Cont.

Any linear combination of arbitrarily selected vectors x_1, ..., x_r drawn from the space X,

    α_1 x_1 + ... + α_r x_r,

for arbitrary r and scalars α_i, i = 1, ..., r, must also be a vector in X. This is easily shown via induction using closure under pairwise vector addition, closure under scalar multiplication, and associativity of vector addition.

This global closure of linear combinations (i.e., the property holds everywhere on X) is why we often refer to X as a (globally) linear vector space. This is in contradistinction to locally linear spaces, such as differentiable manifolds, of which the surface of a ball is the classic example: a space that is locally linear (flat) but globally curved.

Some important physical phenomena of interest cannot be modeled by linear vector spaces, the classic example being rotations of a rigid body in three-dimensional space (this is because finite, i.e., non-infinitesimal, rotations do not commute).
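The non-commutativity of finite rotations is easy to verify numerically; here is a minimal sketch (not from the slides; numpy and the standard axis-rotation matrices are assumed for illustration):

```python
import numpy as np

def rot_x(theta):
    """Rotation by theta radians about the x-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_z(theta):
    """Rotation by theta radians about the z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

t = np.pi / 2
A, B = rot_x(t), rot_z(t)

# Finite rotations about different axes do not commute:
print(np.allclose(A @ B, B @ A))   # False
```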

Examples of Vectors

Voltages, currents, power, energy, forces, displacements, velocities, accelerations, temperatures, torques, angular velocities, income, profits, ..., can all be modeled as vectors.

Example: the set of all m × n matrices. Define matrix addition by component-wise addition and scalar multiplication by component-wise multiplication of each matrix component by the scalar. This is easily shown to be a vector space. We can place the elements of this mn-dimensional vector space into canonical form by stacking the columns of an m × n matrix A to form an mn × 1 column vector denoted by vec(A) (sometimes also denoted by stack(A)).

Example: Take X = { f(t) = x_1 cos(ω_1 t) + x_2 cos(ω_2 t) for −∞ < t < ∞; x_1, x_2 ∈ R; ω_1 ≠ ω_2 } and define vector addition and scalar multiplication component-wise. Note that any vector f ∈ X has a canonical representation x = (x_1, x_2)^T ∈ R^2. Thus X ≅ R^2, and without loss of generality (wlog) we often work with R^2 in lieu of X.
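The vec(·) column-stacking operation above can be sketched in numpy (a minimal illustration; the example matrix is arbitrary):

```python
import numpy as np

# vec(A): stack the columns of an m x n matrix into an mn x 1 vector.
A = np.array([[1, 2, 3],
              [4, 5, 6]])          # m = 2, n = 3

vec_A = A.flatten(order='F')       # 'F' = column-major (Fortran) order
print(vec_A)                       # [1 4 2 5 3 6]
```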

Examples of Vectors Cont.

Important example: the set of all functions forms a vector space.

Consider functions (say of time t) f and g, which we sometimes also denote f(·) and g(·). Here f(t) is the value of the function f at time t (think of f(t) as a sample of f taken at time t). Strictly speaking, then, f(t) is not the function f itself.

Functions are single-valued by definition. Therefore

    f(t) = g(t) for all t  ⟺  f = g.

I.e., functions are uniquely defined once we know their output values for all possible input values t.

We can define vector addition to create a new function h = f + g by specifying the value of h(t) for all t, which we do as follows:

    h(t) = (f + g)(t) := f(t) + g(t), for all t.

We define scalar multiplication of the function f by the scalar α to create a new function g = αf via

    (αf)(t) = α f(t), for all t.

Finally, we define the zero function o as the function that maps to the scalar value 0 for all t: o(t) = 0 for all t.
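The pointwise definitions above translate directly into code; a minimal sketch (not from the slides) using Python closures:

```python
import math

def add(f, g):
    """Vector addition of functions: (f + g)(t) = f(t) + g(t)."""
    return lambda t: f(t) + g(t)

def scale(alpha, f):
    """Scalar multiplication: (alpha * f)(t) = alpha * f(t)."""
    return lambda t: alpha * f(t)

zero = lambda t: 0.0               # the zero function o(t) = 0

h = add(math.sin, math.cos)        # a new function, defined pointwise
print(h(0.0))                      # sin(0) + cos(0) = 1.0
print(scale(3.0, math.sin)(0.0))   # 3 * sin(0) = 0.0
```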

Vector Subspaces

A subset V ⊆ X is a subspace of a vector space X if it is a vector space in its own right. If V is a subspace of a vector space X, we call X the parent space or ambient space of V.

It is understood that a subspace V inherits the vector addition and scalar multiplication operations from the ambient space X. To be a subspace, V must also contain the zero vector. Given this fact, to determine whether a subset V is also a subspace, one needs to check that every linear combination of vectors in V yields a vector in V. This latter property is called closure of the subspace V under linear combinations of vectors in V. Therefore, if closure fails to hold for a subset V, then V is not a vector subspace. Note that testing for closure includes, as a special case, testing whether the zero vector belongs to V.

Vector Subspaces - Cont.

Consider the complex vector space X of complex n × n matrices, n > 1, with matrix addition and scalar multiplication defined component-wise. Are the following subsets of X vector subspaces?

- V = upper triangular matrices. This is a subspace, as it is closed under the scalar multiplication and vector addition operations inherited from X.
- V = positive definite matrices. This is not a subspace, as it is not closed under scalar multiplication. (Or, even simpler, it does not contain the zero element.)
- V = symmetric matrices, A = A^T. This is a subspace, as it is closed under the operations inherited from X.
- V = Hermitian matrices, A = A^H, where A^H = (A^T)‾ = (Ā)^T is the conjugate transpose. This is not a subspace, as it is not closed under scalar multiplication by complex scalars (check this!). It does include the zero element.
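The failure of closure for the Hermitian matrices can be checked numerically; a minimal sketch (the example matrix is arbitrary, numpy assumed):

```python
import numpy as np

def is_hermitian(M):
    """True iff M equals its conjugate transpose M^H."""
    return np.allclose(M, M.conj().T)

A = np.array([[2, 1 - 1j],
              [1 + 1j, 3]])        # Hermitian: A == A^H

print(is_hermitian(A))             # True
print(is_hermitian(2.0 * A))       # True: closed under *real* scaling
print(is_hermitian(1j * A))        # False: complex scaling breaks closure
```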

Subspace Sums

Given two subsets V and W of vectors, we define their set sum by

    V + W = { v + w | v ∈ V and w ∈ W }.

Now let the sets V and W in addition both be subspaces of X. In this case we call V + W a subspace sum, and we have that V ∩ W and V + W are also subspaces of X, with

    V ∩ W ⊆ V + W,

whereas in general the union V ∪ W is not a subspace. In general, we have the following ordering of subspaces,

    {0} ⊆ V ∩ W ⊆ V + W ⊆ X,

where {0} is the trivial subspace consisting only of the zero vector (additive identity) of X. The trivial subspace has dimension zero.

Linear Independence

By definition, r vectors x_1, ..., x_r ∈ X are linearly independent when

    α_1 x_1 + ... + α_r x_r = 0  if and only if  α_1 = ... = α_r = 0.

Suppose this condition is violated because (say) α_1 ≠ 0; then we have

    x_1 = −(1/α_1)(α_2 x_2 + ... + α_r x_r),

i.e., x_1 is a linear combination of the remaining vectors. A collection of vectors is linearly dependent if it is not linearly independent.

Linear Independence - Cont.

Assume that the r vectors x_i are canonically represented, x_i ∈ F^n. Then the definition of linear independence can be written in matrix-vector form as

    X α = (x_1 ··· x_r)(α_1, ..., α_r)^T = 0  ⟹  α = (α_1, ..., α_r)^T = 0.

Thus x_1, ..., x_r are linearly independent iff the associated n × r matrix X = (x_1 ··· x_r) has full column rank (equivalently, iff the null space of X is trivial).
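The full-column-rank test is immediate in numpy; a minimal sketch with two hypothetical collections of vectors stacked as columns:

```python
import numpy as np

# Columns of each matrix are the vectors x_1, ..., x_r.
X_indep = np.array([[1., 0.],
                    [0., 1.],
                    [1., 1.]])     # full column rank -> independent
X_dep = np.array([[1., 2.],
                  [2., 4.],
                  [3., 6.]])       # second column = 2 * first -> dependent

print(np.linalg.matrix_rank(X_indep))  # 2
print(np.linalg.matrix_rank(X_dep))    # 1
```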

Span of a Set of Vectors

The span of the collection x_1, ..., x_r ∈ X is the set of all linear combinations of the vectors,

    Span{x_1, ..., x_r} = { y | y = α_1 x_1 + ... + α_r x_r = X α, α ∈ F^r } ⊆ X.

The set V = Span{x_1, ..., x_r} is a vector subspace of X. If, in addition, the spanning vectors x_1, ..., x_r are linearly independent, we say that the collection is a linearly independent spanning set, or a basis, for the subspace V. We denote a basis for a subspace V by B_V = {x_1, ..., x_r}.

Basis and Dimension

Given a basis for a vector space or subspace, the number of basis vectors in the basis is unique. For a given space or subspace there are many different bases, but they must all have the same number of vectors. This number, then, is an intrinsic property of the space itself and is called the dimension d = dim V of the space or subspace V.

If the number of elements d in a basis is finite, we say that the space is finite dimensional; otherwise we say that the space is infinite dimensional.

Linear algebra is the study of linear mappings between finite-dimensional vector spaces. The study of linear mappings between infinite-dimensional vector spaces is known as linear functional analysis or linear operator theory.

Basis and Dimension Cont.

The dimension of the trivial subspace is zero: 0 = dim{0}. If V is a subspace of X, V ⊆ X, we have dim V ≤ dim X.

In general, for two arbitrary subspaces V and W of X we have

    dim(V + W) = dim V + dim W − dim(V ∩ W),

and

    0 ≤ dim(V ∩ W) ≤ dim(V + W) ≤ dim X.

Furthermore, if X = V + W, then

    dim X ≤ dim V + dim W,

with equality if and only if V ∩ W = {0}.
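The dimension formula can be verified numerically on a concrete pair of subspaces; a minimal sketch (a hypothetical example: V = span{e_1, e_2} and W = span{e_2, e_3} in R^3, so dim(V ∩ W) = 1):

```python
import numpy as np

rank = np.linalg.matrix_rank

V = np.eye(3)[:, [0, 1]]               # basis of V as columns
W = np.eye(3)[:, [1, 2]]               # basis of W as columns

dim_V, dim_W = rank(V), rank(W)
dim_sum = rank(np.hstack([V, W]))      # dim(V + W) = rank of stacked bases

# Rearranging dim(V + W) = dim V + dim W - dim(V ∩ W):
dim_intersection = dim_V + dim_W - dim_sum
print(dim_sum, dim_intersection)       # 3 1
```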

Independent Subspaces and Projections

Two subspaces V and W of a vector space X are independent, or disjoint, when V ∩ W = {0}. In this case we have dim(V + W) = dim V + dim W.

If X = V + W for two independent subspaces V and W, we say that V and W are companion subspaces and we write

    X = V ⊕ W.

In this case dim X = dim V + dim W.

Given two companion subspaces V and W, any vector x ∈ X can be written uniquely as

    x = v + w,  v ∈ V and w ∈ W.

The unique component v is called the projection of x onto V along its companion space W. The unique component w is called the projection of x onto W along its companion space V.

Independent Subspaces and Projections Cont.

[Figure not transcribed.]

Projection Operators

Given the unique decomposition x = v + w of a vector x along two companion subspaces V and W, we define the companion projection operators P_{V|W} and P_{W|V} by

    P_{V|W} x := v  and  P_{W|V} x := w.

Obviously P_{V|W} + P_{W|V} = I, i.e., P_{V|W} = I − P_{W|V}.

It is straightforward to show that P_{V|W} and P_{W|V} are both idempotent,

    P_{V|W}^2 = P_{V|W}  and  P_{W|V}^2 = P_{W|V},

where P_{V|W}^2 = P_{V|W} P_{V|W}. For example,

    P_{V|W}^2 x = P_{V|W}(P_{V|W} x) = P_{V|W} v = v = P_{V|W} x,

and since this is true for all x ∈ X, it must be the case that P_{V|W}^2 = P_{V|W}. It can also be shown that the projection operators P_{V|W} and P_{W|V} are linear operators.
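The idempotence and complementarity of companion projectors can be checked on a small example; a minimal sketch (a hypothetical pair of companion subspaces of R^2: V = span{(1, 0)^T}, W = span{(1, 1)^T}):

```python
import numpy as np

M = np.array([[1., 1.],
              [0., 1.]])               # columns: basis of V, then basis of W
D = np.diag([1., 0.])                  # keep the V-coordinate, zero the W-coordinate
P = M @ D @ np.linalg.inv(M)           # P_{V|W}: projection onto V along W
Q = np.eye(2) - P                      # P_{W|V}: the companion projector

print(np.allclose(P @ P, P))           # True: P is idempotent
print(np.allclose(Q @ Q, Q))           # True: Q is idempotent
print(np.allclose(P @ Q, np.zeros((2, 2))))  # True: complementary projectors
```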

Linear Operators and Matrices

Consider a function A that maps between two vector spaces X and Y, A : X → Y. X is called the input space, source space, or domain; Y is called the output space, target space, or codomain.

The mapping (or operator) A is said to be linear if

    A(α_1 x_1 + α_2 x_2) = α_1 A x_1 + α_2 A x_2  for all x_1, x_2 ∈ X and α_1, α_2 ∈ F.

Note that in order for this definition to be well-posed, the vector spaces X and Y must have the same field of scalars F. For example, X and Y must both be real vector spaces, or must both be complex vector spaces.

Linear Operators and Matrices - Cont.

It is well known that any linear operator between finite-dimensional vector spaces has a matrix representation. In particular, if n = dim X < ∞ and m = dim Y < ∞ for two vector spaces over the field F, then a linear operator A that maps between these two spaces has an m × n matrix representation over the field F. Note that projection operators on finite-dimensional vector spaces must have matrix representations as a consequence of their linearity. Often, for convenience, we assume that any such linear mapping A is an m × n matrix and we write A ∈ F^{m×n}.

Example: differentiation as a linear mapping between 2nd-order polynomials,

    d/dx (a + b x + c x^2) = b + 2c x,

which, using the simple polynomial basis functions 1, x, and x^2 (coefficients stacked as column vectors), has the matrix representation

    (b, 2c, 0)^T = [ 0 1 0 ; 0 0 2 ; 0 0 0 ] (a, b, c)^T.

If a different set of polynomial basis functions were used, we would have a different vector-matrix representation of differentiation. Again we note: representations of vectors and operators are basis dependent.
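The differentiation matrix from the example can be applied directly to coefficient vectors; a minimal sketch (the test polynomial is arbitrary):

```python
import numpy as np

# Differentiation on 2nd-order polynomials a + b*x + c*x^2,
# with coefficients stacked as (a, b, c)^T in the basis 1, x, x^2.
D = np.array([[0., 1., 0.],
              [0., 0., 2.],
              [0., 0., 0.]])

p = np.array([5., 3., 4.])     # p(x) = 5 + 3x + 4x^2
dp = D @ p                     # derivative: 3 + 8x
print(dp)                      # [3. 8. 0.]
```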

Two Linear Operator Induced Subspaces

Every linear operator has two natural vector subspaces associated with it:

- The range space (or image), R(A) := A(X) = { y | y = Ax, x ∈ X } ⊆ Y.
- The nullspace (or kernel), N(A) = { x | Ax = 0 } ⊆ X.

Note that the nullspace is a subspace of the source space (domain), while the range space is a subspace of the target space (the codomain). It is straightforward to show that N(A) and R(A) are linear subspaces using the fact that A is a linear operator.

When attempting to solve a linear problem y = Ax, a solution exists if and only if y ∈ R(A). If y ∈ R(A) we say that the problem is consistent; otherwise the problem is inconsistent.

Two Linear Operator Induced Subspaces Cont.

The dimension of the range space of a linear operator A is the rank of A,

    r(A) = rank(A) = dim R(A),

and the dimension of the nullspace of A is the nullity of A,

    ν(A) = nullity(A) = dim N(A).

The rank and nullity of a linear operator A have unique values which are independent of the specific matrix representation of A. They are intrinsic properties of the linear operator A and invariant with respect to all changes of representation. Note that, as dimensions, the rank and nullity must take on nonnegative integer values.

Given a matrix representation A ∈ F^{m×n}, standard undergraduate courses in linear algebra explain how to determine the rank and nullity via LU factorization (aka Gaussian elimination), which places the matrix into upper echelon form. The rank r = r(A) is then given by the number of nonzero pivots, while the nullity ν = ν(A) is given by ν = n − r.
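The rank-nullity relation ν = n − r can be checked in numpy (which computes rank via the SVD rather than LU, but the result is the same); a minimal sketch on an arbitrary example matrix:

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [1., 0., 1.]])     # second row = 2 * first row

r = np.linalg.matrix_rank(A)     # rank = dim R(A)
nullity = A.shape[1] - r         # rank-nullity: nu(A) = n - r(A)
print(r, nullity)                # 2 1
```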

Linear Forward and Inverse Problems

Given a linear mapping A : X → Y between two vector spaces, the problem of computing an output y in the codomain given an input vector x in the domain,

    Ax = y,

is called the forward problem. The forward problem is typically well-posed in that, knowing A and given x, one can construct y by (say) a straightforward matrix-vector multiplication.

Given a vector y in the codomain, the problem of determining an x in the domain for which y = Ax is known as an inverse problem. Solving the linear inverse problem is much harder than solving the forward problem, even when the problem is well-posed. Furthermore, the inverse problem is often ill-posed, compounding the difficulty.

Well-Posed and Ill-Posed Linear Inverse Problems

Given an m-dimensional vector y in the codomain, the inverse problem of determining an n-dimensional vector x in the domain for which Ax = y is said to be well-posed if and only if the following three conditions are true for the linear mapping A:

1. y ∈ R(A) for all y ∈ Y, so that a solution exists for all y. I.e., we demand that A be onto, R(A) = Y or, equivalently, that r(A) = m. It is not enough to merely require consistency for a given y, because even the tiniest error or misspecification in y can render the problem inconsistent.

2. If a solution exists, we demand that it be unique. I.e., we demand that A be one-to-one, N(A) = {0}. Equivalently, ν(A) = 0.

3. The solution x does not depend sensitively on the value of y. I.e., we demand that A be numerically well-conditioned.

If any of these three conditions is violated, we say that the inverse problem is ill-posed. Condition three is studied in great depth in courses on numerical linear algebra. In this course, we ignore the numerical conditioning problem and focus on the first two conditions only.
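The three conditions can be probed numerically; a minimal sketch (a hypothetical square map that satisfies conditions 1 and 2 but badly violates condition 3):

```python
import numpy as np

A = np.array([[1., 0.],
              [0., 1e-8]])       # invertible but badly conditioned

m, n = A.shape
r = np.linalg.matrix_rank(A)
onto = (r == m)                  # condition 1: A is onto (r(A) = m)
one_to_one = (r == n)            # condition 2: trivial nullspace (nu(A) = 0)
print(onto, one_to_one)          # True True

# Condition 3: sensitivity, measured by the condition number of A.
print(np.linalg.cond(A))         # ~1e8: tiny errors in y are hugely amplified
```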