Chapter 1: Introduction to Vectors
Po-Ning Chen, Professor
Department of Electrical and Computer Engineering
National Chiao Tung University, Hsin Chu, Taiwan 30010, R.O.C.

Notes for this course (1-1)

A few notes are given before the lectures begin.

1. In my slides, a red box usually indicates theorems, lemmas, etc.; a blue box usually contains definitions, important notions, etc.; a green box usually gives tips, supplementary material, etc.
2. These slides are prepared mainly for lecturing and are not intended to replace the textbook. You are advised to study the textbook and to practice the problems at the end of each section on your own.
3. The slides, like the textbook, are in English, and so will be the exam. This will force you to become familiar with linear-algebra terminology in English.

1.1 Vectors and their addition/subtraction (1-2)

What is a vector?
Answer: An ordered list of numbers, usually written in a column (hence, in particular, it is called a column vector). By dictionary, a vector is a quantity with direction.

Example. v = [2, 3]^T (a column vector, written transposed to save space).

How do we define the addition (or subtraction) of two vectors?
Answer: Component-wise addition/subtraction.

Example. Let w = [1, 1]^T and v = [2, 3]^T. Then
w + v = [1 + 2, 1 + 3]^T = [3, 4]^T
w - v = [1 - 2, 1 - 3]^T = [-1, -2]^T

1.1 Scalar multiplication (1-3)

How do we define the scalar multiplication of a vector?
Answer: Component-wise multiplication.

Example. Let the vector v = [2, 3]^T and the scalar c = 2. Then
c v = 2 [2, 3]^T = [2 · 2, 2 · 3]^T = [4, 6]^T.

By dictionary, a scalar is a quantity without direction.

Scalar-to-vector multiplication can be viewed (and should be viewed) as an extension of vector addition. For example, 2v = v + v and 3v = v + v + v. Here, c is referred to as the scalar. Note that c (alone) is a scalar, while [c] is a one-dimensional vector.
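
The slides themselves use no software, but the component-wise operations above are easy to check numerically. A minimal sketch in Python with NumPy (the values follow the running example; NumPy is used here only for illustration):

import numpy as np

w = np.array([1, 1])
v = np.array([2, 3])

print(w + v)   # [3 4]   component-wise addition
print(w - v)   # [-1 -2] component-wise subtraction
print(2 * v)   # [4 6]   scalar multiplication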

1.1 Linear combination (1-4)

What is a linear combination of vectors?
Answer: A combination of vectors in a linear fashion.

Example (of the notion "linear"). Let v = [2, 3]^T and c_1 = 1, c_2 = 2 and c_3 = 3. Then
c_1 v = 1 [2, 3]^T = [2, 3]^T
c_2 v = 2 [2, 3]^T = [4, 6]^T
c_3 v = 3 [2, 3]^T = [6, 9]^T
These three vectors lie along a line in the two-dimensional plane; hence they are "linear" with respect to each other.

Example (of linear combination). Now we wish to combine two vectors in a linear fashion:
c v + d w = c [2, 3]^T + d [1, 1]^T = [2c + d, 3c + d]^T
is a linear combination of the vectors v and w.
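
A small numeric sketch of the last formula; the coefficients c = 2 and d = -1 are chosen purely for illustration and do not come from the slides:

import numpy as np

v = np.array([2, 3])
w = np.array([1, 1])
c, d = 2, -1               # illustrative coefficients

combo = c * v + d * w      # equals [2c + d, 3c + d]
print(combo)               # [3 5]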

1.1 Linear combination (1-5)

Why is linear combination important?
No answer yet, but a tip:
- c v can characterize a line, if v is a non-zero vector (or equivalently, v ≠ 0).
- c v + d w can characterize a 2-dimensional plane, if v and w are non-zero vectors and are not aligned.
- c v + d w + e u can characterize a 3-dimensional space, if v, w and u are non-zero vectors and are not aligned (more precisely, do not all lie in one plane).

Note: A zero vector 0 can be viewed (and should be viewed, or defined) as 0 = v - v. For example, the two-dimensional zero vector is 0 = [0, 0]^T.

1.1 Linear algebra (1-6)

After the introduction of linear combinations, it is perhaps time to talk about what we mean by linear algebra.

What is this course, Linear Algebra, about?

Algebra (代數): the part of mathematics in which letters and other general symbols are used to represent numbers and quantities in formulas and equations.

Linear Algebra (線性代數): to combine these algebraic symbols (e.g., vectors) in a linear fashion.

So, we will not combine these algebraic symbols in a nonlinear fashion in this course!

1.1 Graphical representation of vectors (1-7)

The vector v = [2, 3]^T, as its name reveals (a quantity with direction), can be graphically represented by an arrow (pointing out its direction) from (0, 0) to (2, 3).

[Figure: arrow from the origin to the point (2, 3).]

Sometimes we will only mark the end point of the directional arrow, since a vector can be uniquely identified by its end point. In notation, (2, 3) is a coordinate (坐標) in the 2-dimensional plane, while [2, 3]^T is a vector. Since an end point can be used to identify a vector, these two representations will be used interchangeably.

1.1 Graphical representation of vector sum (1-8)

Graphically, the vector sum v + w is the new vertex of the parallelogram formed by v = [2, 3]^T and w = [1, 1]^T.

[Figure: parallelogram with sides v and w; the new vertex is at v + w.]

Exercise: Can the linear combinations of the above two vectors (i.e., the new vertex of the parallelogram, as the coefficients vary) reach every point in the 2-dimensional plane?

Let's now proceed to Section 1.2!

1.2 Length of a vector and angle between two vectors (1-9)

How do we find the length of a vector v?
Answer: By the Pythagorean formula,
||v|| = sqrt(v_1^2 + v_2^2)   if v = [v_1, v_2]^T.
We can apply the Pythagorean formula twice to show that
||v|| = sqrt(v_1^2 + v_2^2 + v_3^2)
for a 3-dimensional vector, by noting that [v_1, v_2, v_3]^T = [v_1, v_2, 0]^T + [0, 0, v_3]^T.
The length of a vector is sometimes referred to as its norm.

How do we find the angle between two vectors v and w?
Answer: Using the dot product (or inner product),
v · w = [v_1, v_2]^T · [w_1, w_2]^T = v_1 w_1 + v_2 w_2.
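
A quick numeric sketch of length and dot product for the running example vectors (np.linalg.norm computes the Pythagorean length; again, NumPy is used only as an illustration):

import numpy as np

v = np.array([2.0, 3.0])
w = np.array([1.0, 1.0])

print(np.linalg.norm(v))   # sqrt(2^2 + 3^2) = sqrt(13), about 3.606
print(np.dot(v, w))        # 2*1 + 3*1 = 5.0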

1.2 Sine and cosine versus inner product (1-10)

Since it is named the dot product, the operator is reasonably denoted by a dot, as in v · w. Since it is the sum of termwise products of the inner components, it is also named the inner product.

Why can the dot product (or inner product) be used to measure the angle between two vectors?

Before we answer this question, let us recall the usual trigonometric functions, sin() and cos(), for the measurement of angles.

1.2 Sine and cosine versus inner product (1-11)

[Figure: right triangles inscribed in unit circles, labelling the legs cos(θ) and sin(θ) against a hypotenuse of length 1, for several placements of the angle θ.]

1.2 Sine and cosine versus inner product (1-12)

[Figure: triangle with sides v, w and v - w; the angle between v and w is θ, so v decomposes into ||v|| cos(θ) along w and ||v|| sin(θ) perpendicular to w.]

The Pythagorean formula then gives
(||v|| sin(θ))^2 + (||w|| - ||v|| cos(θ))^2 = ||v - w||^2
⟹ ||v||^2 + ||w||^2 - 2 ||v|| ||w|| cos(θ) = ||v - w||^2
⟹ ||v|| ||w|| cos(θ) = (||v||^2 + ||w||^2 - ||v - w||^2) / 2
                     = [(v_1^2 + v_2^2) + (w_1^2 + w_2^2) - ((v_1 - w_1)^2 + (v_2 - w_2)^2)] / 2
                     = v_1 w_1 + v_2 w_2
                     = v · w
⟹ cos(θ) = (v · w) / (||v|| ||w||).
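
A numeric sanity check of the final formula, recovering the angle between the two running example vectors (a sketch only; np.arccos returns the angle in radians):

import numpy as np

v = np.array([2.0, 3.0])
w = np.array([1.0, 1.0])

cos_theta = np.dot(v, w) / (np.linalg.norm(v) * np.linalg.norm(w))
theta = np.arccos(cos_theta)   # angle between v and w, in radians
print(np.degrees(theta))       # about 11.31 degrees (56.31 for v minus 45 for w)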

1.2 Examples of inner product (1-13)

Example. By cos(θ) = (v · w) / (||v|| ||w||), if two vectors are perpendicular to each other, what will their inner product be?

Example. Is v · w = w · v for real vectors v and w? (This property is called commutativity.)

1.2 Length and inner product (1-14)

The length of a vector v is the square root of its inner product with itself:
v · v = v_1^2 + v_2^2 = ||v||^2.
Hence, the inner product operation can be used to determine both the lengths of vectors and the angles between them.

Example. Two vectors v and w are perpendicular if, and only if, v · w = 0. Furthermore, the sign of v · w (positive or negative) tells whether the angle between the two vectors is below or above a right angle.

1.2 Length and inner product (1-15)

Knowing that the length of a vector v can be expressed through its self inner product, ||v|| = sqrt(v · v), the proof on Slide 1-12 can be generalized (to three or higher dimensions) in the following way. For vectors v and w of dimension higher than 2,
(||v|| sin(θ))^2 + (||w|| - ||v|| cos(θ))^2 = ||v - w||^2
⟹ ||v||^2 + ||w||^2 - 2 ||v|| ||w|| cos(θ) = ||v - w||^2.
From
||v - w||^2 = (v - w) · (v - w) = v · v - w · v - v · w + w · w = ||v||^2 - 2 v · w + ||w||^2,
we know that
cos(θ) = (v · w) / (||v|| ||w||).

1.2 Unit vector and standard unit vector (1-16)

Definition (Unit vector): A unit vector is a vector with length equal to one.
(unit: a quantity chosen as a standard, in terms of which other quantities may be expressed, e.g., a unit of measurement. For example: unit circle, unit cube, unit vector, etc.)

Example. v / ||v|| is a unit vector in the same direction as v.

Definition (Standard unit vector): A standard unit vector is a unit vector with only one non-zero (positive) component.

Example. i = [1, 0]^T and j = [0, 1]^T.
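
A minimal sketch of normalization, i.e., dividing v by its length to obtain the unit vector in the same direction:

import numpy as np

v = np.array([2.0, 3.0])
u = v / np.linalg.norm(v)   # unit vector along v
print(np.linalg.norm(u))    # 1.0 (up to rounding)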

1.2 Theories regarding inner product (1-17)

Angle between unit vectors. The inner product of two unit vectors is equal to the cosine of the angle between these two vectors:
cos(θ) = (v · w) / (||v|| ||w||) = v · w   when ||v|| = ||w|| = 1.

Trigonometric identity. Prove that cos(β - α) = cos(β) cos(α) + sin(β) sin(α) by means of the inner product.
Hint: Take v = [cos(β), sin(β)]^T and w = [cos(α), sin(α)]^T. Both are unit vectors, and the angle between them is β - α, so v · w = cos(β - α).

1.2 Theories regarding inner product (1-18)

Triangle inequality. Prove that ||v + w|| ≤ ||v|| + ||w||.

Proof: From Slide 1-12 we obtain that, when θ is the angle between v and w,
||v - w||^2 = ||v||^2 + ||w||^2 - 2 ||v|| ||w|| cos(θ).
Similarly, since the angle between v and -w is π - θ,
||v + w||^2 = ||v - (-w)||^2
            = ||v||^2 + ||w||^2 - 2 ||v|| ||w|| cos(π - θ)
            = ||v||^2 + ||w||^2 + 2 ||v|| ||w|| cos(θ)
            ≤ ||v||^2 + ||w||^2 + 2 ||v|| ||w||
            = (||v|| + ||w||)^2.

[Figure: triangle with sides v, w and v + w.]

1.2 Theories regarding inner product (1-19)

Schwarz inequality. Prove that |v · w| ≤ ||v|| ||w||.

Proof: |v · w| / (||v|| ||w||) = |cos(θ)| ≤ 1.

After the introduction of the inner product, is there an outer product?
Hint: The inner product and the outer product are respectively
v · w = w^T v = [w_1 w_2] [v_1, v_2]^T = w_1 v_1 + w_2 v_2   (order of v and w exchanged)
and
v ⊗ w = v w^T = [v_1, v_2]^T [w_1 w_2] = [ v_1 w_1  v_1 w_2 ; v_2 w_1  v_2 w_2 ],
where the superscript T denotes the vector (matrix) transpose operation and the semicolon separates matrix rows. (Notably, the product of two matrices will be introduced in Section 1.3.)

In short, an inner product produces the sum of termwise products, while an outer product lists all termwise products in matrix form.
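
A short sketch contrasting the two products (np.inner gives the scalar, np.outer the 2-by-2 matrix of all termwise products):

import numpy as np

v = np.array([2, 3])
w = np.array([1, 1])

print(np.inner(v, w))   # 5, the sum of termwise products
print(np.outer(v, w))   # [[2 2]
                        #  [3 3]], all termwise products in matrix form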

1.2 Theories regarding inner product (1-20)

Definition (Trace): The trace of a square matrix is the sum of its diagonal entries. By this definition, we observe that trace(v ⊗ w) = v · w.

Definition (Vector transpose): The transpose of a vector simply re-expresses a column vector as a row vector (and vice versa). In notation, people may use v' or v^T to denote the transpose. For complex vectors, we may also wish to conjugate the vector components in addition to transposing; in that case, v* or v^H will be used instead. For example, [1 + ı, 2 + 3ı]^H = [1 - ı  2 - 3ı]. Based on this, the inner product of complex vectors will be defined as v · w = w^H v.

In Chinese, we would say 橫列 (row) and 直行 (column). In Excel, the terms are 橫列 (row) and 直欄 (column).
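
A small numeric check of the trace observation and of the complex convention above; note that np.vdot conjugates its first argument, which matches the w^H v convention only if w is passed first (this pairing is my reading of the slide, not something fixed by NumPy):

import numpy as np

v = np.array([2, 3])
w = np.array([1, 1])
print(np.trace(np.outer(v, w)) == np.dot(v, w))   # True: trace of the outer product equals v . w

a = np.array([1 + 1j, 2 + 3j])
b = np.array([1j, 2 + 0j])
print(np.vdot(b, a))   # conj(b) . a, i.e. b^H a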

1.3 Matrices (1-21)

Definition (Matrix): A matrix is a rectangular array of numbers placed in rows and columns, and is treated as a single entity, which is manipulated according to particular rules.

A matrix can be regarded as a collection of multiple (column) vectors, listed in a row. For example,
A = [u v w] = [ u_1 v_1 w_1 ; u_2 v_2 w_2 ; u_3 v_3 w_3 ]   (3 × 3)
or
B = [v w] = [ v_1 w_1 ; v_2 w_2 ; v_3 w_3 ]   (3 × 2).

A matrix can also be regarded as a collection of multiple row vectors (i.e., transposes of column vectors), listed in a column. For example,
A = [ x_1^T ; x_2^T ; x_3^T ]   where x_1 = [u_1, v_1, w_1]^T, x_2 = [u_2, v_2, w_2]^T, x_3 = [u_3, v_3, w_3]^T,
and
B = [ y_1^T ; y_2^T ; y_3^T ]   where y_1 = [v_1, w_1]^T, y_2 = [v_2, w_2]^T, y_3 = [v_3, w_3]^T.

1.3 Matrices (1-22)

Definition (Product of matrices): The product of two matrices is defined through the inner products of the row vectors of the first (left) matrix with the column vectors of the second matrix. Specifically, as an example,
A_{2×3} B_{3×2} = [ a_1^T ; a_2^T ] [ b_1  b_2 ] = [ a_1^T b_1  a_1^T b_2 ; a_2^T b_1  a_2^T b_2 ],
where, for the two matrices A and B on the far left of the above equation, 2 is the number of vectors and 3 is the dimension of the vectors. Note that the dimensions of the vectors must be the same; otherwise, the inner product operation cannot be performed.

To be consistent, a vector is always, by convention, a column vector; I will always add the transpose mark for a row vector.

A vector can be treated as a matrix consisting of only one column, for example, v = [v_1, v_2, v_3]^T. Based on this, the product of two matrices can certainly be applied to compute the product of a matrix and a vector.
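
A compact numeric sketch of a 2 × 3 times 3 × 2 product; the entries are arbitrary illustrative numbers, not taken from the slides:

import numpy as np

A = np.array([[1, 0, 2],
              [0, 1, 3]])   # 2 x 3: two row vectors of dimension 3
B = np.array([[1, 4],
              [2, 5],
              [0, 1]])      # 3 x 2: two column vectors of dimension 3

print(A @ B)                # [[1 6]
                            #  [2 8]]; entry (i, j) = (row i of A) . (column j of B)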

1.3 Matrices (1-23)

Definition (Product of a matrix and a vector): The product of a matrix and a vector is defined through the inner products of the row vectors of the matrix in front with the vector. Specifically, as an example,
A_{2×3} x_{3×1} = [ a_1^T ; a_2^T ] x = [ a_1^T x ; a_2^T x ],
where, for the matrix A and the vector x on the far left of the above equation, 2 and 1 are the numbers of vectors, and 3 is the dimension of the vectors. Note that the dimensions of the vectors must be the same; otherwise, the inner product operation cannot be performed.

In the textbook's description, Ax can be regarded either as dot products (i.e., inner products) with the rows (the row vectors of A), or as a linear combination of the column vectors of A with coefficients taken from x. The second description can be understood as
Ax = [u v w] [x_1, x_2, x_3]^T = x_1 u + x_2 v + x_3 w.
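
A sketch showing that the two views of Ax agree, reusing the illustrative A from the previous sketch and an arbitrary x:

import numpy as np

A = np.array([[1, 0, 2],
              [0, 1, 3]])
x = np.array([2, 1, 1])

by_rows = np.array([np.dot(row, x) for row in A])       # rows dotted with x
by_cols = x[0]*A[:, 0] + x[1]*A[:, 1] + x[2]*A[:, 2]    # combination of columns
print(by_rows, by_cols, A @ x)                          # all three give [4 4]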

1.3 Matrices and linear equations (1-24)

In what way can matrices be useful?
Answer: Solving linear equations.

Example. The difference matrix A and the sum matrix S.

Definition (Difference matrix): A difference matrix A produces the differences of adjacent components of x through Ax. In mathematics, a three-by-three difference matrix is given by
A = [ 1 0 0 ; -1 1 0 ; 0 -1 1 ]
so that
Ax = [ 1 0 0 ; -1 1 0 ; 0 -1 1 ] [x_1, x_2, x_3]^T = [ x_1 ; x_2 - x_1 ; x_3 - x_2 ],
where the first row can be read as x_1 - x_0 with an invisible x_0 = 0 in front of the first component.

Based on the above definition, we wonder how to solve the linear equation
Ax = [ x_1 ; x_2 - x_1 ; x_3 - x_2 ] = [ b_1 ; b_2 ; b_3 ].

1.3 Matrices and linear equations (1-25)

Answer:
x_1 = b_1
x_2 (= b_2 + x_1) = b_2 + b_1
x_3 (= b_3 + x_2) = b_3 + b_2 + b_1
or equivalently,
x = [ 1 0 0 ; 1 1 0 ; 1 1 1 ] [b_1, b_2, b_3]^T.

Definition (Sum matrix): A sum matrix S produces the running sums of the components of b through Sb. In mathematics, a three-by-three sum matrix is given by
S = [ 1 0 0 ; 1 1 0 ; 1 1 1 ]
so that
Sb = [ 1 0 0 ; 1 1 0 ; 1 1 1 ] [b_1, b_2, b_3]^T = [ b_1 ; b_1 + b_2 ; b_1 + b_2 + b_3 ].
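
A quick numeric sketch of the difference/sum pair; the right-hand side b is arbitrary and only for illustration:

import numpy as np

A = np.array([[ 1, 0, 0],
              [-1, 1, 0],
              [ 0,-1, 1]])   # difference matrix
S = np.array([[ 1, 0, 0],
              [ 1, 1, 0],
              [ 1, 1, 1]])   # sum matrix

print(S @ A)                 # the 3 x 3 identity: S undoes A

b = np.array([1, 3, 5])      # illustrative right-hand side
x = S @ b                    # x = [1, 4, 9]
print(A @ x)                 # recovers b = [1, 3, 5]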

1.3 Matrices and linear equations (1-26)

In summary, Ax = b ⟹ SAx = x = Sb.

In terminology, we say that the sum matrix S is the left inverse matrix of the difference matrix A, since
SA = [ 1 0 0 ; 0 1 0 ; 0 0 1 ] = I, the identity matrix.

As self-revealed by its name, an identity element i for some operation ∘ is one that satisfies x ∘ i = x for every x. Here, the matrix I is a multiplicative identity element for matrix multiplication, since AI = A for every matrix A.

(End of the example of the difference matrix and the sum matrix.)

1.3 Matrix and its inverse (1-27)

We now know that the sum matrix S is the left inverse matrix of the difference matrix A because SA = I. Analogously, the difference matrix A is the right inverse matrix of the sum matrix S, also because SA = I.

Now, let us ask ourselves a few questions and try to answer them.

Is the left inverse matrix of a matrix always equal to its right inverse matrix?
Answer: Not necessarily.

Why isn't the left inverse matrix always equal to the right inverse matrix?
Answer: Because matrix multiplication is not commutative; in other words, AB = BA is not necessarily true! For example, a 2 × 3 left inverse matrix of A_{3×2} is not a right inverse of A_{3×2} (indeed, A_{3×2} B_{2×3} has rank at most 2, so it can never equal the 3 × 3 identity).

1.3 Matrix and its inverse (1-28)

Definition (Inverse matrix): If the left inverse matrix of a matrix A is equal to its right inverse matrix, we say that A is invertible, and its unique inverse matrix is denoted by A^{-1}. This notation parallels the multiplicative inverse of a real number a, which is denoted by a^{-1} (if a ≠ 0).

From the definition, we see that AA^{-1} = A^{-1}A = I.

Does the inverse matrix always exist?

Example. The cyclic difference matrix
C = [ 1 0 -1 ; -1 1 0 ; 0 -1 1 ]
has no inverse.
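
A small numeric sketch confirming that this C is singular; every row sums to zero, so C maps the all-ones vector to zero and its determinant vanishes:

import numpy as np

C = np.array([[ 1,  0, -1],
              [-1,  1,  0],
              [ 0, -1,  1]])

print(np.linalg.det(C))     # 0.0 (up to rounding): C is not invertible
print(C @ np.ones(3))       # [0. 0. 0.]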

1.3 Inverse and linear equations (1-29)

Why is the inverse matrix useful in solving linear equations?
Answer: Ax = b ⟹ x = A^{-1} b.

If the left inverse matrix of A does not exist, then Ax = b either has no solution or has no unique solution.
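
A sketch of this recipe, reusing the difference matrix of Slide 1-24; np.linalg.solve computes A^{-1} b without forming the inverse explicitly:

import numpy as np

A = np.array([[ 1, 0, 0],
              [-1, 1, 0],
              [ 0,-1, 1]])
b = np.array([1, 3, 5])

x = np.linalg.solve(A, b)   # same result as np.linalg.inv(A) @ b
print(x)                    # [1. 4. 9.]
print(A @ x)                # recovers b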

1.3 Matrices and linear independence of vectors (1-30)

Definition (Linear independence of vectors): A set of vectors u_1, u_2, ..., u_k is said to be linearly independent, or simply independent, if the matrix A = [u_1 u_2 ... u_k] has a left inverse.

If A has a left inverse S, then Ax = 0 has the unique solution x = S(Ax) = S0 = 0.

If u_1, u_2, ..., u_k are linearly independent, we cannot represent u_k as a linear combination of u_1, u_2, ..., u_{k-1}. (In a sense, linear independence = "linear-combination independence.")

If u_1, u_2, ..., u_k are linearly dependent, then Ax = 0 has solutions other than x = 0.

Exercise. Show that
[ 1 0 -1 ; -1 1 0 ; 0 -1 1 ] x = 0
has a non-zero solution.
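
A sketch verifying one non-zero solution of the exercise; the all-ones vector works because every row of the matrix sums to zero:

import numpy as np

C = np.array([[ 1,  0, -1],
              [-1,  1,  0],
              [ 0, -1,  1]])
x = np.array([1, 1, 1])     # non-zero candidate
print(C @ x)                # [0 0 0], so Cx = 0 with x != 0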

1.3 Matrices and linear independence of vectors (1-31)

In summary, there are four products (i.e., multiplications) in linear algebra. They are respectively denoted by
- a dot for the dot product or inner product, e.g., v · w;
- a circled cross for the outer product, e.g., v ⊗ w;
- nothing (juxtaposition) for the matrix product and the scalar product, e.g., AB and cv;
- a skewed cross for the cross product (vector product), e.g., v × w.