Things to Memorize: A Partial List. January 27, 2017


Chapter 2: Vectors - Basic Facts

- A vector has a magnitude (also called size/length/norm) and a direction. It does not have a fixed position, so the same vector can be moved around as long as it doesn't grow/shrink or get tilted.
- If you multiply a vector by a positive scalar, the direction stays the same and the length is scaled. If you multiply a vector by a negative scalar, the direction becomes opposite (but still parallel) and the length is scaled.
- We call two vectors parallel if they are scalar multiples of each other.
- We geometrically add two vectors together by placing the head of one at the tail of the other, and drawing the sum vector from the tail of the first to the head of the second. This is equivalent to taking a particular diagonal of the parallelogram formed by the two vectors.
- Using the above definition, we see that the difference of two vectors can be drawn geometrically by taking the two vectors with their tails together and drawing a vector between their heads.
- R^n refers to the space of vectors with n coordinates. So, R^2 is all vectors with two coordinates, that is, all points in the xy-plane. Similarly, R^3 is all vectors with three coordinates, that is, all points in xyz-space.
- A unit vector is a vector with length 1. In R^2, i = [1, 0] and j = [0, 1]; they are the unit vectors in the directions of the positive axes. Similarly, in R^3, i = [1, 0, 0], j = [0, 1, 0], and k = [0, 0, 1].
- The length of a vector a = [a1, a2, ..., an] is ‖a‖ = sqrt(a1^2 + a2^2 + ... + an^2).

Dot Product

- The dot product combines two vectors (with the same number of coordinates) and gives a number. (By contrast, the cross product combines two vectors in R^3 and gives a third vector in R^3.)
- To calculate the dot product of two vectors, you multiply together their corresponding coordinates and add the products. For example:
  [8, 3, 10, 3] · [-1, 2, 5, 6] = (8)(-1) + (3)(2) + (10)(5) + (3)(6) = 66
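
The dot product and length formulas are easy to check numerically. Below is a minimal sketch (an addition, not part of the original notes) in Python with NumPy; the first pair of vectors is the example from the bullet above, and the vector [3, 4] is an extra made-up example for the length formula.

```python
import numpy as np

# The example from the notes: [8, 3, 10, 3] . [-1, 2, 5, 6]
a = np.array([8, 3, 10, 3])
b = np.array([-1, 2, 5, 6])

# Multiply corresponding coordinates and add the products.
print(np.dot(a, b))            # 66, matching (8)(-1) + (3)(2) + (10)(5) + (3)(6)

# Length (norm) of a vector: the square root of the sum of squared coordinates.
v = np.array([3.0, 4.0])
print(np.sqrt(np.sum(v**2)))   # 5.0
print(np.linalg.norm(v))       # 5.0, the same computation done by NumPy
```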

- Given two vectors a and b, their dot product tells us something about the angle θ between them: a · b = ‖a‖ ‖b‖ cos θ.
- It follows from the previous bullet point that two non-zero vectors are orthogonal (perpendicular) if and only if their dot product is zero. This is a very common use of the dot product: determining orthogonality.
- a · a = ‖a‖^2.

Projections

- The projection of b onto a, written proj_a b, is the component of b that is parallel to a. Geometrically: place a and b tail to tail, and draw a line from the tip of b that meets the line through a at a right angle. The vector from the common tail to that meeting point is the projection. The picture is usually drawn with an acute angle between a and b, but the construction works for obtuse angles as well; we really only care about the line in the direction of a.
- We can calculate a projection as follows: proj_a b = ((a · b) / ‖a‖^2) a.
- The dot product behaves algebraically, together with scalar multiplication and vector addition, much the way you would like it to. For example, a · (b + c) = a · b + a · c.
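
As a numerical illustration of the angle formula and the projection formula (this sketch is an addition, not from the text; the vectors are chosen arbitrarily):

```python
import numpy as np

a = np.array([3.0, 0.0])
b = np.array([2.0, 2.0])

# Angle between a and b, from  a . b = ||a|| ||b|| cos(theta).
cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
print(np.degrees(np.arccos(cos_theta)))   # 45.0 (up to floating-point rounding)

# Projection of b onto a:  proj_a b = ((a . b) / ||a||^2) a
proj = (np.dot(a, b) / np.dot(a, a)) * a
print(proj)                               # [2. 0.], the component of b parallel to a

# The leftover piece b - proj is orthogonal to a: its dot product with a is zero.
print(np.dot(a, b - proj))                # 0.0
```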

Determinants of Small Matrices

- The determinant of a square matrix is a number. So the determinant function has a square matrix as its input and a number as its output.
- To calculate the determinant of a 2-by-2 matrix: det[ a b ; c d ] = ad - bc.
- If two non-zero vectors in R^2 are parallel (that is, scalar multiples of each other), then when we put them as the rows of a matrix, the determinant of that matrix will be 0.
- To calculate the determinant of a 3-by-3 matrix:
  det[ a b c ; d e f ; g h i ] = a det[ e f ; h i ] - b det[ d f ; g i ] + c det[ d e ; g h ]
  That is: first, we take the top row and alternate signs +, -, +. Then, to decide which sub-matrix we'll take the determinant of, we delete the row and column of the entry we chose.
- If a and b are two vectors in R^2, then we can calculate the area of the parallelogram spanned by them by making them the rows of a 2-by-2 matrix and taking the absolute value of the determinant:
  Area of parallelogram = | det[ a ; b ] |
  Equivalently, the absolute value of the determinant is ‖a‖ ‖b‖ sin θ, where θ is the angle between a and b.

Cross Product

- The cross product combines two vectors in R^3 to give a third vector in R^3. Unlike dot products and determinants, we won't define a cross product for vectors except those with three coordinates.
- a × b = -(b × a); other properties of the cross product are given on page 31 of your text.
- a × b = det[ i j k ; a1 a2 a3 ; b1 b2 b3 ]
- ‖a × b‖ gives the area of the parallelogram spanned by a and b. That is, ‖a × b‖ = ‖a‖ ‖b‖ sin θ, where θ is the angle between a and b. So, the length of the cross product is the area of the parallelogram spanned by the two vectors.
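
A quick numerical check of the determinant and cross-product facts above; this is an illustrative sketch (not from the text), with arbitrarily chosen vectors.

```python
import numpy as np

# 2-by-2 determinant: det[ a b ; c d ] = ad - bc.
M = np.array([[1.0, 2.0],
              [3.0, 4.0]])
print(np.linalg.det(M))                        # -2.0 (up to rounding) = (1)(4) - (2)(3)

# Area of the parallelogram spanned by two vectors in R^2.
a2 = np.array([3.0, 0.0])
b2 = np.array([1.0, 2.0])
print(abs(np.linalg.det(np.array([a2, b2]))))  # 6.0

# Cross product of two vectors in R^3, and the same area fact.
a3 = np.array([1.0, 0.0, 0.0])
b3 = np.array([0.0, 2.0, 0.0])
c = np.cross(a3, b3)
print(c)                                       # [0. 0. 2.]
print(np.linalg.norm(c))                       # 2.0, the area of the parallelogram

# Anti-commutativity: a x b = -(b x a).
print(np.cross(b3, a3))                        # [ 0.  0. -2.]
```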

- Area of parallelogram = ‖a × b‖. Note: contrast this to the area of the parallelogram formed by two vectors in R^2.
- The vector a × b is orthogonal to both a and b. However, we need to be more precise than this, because there are two directions that are orthogonal to both vectors. Its direction (up or down) is given by the right-hand rule.
- If a, b, and c are three vectors in R^3, then we can calculate the volume of the parallelepiped spanned by them by making them the rows of a 3-by-3 matrix and taking the absolute value of the determinant:
  Volume of parallelepiped = | det[ a ; b ; c ] |
- The determinant above is also the triple product of a, b, and c, which can also be calculated as a · (b × c) or (a × b) · c.
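
The volume and triple-product facts can be verified numerically. A minimal sketch (an addition to the notes, with made-up vectors) computes the volume of a parallelepiped three equivalent ways:

```python
import numpy as np

a = np.array([1.0, 0.0, 0.0])
b = np.array([1.0, 2.0, 0.0])
c = np.array([0.0, 1.0, 3.0])

# Volume of the parallelepiped: absolute value of the determinant of the rows.
vol_det = abs(np.linalg.det(np.array([a, b, c])))

# The same number as a triple product, computed both ways.
vol_1 = abs(np.dot(a, np.cross(b, c)))
vol_2 = abs(np.dot(np.cross(a, b), c))

print(vol_det, vol_1, vol_2)   # all three agree: 6.0 (up to floating-point rounding)
```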

Lines and Planes

- The parametric equation of a line (in any dimension: R^2, R^3, etc.) is x = q + s a. That is, the line consists of all points x that can be written as some scalar multiple of a plus the constant vector q. The vector a gives the direction of the line, and the vector q is a point on the line.
- The component equation of a line in R^2 is ax + by = c, where a, b, and c are constant scalars. The vector [a, b] is orthogonal to the line.
- The parametric equation of a plane (in any dimension: R^3, R^4, etc.) is x = q + s a + t b, where a and b are not parallel. That is, the plane consists of all points x that can be written as a linear combination of a and b plus the constant vector q.
- The vectors a and b lie in the plane; that is, if we position them so that their tail is a point on the plane, then their head is a point on the plane as well. (We also call a vector that lies in the plane parallel to the plane.) The constant vector q is a point on the plane.
- The component equation of a plane in R^3 is ax + by + cz = d, where the vector [a, b, c] is normal to the plane. That is, any vector parallel to the plane is orthogonal to [a, b, c]. If d = 0, then the plane passes through the origin. The constant d determines the position of the plane in space, while the coefficients a, b, c determine the plane's tilt. Note that for every point (x, y, z) in the plane, [x, y, z] · [a, b, c] = d.
- The component equation of a line in R^3 is given as the intersection of two planes:
  a1 x + a2 y + a3 z = a4
  b1 x + b2 y + b3 z = b4
  The vectors [a1, a2, a3] and [b1, b2, b3] are orthogonal to the line. To get a direction vector of the line, you can calculate the cross product of these two vectors.

Linear Systems

- Let a1 x + a2 y = a3 and b1 x + b2 y = b3 be lines that are not parallel. Then their intersection is a point. So, the solution to the following system of linear equations is a single point:
  a1 x + a2 y = a3
  b1 x + b2 y = b3
- Let a1 x + a2 y + a3 z = a4, b1 x + b2 y + b3 z = b4, and c1 x + c2 y + c3 z = c4 be planes in R^3 whose normal vectors do not all lie on the same plane. (That is, their normal vectors are linearly independent.) Then their intersection is a point. So, the solution to the following system of linear equations is a single point:
  a1 x + a2 y + a3 z = a4
  b1 x + b2 y + b3 z = b4
  c1 x + c2 y + c3 z = c4
- If three vectors lie on the same plane, then the parallelepiped they define is smooshed all the way flat, and so has volume zero, and hence determinant zero. So, the system of linear equations in the bullet point above has precisely one solution if and only if
  det[ a1 a2 a3 ; b1 b2 b3 ; c1 c2 c3 ] ≠ 0
  (see the sketch below for a numerical example).
- In the case that the determinant is zero, the solutions to the system of linear equations could form a line (that is, they can be described using one parameter), they could form a plane (that is, it takes two parameters to describe them), they could be all of R^3 (in the silly case that every equation is 0x + 0y + 0z = 0), or there could be no solutions at all.
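
The determinant test can be checked directly: when the coefficient matrix has a nonzero determinant, the three planes meet in exactly one point, which a linear solver finds. This sketch is an addition with made-up coefficients, not an example from the text.

```python
import numpy as np

# Rows of A are the normal vectors of the three planes; d holds the constants.
A = np.array([[1.0, 1.0, 1.0],
              [0.0, 2.0, 5.0],
              [2.0, 5.0, -1.0]])
d = np.array([6.0, -4.0, 27.0])

# Nonzero determinant <=> the normal vectors are linearly independent
# <=> the system has precisely one solution.
print(np.linalg.det(A))     # -21.0 (up to rounding), nonzero

x = np.linalg.solve(A, d)   # the unique intersection point of the three planes
print(x)                    # [ 5.  3. -2.]
print(A @ x)                # reproduces d, as a check
```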

Linear Independence

- A linear combination of vectors is a vector formed by adding scalar multiples of the given vectors. For example, the vector 2a - 5b + πc is a linear combination of the vectors a, b, and c.
- The set of all linear combinations of a collection of vectors is called the span of the vectors.
- A collection of vectors is linearly independent if the only linear combination of them equal to the zero vector is the linear combination where all the scalars are zero. That is: consider a collection of vectors a1, a2, ..., an. Set up the system of linear equations s1 a1 + s2 a2 + ... + sn an = 0. If the only solution is s1 = s2 = ... = sn = 0, then the vectors are linearly independent. If there are other solutions, then the vectors are linearly dependent.
- The above is the formal definition of linear independence. For a collection of at least two vectors, it is equivalent to the following definition: a collection of vectors is linearly independent if no vector in the collection can be written as a linear combination of the others.
- A collection of n linearly independent vectors from R^n is called a basis of R^n.
- Suppose you have a collection of vectors that form a basis of R^n. Then any vector in R^n can be written as a linear combination of your vectors, and there is a unique way to do it.

Chapter 3: Solving Linear Systems

- You should know how to solve a linear system using substitution.
- We can use three row operations on a linear system without changing its solutions:
  1. Multiplying an equation (row) by a non-zero scalar
  2. Adding a multiple of one row to another row
  3. Swapping the vertical order of rows
- To save time and space, we use shorthand for systems of linear equations. We use an augmented matrix, by which we mean we take a matrix (representing the coefficients of the variables) and augment it by adding an extra column separated by a vertical line (the column represents the constants in the equations).
- You should know how to use Gaussian Elimination (pivoting) to solve a system of linear equations.
- You should recognize when a matrix is in row echelon form and reduced row echelon form. (See the pictures on page 79 in the text.)
- The rank of a matrix (without its augmentation) is the number of nonzero rows that it has after being reduced.
- You should know how to express the solutions to a system of linear equations using parameters. The number of parameters needed to describe the solutions to a system of linear equations with rank r and n variables is n - r, provided there are any solutions at all.
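
These checks can be carried out mechanically: SymPy's Matrix.rref() runs Gaussian elimination all the way to reduced row echelon form, and the rank gives the n - r parameter count. The matrices below are made up for illustration; this sketch is an addition, not part of the original notes.

```python
from sympy import Matrix

# Linear independence check: vectors (as rows) are independent exactly when
# the rank equals the number of vectors.
vectors = Matrix([[1, 0, 2],
                  [0, 1, 1],
                  [1, 1, 3]])
print(vectors.rank())           # 2 < 3, so these three vectors are linearly dependent

# An augmented matrix [A | b] for a system with 4 variables and 3 equations
# (the third row is the sum of the first two, so the system is consistent).
aug = Matrix([[1, 2, 0, 1, 3],
              [0, 0, 1, 4, 2],
              [1, 2, 1, 5, 5]])

rref, pivot_cols = aug.rref()   # reduced row echelon form and the pivot columns
print(rref)                     # two nonzero rows remain
print(pivot_cols)               # (0, 2)

# Rank of the coefficient part (without the augmentation), and the number of
# parameters needed to describe the solutions: n - r.
r = aug[:, :4].rank()
print(4 - r)                    # 2
```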

- If a matrix has n rows and n columns, and rank n, then its reduced row echelon form will have 1s along the main diagonal and 0s everywhere else, for example (n = 3):
  [ 1 0 0 ]
  [ 0 1 0 ]
  [ 0 0 1 ]
  A system of linear equations that looks like this will have precisely one solution, regardless of what its constants are.
- If a matrix has n columns, rank n, and at least n rows, then its reduced row echelon form will have 1s along the main diagonal and 0s everywhere else, for example (n = 3, five rows):
  [ 1 0 0 ]
  [ 0 1 0 ]
  [ 0 0 1 ]
  [ 0 0 0 ]
  [ 0 0 0 ]
  A system of linear equations that looks like this will have either precisely one solution or no solutions at all.

Homogeneous Systems

- A system of linear equations is homogeneous if the constants are all zero. For instance, the system on the left is homogeneous, while the system on the right is not:
  a1 x + a2 y = 0        a1 x + a2 y = 0
  b1 x + b2 y = 0        b1 x + b2 y = 1
- A homogeneous system always has at least one solution (where x = y = z = ... = 0). So, the solutions of a homogeneous linear system can always be written as s1 a1 + s2 a2 + ... + sk ak (for some appropriate k, and possibly with the vectors all 0).
- Any linear combination of solutions of [A | 0] is itself a solution of [A | 0].
- If you have a system of linear equations, you can form the associated homogeneous system by simply changing all the constants to 0. This is a different system, with different solutions, but the solutions are related in a predictable manner.
- Suppose [A | b] is a system of linear equations, and [A | 0] is its associated homogeneous system. Let s1 a1 + s2 a2 + ... + sk ak be the complete set of solutions to [A | 0]. It's possible that [A | b] has no solutions. If it has at least one solution q, then the complete set of solutions to [A | b] can be written as q + s1 a1 + s2 a2 + ... + sk ak.
- There are lots of related facts: for instance, the difference of any two solutions of [A | b] is a solution of [A | 0], and the sum of any solution of [A | b] with any solution of [A | 0] is a solution of [A | b].
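
The "particular solution plus homogeneous solutions" structure can be seen concretely with SymPy. The system below is invented for illustration (two equations, three unknowns); this sketch is an addition, not part of the notes.

```python
from sympy import Matrix, linsolve, symbols

x, y, z = symbols('x y z')

# A consistent system [A | b] with more unknowns than equations.
A = Matrix([[1, 2, 1],
            [0, 1, 1]])
b = Matrix([4, 1])

# Complete solution set of [A | b]: a one-parameter family (2 + z, 1 - z, z).
print(linsolve((A, b), x, y, z))

# Solutions of the associated homogeneous system [A | 0]: the nullspace of A.
print(A.nullspace())           # one basis vector, so one parameter

# The difference of two solutions of [A | b] is a solution of [A | 0].
s1 = Matrix([2, 1, 0])         # the solution with z = 0
s2 = Matrix([3, 0, 1])         # the solution with z = 1
print(A * (s1 - s2))           # the zero vector
```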