Designing Information Devices and Systems I Discussion 2A

EECS 16A Spring 2018 Designing Information Devices and Systems I Discussion 2A

1. Visualizing Matrices as Operations

This problem is going to help you visualize matrices as operations. For example, when we multiply a vector by a rotation matrix, we will see it rotate in the true sense here. Similarly, when we multiply a vector by a reflection matrix, we will see it be reflected. The way we will see this is by applying the operation to all the vertices of a polygon and seeing how the polygon changes. Your TA will now show you how a unit square can be rotated, scaled, or reflected using matrices!

Part 1: Rotation Matrices as Rotations

(a) We are given matrices $T_1$ and $T_2$, and we are told that they will rotate the unit square by $15^\circ$ and $30^\circ$, respectively. Design a procedure to rotate the unit square by $45^\circ$ using only $T_1$ and $T_2$, and plot the result in the IPython notebook. How would you rotate the square by $60^\circ$?

(b) Try to rotate the unit square by $60^\circ$ using only one matrix. What does this matrix look like?

(c) $T_1$, $T_2$, and the matrix you used in part (b) are called rotation matrices. They rotate any vector by an angle $\theta$. Show that a rotation matrix has the following form:

$$R = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$$

where $\theta$ is the angle of rotation. (Hint: Use your trigonometric identities!)
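Before the derivation for part (c), which continues below, here is a minimal numpy sketch of parts (a) and (b). This is our own illustration, not the course notebook's code; the rotation helper and the variable names are assumptions.

    import numpy as np

    def rotation(theta_deg):
        # 2-D rotation matrix for an angle in degrees (hypothetical helper).
        t = np.radians(theta_deg)
        return np.array([[np.cos(t), -np.sin(t)],
                         [np.sin(t),  np.cos(t)]])

    T1 = rotation(15)   # rotates vectors by 15 degrees
    T2 = rotation(30)   # rotates vectors by 30 degrees

    # Vertices of the unit square, one vertex per column.
    square = np.array([[0, 1, 1, 0],
                       [0, 0, 1, 1]])

    # Part (a): 45 = 15 + 30, so apply T1 and then T2 (matrix product T2 @ T1).
    rotated_45 = T2 @ T1 @ square

    # 60 degrees: apply T2 twice (30 + 30). Part (b): the single matrix that
    # does the same job is rotation(60).
    assert np.allclose(T2 @ T2 @ square, rotation(60) @ square)

Because all of these matrices rotate about the origin, they commute with each other, so T2 @ T1 and T1 @ T2 give the same result here.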

Solution: Let's try to derive this matrix using trigonometry. Suppose we want to rotate the vector $\begin{bmatrix} 1 \\ 0 \end{bmatrix}$ by $\theta$.

[Figure: the unit vector $(1, 0)$ rotated by $\theta$ to $(\cos\theta, \sin\theta)$.]

We can use basic trigonometric relationships to see that $\begin{bmatrix} 1 \\ 0 \end{bmatrix}$ rotated by $\theta$ becomes $\begin{bmatrix} \cos\theta \\ \sin\theta \end{bmatrix}$. Similarly, $\begin{bmatrix} 0 \\ 1 \end{bmatrix}$ rotated by $\theta$ becomes $\begin{bmatrix} -\sin\theta \\ \cos\theta \end{bmatrix}$.

[Figure: the unit vector $(0, 1)$ rotated by $\theta$ to $(-\sin\theta, \cos\theta)$.]

We can also scale these pre-rotated vectors to any length we want, $\begin{bmatrix} x \\ 0 \end{bmatrix}$ and $\begin{bmatrix} 0 \\ y \end{bmatrix}$, and we can observe graphically that they rotate to $\begin{bmatrix} x\cos\theta \\ x\sin\theta \end{bmatrix}$ and $\begin{bmatrix} -y\sin\theta \\ y\cos\theta \end{bmatrix}$, respectively. Rotating a vector solely in the x-direction produces a vector with both x and y components, and, likewise, rotating a vector solely in the y-direction produces a vector with both x and y components.

Finally, if we want to rotate an arbitrary vector $\begin{bmatrix} x \\ y \end{bmatrix}$, we can combine what we derived above. Let $x'$ and $y'$ be the x and y components after rotation. $x'$ has contributions from both $x$ and $y$: $x' = x\cos\theta - y\sin\theta$. Similarly, $y'$ has contributions from both components as well: $y' = x\sin\theta + y\cos\theta$. Expressing this in matrix form:

$$\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} x\cos\theta - y\sin\theta \\ x\sin\theta + y\cos\theta \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}$$

Thus, we've derived the 2-dimensional rotation matrix.

Alternative solution: The reason the matrix is called a rotation matrix is that it takes the unit vector $\begin{bmatrix} \cos\alpha \\ \sin\alpha \end{bmatrix}$ to $\begin{bmatrix} \cos(\alpha + \theta) \\ \sin(\alpha + \theta) \end{bmatrix}$.

Proof:

$$\begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} \cos\alpha \\ \sin\alpha \end{bmatrix} = \cos\alpha \begin{bmatrix} \cos\theta \\ \sin\theta \end{bmatrix} + \sin\alpha \begin{bmatrix} -\sin\theta \\ \cos\theta \end{bmatrix} = \begin{bmatrix} \cos\alpha\cos\theta - \sin\alpha\sin\theta \\ \cos\alpha\sin\theta + \sin\alpha\cos\theta \end{bmatrix} = \begin{bmatrix} \cos(\alpha + \theta) \\ \sin(\alpha + \theta) \end{bmatrix}$$

by the compound angle identities.

(d) Now, we want to get back the original unit square from the rotated square in part (b). What matrix should we use to do this? Don't use inverses!

Solution: Use a rotation matrix that rotates by $-60^\circ$:

$$\begin{bmatrix} \cos(-60^\circ) & -\sin(-60^\circ) \\ \sin(-60^\circ) & \cos(-60^\circ) \end{bmatrix}$$
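As a quick numeric sanity check on parts (c) and (d), the sketch below (reusing the hypothetical rotation helper from above) verifies that rotating by $-60^\circ$ undoes a $60^\circ$ rotation, and that $R$ advances a unit vector's angle by $\theta$:

    import numpy as np

    def rotation(theta_deg):
        t = np.radians(theta_deg)
        return np.array([[np.cos(t), -np.sin(t)],
                         [np.sin(t),  np.cos(t)]])

    # Part (d): rotating by -60 degrees undoes a 60-degree rotation.
    v = np.array([0.3, 0.8])   # an arbitrary test vector
    assert np.allclose(rotation(-60) @ (rotation(60) @ v), v)

    # Alternative solution to (c): R maps the unit vector at angle alpha
    # to the unit vector at angle alpha + theta.
    alpha = np.radians(25)
    u = np.array([np.cos(alpha), np.sin(alpha)])
    target = np.array([np.cos(alpha + np.radians(60)),
                       np.sin(alpha + np.radians(60))])
    assert np.allclose(rotation(60) @ u, target)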

(e) Use part (d) to obtain the inverse rotation matrix for a matrix that rotates a vector by $\theta$. Multiply the inverse rotation matrix with the rotation matrix and vice versa. What do you get?

Solution: The inverse matrix is as follows:

$$R^{-1} = \begin{bmatrix} \cos(-\theta) & -\sin(-\theta) \\ \sin(-\theta) & \cos(-\theta) \end{bmatrix} = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix}$$

Multiplying the rotation matrix by its inverse (in either order) gives the identity matrix:

$$\begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$$

Part 2: Commutativity of Operations

A natural question to ask is the following: does the order in which you apply these operations matter? Follow your TA to obtain the answers to the following questions!

(a) Let's see what happens to the unit square when we rotate it by $60^\circ$ and then reflect it along the y-axis.

(b) Now, let's see what happens to the unit square when we first reflect it along the y-axis and then rotate it by $60^\circ$.

(c) Try to do steps (a) and (b) by multiplying the reflection and rotation matrices together (in the correct order for each case). What does this tell you?

(d) If you reflected the unit square twice (along any pair of axes), do you think the order in which you applied the reflections would matter? Why/why not?

Solution: It turns out that reflections are not commutative unless the two reflection axes are perpendicular to each other. For example, reflecting about the x-axis and then the y-axis is commutative, but reflecting about the x-axis and the line $y = x$ is not.
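Here is a small numeric check of Part 2 (our own example; the reflection matrices used are the standard ones for the coordinate axes):

    import numpy as np

    t = np.radians(60)
    R = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])   # rotate by 60 degrees
    Fy = np.array([[-1, 0],
                   [ 0, 1]])                  # reflect across the y-axis
    Fx = np.array([[ 1,  0],
                   [ 0, -1]])                 # reflect across the x-axis

    # Parts (a)-(c): rotate-then-reflect differs from reflect-then-rotate,
    # so rotation and reflection do not commute.
    print(np.allclose(Fy @ R, R @ Fy))    # False

    # Part (d): reflections about perpendicular axes do commute
    # (both orders give a 180-degree rotation, -I).
    print(np.allclose(Fx @ Fy, Fy @ Fx))  # True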

2. Proofs

(a) Suppose for some non-zero vector $\vec{x}$, $A\vec{x} = \vec{0}$. Prove that the columns of $A$ are linearly dependent.

Solution: Begin by defining column vectors $\vec{v}_1, \ldots, \vec{v}_n$:

$$A = \begin{bmatrix} \vec{v}_1 & \vec{v}_2 & \cdots & \vec{v}_n \end{bmatrix}$$

Thus, we can represent the multiplication $A\vec{x} = \vec{0}$ as

$$\sum_{i=1}^{n} x_i \vec{v}_i = \vec{0}$$

Note that the equation above is the definition of linear dependence. That is, there exist coefficients, at least one of which is non-zero, such that the sum of the vectors weighted by the coefficients is zero. These coefficients are the elements of the non-zero vector $\vec{x}$.

(b) Suppose there exists a matrix $A$ whose columns are linearly dependent. Prove that if there exists a solution to $A\vec{x} = \vec{b}$, then there are infinitely many solutions. What is the physical interpretation of this statement?

Solution: Begin by defining column vectors $\vec{v}_1, \ldots, \vec{v}_n$. Recall the definition of linear dependence:

$$\sum_{i=1}^{n} \alpha_i \vec{v}_i = \vec{0}, \quad \text{with not all } \alpha_i = 0$$

Note the constraint that not all of the weights $\alpha_i$ can equal $0$! (Equivalently, at least one of the weights must be non-zero.) This is extremely important; overlooking this detail will make the proof incorrect.

What does this imply? It implies that there exists some non-zero $\vec{\alpha}$ such that $A\vec{\alpha} = \vec{0}$, so that for any $\vec{x}$ with $A\vec{x} = \vec{b}$, the vector $\vec{x} + k\vec{\alpha}$, $k \in \mathbb{R}$, is also a valid solution:

$$A(\vec{x} + k\vec{\alpha}) = A\vec{x} + kA\vec{\alpha} = \vec{b} + \vec{0} = \vec{b}$$

Therefore, if a solution $\vec{x}$ exists, infinitely many solutions must exist:

$$\exists\, \vec{x},\ A\vec{x} = \vec{b} \implies A(\vec{x} + k\vec{\alpha}) = \vec{b}, \quad \forall k \in \mathbb{R}$$
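To make part (b) concrete, here is a small numeric illustration (the matrix and vectors are our own example, not from the worksheet):

    import numpy as np

    # A has linearly dependent columns: the second column is twice the first.
    A = np.array([[1.0, 2.0],
                  [2.0, 4.0]])
    alpha = np.array([2.0, -1.0])   # a non-zero vector with A @ alpha = 0
    assert np.allclose(A @ alpha, 0)

    x = np.array([1.0, 1.0])        # one particular solution
    b = A @ x
    # Every x + k*alpha is also a solution, for any real k.
    for k in (-3.0, 0.5, 10.0):
        assert np.allclose(A @ (x + k * alpha), b)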

(c) Suppose we have an experiment where we have n measurements of linear combinations of n unknowns. We want to show that if at least one of the experiment's measurements can be predicted from the other measurements, then there will be either infinitely many or no solutions. Reword this statement into a proof problem and, as practice, complete the proof.

Solution: Reworded: Prove that if an $n \times n$ matrix $A$ has rows that are linearly dependent, there will be either infinitely many or no solutions to $A\vec{x} = \vec{b}$.

Define row vectors $\vec{v}_1, \ldots, \vec{v}_n$:

$$A = \begin{bmatrix} \vec{v}_1 \\ \vec{v}_2 \\ \vdots \\ \vec{v}_n \end{bmatrix}$$

Recall the definition of linear dependence: some row $\vec{v}_j$ is a weighted sum of the others,

$$\vec{v}_j = \sum_{i \neq j} \alpha_i \vec{v}_i$$

Note that the summation is not over all $i = 1, \ldots, n$ but only over the set where $i \neq j$! This is extremely important. If this detail is overlooked, the proof will be incorrect.

Therefore, we know we can row reduce this matrix to have a row of zeros (by definition of Gaussian elimination), giving a system of the form

$$\begin{bmatrix} \vec{v}_1 \\ \vec{v}_2 \\ \vdots \\ \vec{0}^T \end{bmatrix} \vec{x} = \begin{bmatrix} \tilde{b}_1 \\ \vdots \\ \tilde{b}_n \end{bmatrix}$$

We know that if $\tilde{b}_n = 0$, then $\vec{x}$ has infinitely many solutions. If $\tilde{b}_n \neq 0$, then the system is inconsistent and has no solutions.

(d) Practice Problem: Now suppose there exist two distinct vectors $\vec{x}_1$ and $\vec{x}_2$ that both satisfy $A\vec{x} = \vec{b}$, that is, $A\vec{x}_1 = \vec{b}$ and $A\vec{x}_2 = \vec{b}$. Prove that the columns of $A$ are linearly dependent.

Solution: Let us consider the difference of the two equations:

$$A\vec{x}_1 - A\vec{x}_2 = A(\vec{x}_1 - \vec{x}_2) = \vec{b} - \vec{b} = \vec{0}$$

Once again, we've reached the definition of linear dependence, since $\vec{x}_1 - \vec{x}_2 \neq \vec{0}$.

(e) Challenging Practice Problem: Prove that for an $m \times n$ matrix, the number of linearly independent vectors (both column and row) is at most $\min(m, n)$.

Solution: The number of pivots can, at most, be equal to $\min(m, n)$, and the column rank and row rank both equal the number of pivots, so each is at most $\min(m, n)$.
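Finally, a numeric companion to parts (c) and (e), again with example matrices of our own choosing:

    import numpy as np

    # Part (c): row 3 = row 1 + row 2, so the rows are linearly dependent.
    A = np.array([[1.0, 2.0, 0.0],
                  [0.0, 1.0, 1.0],
                  [1.0, 3.0, 1.0]])
    print(np.linalg.matrix_rank(A))   # 2, fewer than n = 3
    # A x = b is solvable only if b_3 = b_1 + b_2 (infinitely many
    # solutions); otherwise the system is inconsistent (no solutions).

    # Part (e): the rank never exceeds min(m, n); here min(3, 5) = 3.
    B = np.arange(15.0).reshape(3, 5)
    assert np.linalg.matrix_rank(B) <= min(B.shape)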