Basic Linear Algebra in MATLAB


9.29 Optional Lecture 2

In the last optional lecture we learned that the basic type in MATLAB is a matrix of double precision floating point numbers. You learned a number of different tools for initializing matrices and some basic functions that use them. This time, we'll make sure that we understand the basic algebraic operations that can be performed on matrices, and how we can use them to solve a set of linear equations.

A Note on Notation

The convention used in this lecture and in most linear algebra books is that an italic lower case letter (k) denotes a scalar, a bold lower case letter (x) denotes a vector, and a capital letter (A) denotes a matrix. Typically we name our MATLAB variables with a capital letter if they will be used as matrices, and lower case for scalars and vectors.

1 Vector Algebra

Remember that in MATLAB, a vector is simply a matrix with the size of one dimension equal to 1. We should distinguish between a row vector (a 1×n matrix) and a column vector (an n×1 matrix). Recall that we change a row vector x into a column vector using the transpose operator (x' in MATLAB). The same trick works for changing a column vector into a row vector.

We can add two vectors, x and y, together if they have the same dimensions. The resulting vector z = x + y is simply an element by element addition of the components of x and y: z_i = x_i + y_i. From this it follows that vector addition is both commutative and associative, just like regular addition. MATLAB also allows you to add a scalar k (a 1×1 matrix) to a vector. The result of z = x + k is the element by element addition z_i = k + x_i.

Vector multiplication can take a few different forms. First of all, if we multiply a scalar k times a vector x, the result is a vector with the same dimension as x: z = k*x implies z_i = k*x_i. There are two standard ways to multiply two vectors together: the inner product and the outer product.
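Before moving on to products, the element-by-element operations described above can be spelled out in a short Python sketch for readers following along without MATLAB (the function names here are ours, purely illustrative):

```python
# Element-by-element vector addition and scalar addition, as described
# in the text: z_i = x_i + y_i, and z_i = x_i + k.
def vec_add(x, y):
    assert len(x) == len(y), "vectors must have the same dimensions"
    return [xi + yi for xi, yi in zip(x, y)]

def scalar_add(x, k):
    return [xi + k for xi in x]

print(vec_add([1, 2, 3], [4, 5, 6]))  # [5, 7, 9]
print(scalar_add([1, 2, 3], 10))      # [11, 12, 13]
```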
The inner product, sometimes called the dot product, is the result of multiplying a row vector times a column vector. The result is a scalar: z = x*y = Σ_i x_i y_i. To take the inner product of two column vectors, use z = x'*y. As we'll see, the orientation of the vectors matters because MATLAB treats vectors as matrices.
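The sum Σ_i x_i y_i is easy to write out directly. Here is a minimal pure-Python sketch of the inner product (an illustrative aside, not part of the MATLAB lecture):

```python
# Inner (dot) product of two vectors: a single scalar, sum_i x_i * y_i.
def inner(x, y):
    assert len(x) == len(y), "vectors must have the same length"
    return sum(xi * yi for xi, yi in zip(x, y))

print(inner([1, 2, 3], [4, 5, 6]))  # 1*4 + 2*5 + 3*6 = 32
```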

Unlike the inner product, the result of the outer product of two vectors is a matrix. In MATLAB, you get the outer product by multiplying a column vector times a row vector: Z = x*y. The components of Z are Z_ij = x_i y_j. To take the outer product of two column vectors, use Z = x*y'.

Occasionally, what we really want to do is to multiply two vectors together element by element: z_i = x_i y_i. MATLAB provides the .* operator for this operation: z = x.*y.

To test our understanding, let's try some basic MATLAB commands:

x = 1:5
y = 6:10
x + y
x + 5
5*x
x*y
x'*y
x.*y

How would you initialize the following matrix in MATLAB using outer products?

1 2 3 4 5
1 2 3 4 5
1 2 3 4 5
1 2 3 4 5
1 2 3 4 5

2 Matrix Algebra

The matrix operations are simply generalizations of the vector operations when the matrix has multiple rows and columns. You can think of a matrix as a set of row vectors or as a set of column vectors. Matrix addition works element by element, just like vector addition. It is defined for any two matrices of the same size: C = A + B implies that C_ij = A_ij + B_ij. Once again, it is both commutative and associative. Scalar multiplication of matrices is also defined as it was with vectors. The result is a matrix: C = k*A implies C_ij = k*A_ij.

If you multiply a matrix times a column vector, then the result is another column vector, the column of inner products: b = A*x implies b_i = Σ_j A_ij x_j. Similarly, you can multiply a row vector times a matrix to get a row of inner products: b = x*A implies b_i = Σ_j A_ji x_j. Notice that in both cases, the definitions require that the first variable must have the same number of columns as the second variable has rows. This idea generalizes to multiplying two matrices together. For the multiplication C = A*B, the matrix C is simply a collection of inner products: C_ik = Σ_j A_ij B_jk. In this case, A must have the same number of columns as B has rows.
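The formula C_ik = Σ_j A_ij B_jk translates directly into code. Here is a pure-Python sketch (matrices as lists of rows; again an illustration outside the MATLAB lecture), which also shows the outer product as the special case of an n×1 matrix times a 1×n matrix:

```python
# Matrix product C = A*B with C_ik = sum_j A_ij * B_jk, as defined above.
def matmul(A, B):
    assert len(A[0]) == len(B), "cols of A must equal rows of B"
    return [[sum(A[i][j] * B[j][k] for j in range(len(B)))
             for k in range(len(B[0]))]
            for i in range(len(A))]

# The outer product is a column vector times a row vector:
x = [[1], [2], [3]]        # 3x1 column vector
y = [[4, 5, 6]]            # 1x3 row vector
print(matmul(x, y))        # [[4, 5, 6], [8, 10, 12], [12, 15, 18]]
```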
Like ordinary multiplication, matrix multiplication is associative and distributive, but unlike ordinary multiplication, it is not commutative. In general, AB ≠ BA.

Now we are in a position to better understand the matrix transpose. If B = A', then B_ij = A_ji. Think of this as flipping the matrix along the diagonal. This explains why

the transpose operator changes a row vector into a column vector and vice versa. The following identity holds for the definitions of multiplication and transpose: (AB)' = B'A'. This helps us to understand the difference between x'*A and A*x. Notice that for a column vector x, (A*x)' = x'*A'.

There are a few more matrix terms we should know. A square matrix is an n×n matrix (it has the same number of rows and columns). A diagonal matrix A has non-zero elements only along the diagonal (A_ii), and zeros everywhere else. You can initialize a diagonal matrix in MATLAB by passing a vector to the diag command. The identity matrix is a special diagonal matrix with all diagonal elements set to 1. You can initialize an identity matrix using the eye command. Try the following MATLAB command:

diag(1:5)

3 Solving Linear Equations

Let's take a step back for a moment, and try to solve the following set of linear equations:

x_1 + 3x_2 = 4
2x_1 + 2x_2 = 9

With a little manipulation, we find that x_1 = 4.75 and x_2 = -0.25. We could solve this set of equations because we had 2 equations and 2 unknowns. How should we solve a set of equations with 50 equations and 50 unknowns? Let's rewrite the previous expression in matrix form:

[ 1 3 ] [ x_1 ]   [ 4 ]
[ 2 2 ] [ x_2 ] = [ 9 ]

Notice that we could use the same form, Ax = b, for our set of 50 equations with 50 unknowns. As expected, MATLAB provides all of the tools that we need to solve this matrix formula, and it uses the idea of a matrix inverse. The inverse of a square matrix A, which is A^(-1) in the textbooks and inv(A) in MATLAB, has the property that A^(-1)A = AA^(-1) = I. Using this, let's manipulate our previous equation:

A x = b
A^(-1) A x = A^(-1) b
x = A^(-1) b

Now solve the original equations in MATLAB using inv(A)*b. You should get the vector containing 4.75 and -0.25. There are a few things to remember about matrix inverses. First of all, they are only defined for square matrices.
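For the 2×2 case the inverse has a simple closed form, so the manipulation above can be checked by hand. Here is a Python sketch solving the system from the text by explicit inversion (a hand-rolled illustration; in practice you would use inv(A)*b or, better, the backslash operator in MATLAB):

```python
# Solving the 2x2 system from the text by explicit inversion:
#   x1 + 3*x2 = 4
#   2*x1 + 2*x2 = 9
# For a 2x2 matrix [[a, b], [c, d]] the inverse is
# (1/(a*d - b*c)) * [[d, -b], [-c, a]], assuming the determinant is nonzero.
def solve2x2(A, rhs):
    (a, b), (c, d) = A
    det = a * d - b * c
    assert det != 0, "matrix is singular; no inverse exists"
    x1 = (d * rhs[0] - b * rhs[1]) / det
    x2 = (-c * rhs[0] + a * rhs[1]) / det
    return [x1, x2]

print(solve2x2([[1, 3], [2, 2]], [4, 9]))  # [4.75, -0.25]
```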
The inverse interacts with the transpose and multiplication operations through the following identities: (A^(-1))' = (A')^(-1) and (AB)^(-1) = B^(-1)A^(-1). You should

be able to verify these properties on your own using the ideas we've developed. But the most important thing to know about matrix inverses is that they don't always exist, even for square matrices. Using MATLAB, try taking the inverse of the following matrix:

A = [ 1 2 ]
    [ 2 4 ]

Now try inserting this A into the system of equations at the beginning of this section, and solving it using good old fashioned algebra. Why doesn't the inverse exist?

4 Quadratic Optimization

We would like to solve the equation Ax = b even if A is not square. Let's separate the problem into a few cases where the matrix A is an m×n matrix:

If m < n, then we have more unknowns than equations. In general, this system will have infinitely many solutions.

If m > n, then we have more equations than unknowns. In general, this system doesn't have any solution. What if we don't want MATLAB to always return no solution, but we actually want the closest solution in the least squares sense? This is equivalent to minimizing the following quantity:

E = (1/2) Σ_i ( Σ_j A_ij x_j − b_i )^2

MATLAB provides the backslash operator to accomplish the least squares fit for a matrix equation: x = A \ b. Type help slash to appreciate the power of this command. You'll see that we could have used this command to solve the square matrix equations, too.

5 Eigenmannia

How does this relate to the fish data that we used in problem set 1? Recall that we were given a set of points (x_i, y_i), and we were asked to find the coefficients a and b to fit the following linear model:

y_i ≈ a + b x_i

You can think of each point as an equation, and write the entire data set in matrix form:

[ 1  x_1 ]           [ y_1 ]
[ 1  x_2 ] [ a ]  ≈  [ y_2 ]
[  ...   ] [ b ]     [ ... ]
[ 1  x_m ]           [ y_m ]

If you call the left matrix A and the right side b, then calling A \ b will ask MATLAB to solve for the values of a and b that minimize the least-squared error of the model. It will return exactly the same values for a and b that the polyfit command returns.
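One standard way to compute this least squares fit is to solve the normal equations (A'A)p = A'y, which is one of the things the backslash operator can do for you. Here is a Python sketch of the line fit using that approach (the function name is ours, purely illustrative):

```python
# Least-squares line fit y ~ a + b*x via the normal equations
# (A'A) [a, b]' = A'y, where the design matrix A has rows [1, x_i].
def fit_line(xs, ys):
    m = len(xs)
    # Entries of the 2x2 matrix A'A and the 2-vector A'y.
    s1, sx, sxx = m, sum(xs), sum(x * x for x in xs)
    sy, sxy = sum(ys), sum(x * y for x, y in zip(xs, ys))
    det = s1 * sxx - sx * sx
    a = (sxx * sy - sx * sxy) / det
    b = (s1 * sxy - sx * sy) / det
    return a, b

# Points that lie exactly on y = 2 + 3x recover a = 2, b = 3:
print(fit_line([0, 1, 2], [2, 5, 8]))  # (2.0, 3.0)
```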

For this simple example, the polyfit command and the backslash command accomplished the same task. But what if you were given a set of points (x_i, y_i, z_i) and you were asked to fit the following linear model:

z_i ≈ a + b x_i + c y_i

The matrix notation easily scales to this problem, but the polyfit function does not.
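To see how the matrix notation scales, the plane fit works the same way: the design matrix now has rows [1, x_i, y_i], and in MATLAB you would simply call A \ z. A Python sketch of the same idea, solving the 3×3 normal equations by Gaussian elimination (all names here are illustrative, not MATLAB's):

```python
# Least-squares fit z ~ a + b*x + c*y: solve the normal equations
# A'A p = A'z for the coefficient vector p = [a, b, c].
def lstsq(A, b):
    n = len(A[0])
    # Form the normal equations M p = v with M = A'A, v = A'b.
    M = [[sum(r[i] * r[j] for r in A) for j in range(n)] for i in range(n)]
    v = [sum(r[i] * bi for r, bi in zip(A, b)) for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n):
                M[r][c] -= f * M[col][c]
            v[r] -= f * v[col]
    # Back substitution.
    p = [0.0] * n
    for i in reversed(range(n)):
        p[i] = (v[i] - sum(M[i][j] * p[j] for j in range(i + 1, n))) / M[i][i]
    return p

# Points sampled exactly from z = 1 + 2x + 3y recover [1, 2, 3]:
A = [[1, x, y] for x, y in [(0, 0), (1, 0), (0, 1), (1, 1)]]
z = [1 + 2 * x + 3 * y for _, x, y in A]
print(lstsq(A, z))
```

Forming A'A explicitly is fine for a sketch like this, but for badly conditioned problems MATLAB's backslash uses more numerically careful factorizations.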