1 MATH FACTS

1.1 Vectors

1.1.1 Definition

We use the overhead arrow to denote a column vector, i.e., a number with a direction. For example, in three-space, we write

\vec{x} = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}.

The elements of a vector have a graphical interpretation, which is particularly easy to see in two or three dimensions.

1. Vector addition is pointwise:

\vec{x} + \vec{y} = \begin{bmatrix} x_1 + y_1 \\ x_2 + y_2 \\ x_3 + y_3 \end{bmatrix}.

Graphically, addition is stringing the vectors together head to tail.

2. Scalar multiplication is pointwise:

\alpha \vec{x} = \begin{bmatrix} \alpha x_1 \\ \alpha x_2 \\ \alpha x_3 \end{bmatrix}.

1.1.2 Vector Magnitude

The total length of a vector of dimension m, its Euclidean norm, is given by

\|\vec{x}\| = \sqrt{\sum_{i=1}^{m} x_i^2};

this scalar is commonly used to normalize a vector to length one: \hat{x} = \vec{x} / \|\vec{x}\|.

1.1.3 Vector Dot Product

The dot product of two vectors is the sum of the products of the elements:

\vec{x} \cdot \vec{y} = \vec{x}^T \vec{y} = \sum_{i=1}^{m} x_i y_i.

The dot product also satisfies

\vec{x} \cdot \vec{y} = \|\vec{x}\| \|\vec{y}\| \cos\theta,

where \theta is the angle between the vectors.
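A minimal numerical check of these definitions, sketched in NumPy (the library and the example vectors are illustrative choices):

```python
import numpy as np

x = np.array([2.0, 1.0, 7.0])
y = np.array([1.0, -1.0, 3.0])

# 1. Vector addition is pointwise.
print(x + y)                          # [ 3.  0. 10.]

# 2. Scalar multiplication is pointwise.
print(2 * x)                          # [ 4.  2. 14.]

# Euclidean norm, and normalization to length one.
norm_x = np.sqrt(np.sum(x ** 2))      # identical to np.linalg.norm(x)
x_hat = x / norm_x
print(np.linalg.norm(x_hat))          # 1.0

# Dot product: sum of elementwise products, which equals |x||y|cos(theta).
dot = np.sum(x * y)                   # identical to x @ y
cos_theta = dot / (np.linalg.norm(x) * np.linalg.norm(y))
print(dot, np.degrees(np.arccos(cos_theta)))
```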

1.1.4 Vector Cross Product

The cross product of two three-dimensional vectors, \vec{x} \times \vec{y} = \vec{z}, is another vector, whose

1. direction is normal to the plane formed by the two vectors,
2. direction is given by the right-hand rule, rotating from \vec{x} to \vec{y},
3. magnitude is the area of the parallelogram formed by the two vectors (the cross product of two parallel vectors is zero), and
4. (signed) magnitude is equal to \|\vec{x}\| \|\vec{y}\| \sin\theta, where \theta is the angle between the two vectors, measured from \vec{x} to \vec{y}.

The schoolbook formula is

\vec{x} \times \vec{y} = \begin{bmatrix} x_2 y_3 - x_3 y_2 \\ x_3 y_1 - x_1 y_3 \\ x_1 y_2 - x_2 y_1 \end{bmatrix}.
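The same arbitrary vectors also serve to check the schoolbook formula and the geometric properties listed above:

```python
import numpy as np

x = np.array([2.0, 1.0, 7.0])
y = np.array([1.0, -1.0, 3.0])

# Schoolbook formula for z = x × y.
z = np.array([x[1] * y[2] - x[2] * y[1],
              x[2] * y[0] - x[0] * y[2],
              x[0] * y[1] - x[1] * y[0]])
print(np.allclose(z, np.cross(x, y)))                  # True

# Property 1: z is normal to the plane formed by x and y.
print(np.isclose(z @ x, 0.0), np.isclose(z @ y, 0.0))  # True True

# Properties 3 and 4: |z| is the parallelogram area |x||y|sin(theta).
cos_t = (x @ y) / (np.linalg.norm(x) * np.linalg.norm(y))
area = np.linalg.norm(x) * np.linalg.norm(y) * np.sqrt(1.0 - cos_t ** 2)
print(np.isclose(np.linalg.norm(z), area))             # True

# The cross product of two parallel vectors is zero.
print(np.cross(x, 5 * x))                              # [0. 0. 0.]
```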

1.2 Matrices

1.2.1 Definition

A matrix, or array, is equivalent to a set of column vectors, arranged side by side, say

A = [\vec{a} \ \vec{b}] = \begin{bmatrix} a_1 & b_1 \\ a_2 & b_2 \\ a_3 & b_3 \end{bmatrix}.

This matrix has three rows (m = 3) and two columns (n = 2); a vector is a special case of a matrix with one column. Matrices, like vectors, permit pointwise addition and scalar multiplication. We usually use an upper-case symbol to denote a matrix.

1.2.2 Multiplying a Vector by a Matrix

If A_{ij} denotes the element of matrix A in the i'th row and the j'th column, then the multiplication \vec{c} = A\vec{v} is constructed as:

c_i = \sum_{j=1}^{n} A_{ij} v_j,

where n is the number of columns in A; \vec{c} will have as many rows as A has rows (m). Note that this multiplication is well-defined only if \vec{v} has as many rows as A has columns; they have consistent inner dimension n. The product \vec{v}A would be well-posed only if \vec{v} had one row, and the proper number of columns. There is another important interpretation of this vector multiplication: let the subscript k indicate all rows, so that each \vec{a}_k is the k'th column vector. Then

\vec{c} = A\vec{v} = \sum_{k=1}^{n} \vec{a}_k v_k.

We are multiplying column vectors of A by the scalar elements of \vec{v}; both constructions are checked numerically in the sketches after Section 1.2.5.

1.2.3 Multiplying a Matrix by a Matrix

The multiplication C = AB, so that C is equivalent to a side-by-side arrangement of column vectors

C = A [\vec{b}_1 \ \vec{b}_2 \ \cdots \ \vec{b}_k] = [A\vec{b}_1 \ A\vec{b}_2 \ \cdots \ A\vec{b}_k],

is constructed as

C_{ij} = \sum_{k=1}^{n} A_{ik} B_{kj},

where n is the number of columns in matrix A. The same inner dimension condition applies as noted above: the number of columns in A must equal the number of rows in B. Matrix multiplication is:

1. Associative: (AB)C = A(BC).
2. Distributive: A(B + C) = AB + AC, (B + C)A = BA + CA.
3. NOT Commutative: AB \neq BA, except in special cases.

1.2.4 Common Matrices

Identity. The identity matrix is usually denoted I, and comprises a square matrix with ones on the diagonal, and zeros elsewhere, e.g.,

I_{3 \times 3} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}.

The identity always satisfies A I_{n \times n} = I_{m \times m} A = A.

Diagonal Matrices. A diagonal matrix is square, and has all zeros off the diagonal. For instance, the following is a diagonal matrix:

\begin{bmatrix} 4 & 0 & 0 \\ 0 & -2 & 0 \\ 0 & 0 & 3 \end{bmatrix}.

The product of a diagonal matrix with another diagonal matrix is diagonal, and in this case the operation is commutative.

1.2.5 Transpose

The transpose of a vector or matrix, indicated by a T superscript, results from simply swapping the row-column indices of each entry; it is equivalent to "flipping" the vector or matrix around the diagonal line. For example,

\begin{bmatrix} 1 & 2 \\ 4 & 5 \\ 8 & 9 \end{bmatrix}^T = \begin{bmatrix} 1 & 4 & 8 \\ 2 & 5 & 9 \end{bmatrix}.

A very useful property of the transpose is

(AB)^T = B^T A^T.
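The sketch below checks the two constructions of A\vec{v} from Section 1.2.2 (row-by-row sums versus a weighted combination of columns), with an arbitrary 3 × 2 example:

```python
import numpy as np

A = np.array([[2.0, 3.0],
              [1.0, 3.0],
              [7.0, 2.0]])            # m = 3 rows, n = 2 columns
v = np.array([4.0, -1.0])

# Row-by-row construction: c_i = sum_j A_ij v_j.
c = np.array([sum(A[i, j] * v[j] for j in range(A.shape[1]))
              for i in range(A.shape[0])])
print(np.allclose(c, A @ v))          # True

# Column interpretation: the columns of A weighted by the elements of v.
c_cols = v[0] * A[:, 0] + v[1] * A[:, 1]
print(np.allclose(c_cols, A @ v))     # True
```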

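A companion sketch for the rules of Sections 1.2.3 through 1.2.5, using random test matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
C = rng.standard_normal((3, 3))

# Associative and distributive, but not commutative.
print(np.allclose((A @ B) @ C, A @ (B @ C)))    # True
print(np.allclose(A @ (B + C), A @ B + A @ C))  # True
print(np.allclose(A @ B, B @ A))                # False (in general)

# The identity, and commuting diagonal matrices.
I = np.eye(3)
D1, D2 = np.diag([4.0, -2.0, 3.0]), np.diag([1.0, 5.0, 2.0])
print(np.allclose(A @ I, A), np.allclose(I @ A, A))   # True True
print(np.allclose(D1 @ D2, D2 @ D1))                  # True

# Transpose of a product reverses the order: (AB)^T = B^T A^T.
print(np.allclose((A @ B).T, B.T @ A.T))              # True
```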
1.2.6 Determinant

The determinant of a square matrix is a scalar equal to the volume of the parallelepiped enclosed by the constituent vectors. The two-dimensional case is particularly easy to remember, and illustrates the principle of volume:

\det(A) = \det \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix} = A_{11} A_{22} - A_{21} A_{12}.

In higher dimensions, the determinant is more complicated to compute. The general formula allows one to pick a row i, perhaps the one containing the most zeros, and apply

\det(A) = \sum_{j=1}^{n} A_{ij} (-1)^{i+j} \Delta_{ij},

where \Delta_{ij} is the determinant of the sub-matrix formed by neglecting the i'th row and the j'th column. The formula is symmetric, in the sense that one could also target the j'th column:

\det(A) = \sum_{i=1}^{n} A_{ij} (-1)^{i+j} \Delta_{ij}.

If the determinant of a matrix is zero, then the matrix is said to be singular: there is no volume, and this results from the fact that the constituent vectors do not span the matrix dimension. For instance, in two dimensions, a singular matrix has its vectors colinear; in three dimensions, a singular matrix has all its vectors lying in a (two-dimensional) plane. Note also that

\det(AB) = \det(A) \det(B).

1.2.7 Inverse

If \det(A) \neq 0, then the matrix A is said to be nonsingular. The inverse of a square matrix A, denoted A^{-1}, satisfies

A A^{-1} = A^{-1} A = I.

Its computation requires the determinant above, and the following definition of the adjoint matrix:

\operatorname{adj}(A)_{ij} = (-1)^{i+j} \Delta_{ji}.

Once this computation is made, the inverse follows from

A^{-1} = \frac{\operatorname{adj}(A)}{\det(A)}.

If A is singular, i.e., \det(A) = 0, then the inverse does not exist. The inverse finds common application in solving systems of linear equations such as

A\vec{x} = \vec{b} \implies \vec{x} = A^{-1} \vec{b}.

1.2.8 Trace

The trace of a square matrix is simply the sum of its diagonal elements:

\operatorname{tr}(A) = \sum_{i=1}^{n} A_{ii}.
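The cofactor expansion, the adjoint-based inverse, and the trace can be mirrored directly in code; the recursive routine below is illustrative only (it is O(n!), whereas np.linalg.det is the practical tool):

```python
import numpy as np

def det_cofactor(A, row=0):
    """Determinant by cofactor expansion along one row."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        # Delta_ij: delete the i'th row and j'th column.
        sub = np.delete(np.delete(A, row, axis=0), j, axis=1)
        total += A[row, j] * (-1) ** (row + j) * det_cofactor(sub)
    return total

A = np.array([[4.0, 1.0, 2.0],
              [2.0, 3.0, 0.0],
              [1.0, 0.0, 5.0]])

print(np.isclose(det_cofactor(A), np.linalg.det(A)))   # True

# Inverse via the adjoint: A^-1 = adj(A)/det(A).
cof = np.array([[(-1) ** (i + j)
                 * det_cofactor(np.delete(np.delete(A, i, 0), j, 1))
                 for j in range(3)] for i in range(3)])
print(np.allclose(cof.T / det_cofactor(A), np.linalg.inv(A)))  # True

# det(AB) = det(A)det(B), and the trace is the sum of the diagonal.
B = np.diag([1.0, 2.0, 3.0])
print(np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B)))
print(np.trace(A))                                     # 12.0
```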

1.2.9 Eigenvalues and Eigenvectors

A typical eigenvalue problem is stated as

A\vec{v} = \lambda \vec{v},

where A is an n \times n matrix, \vec{v} is a column vector with n elements, and \lambda is a scalar. We ask for what nonzero vectors \vec{v} (right eigenvectors), and scalars \lambda (eigenvalues), the equation will be satisfied. Since the above is equivalent to (A - \lambda I)\vec{v} = \vec{0}, it is clear that

\det(A - \lambda I) = 0.

This observation leads to the solutions for \lambda; here is an example for the two-dimensional case:

A = \begin{bmatrix} 4 & 1 \\ 2 & 3 \end{bmatrix} \implies \det(A - \lambda I) = (4 - \lambda)(3 - \lambda) - 2 = (\lambda - 5)(\lambda - 2).

Thus, A has two eigenvalues, \lambda_1 = 5 and \lambda_2 = 2. Each is associated with a right eigenvector; in this example,

\vec{v}_1 = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \qquad \vec{v}_2 = \frac{1}{\sqrt{5}} \begin{bmatrix} 1 \\ -2 \end{bmatrix}.

Eigenvectors have arbitrary magnitude and sign; they are often normalized to have unity magnitude and positive first element (as above). A set of eigenvectors associated with distinct eigenvalues is always linearly independent. The condition

\operatorname{rank}(A - \lambda I) = n - 1

indicates that there is only one eigenvector for the eigenvalue \lambda; if the rank is less than this, then there are multiple independent eigenvectors that go with \lambda. The above discussion relates only the right eigenvectors, generated from the equation A\vec{v} = \lambda\vec{v}. Left eigenvectors, also useful for many problems, pertain to the transpose of A:

A^T \vec{w} = \lambda \vec{w}.

A and A^T share the same eigenvalues, since they share the same characteristic determinant, \det(A - \lambda I) = \det(A^T - \lambda I). Example: for the matrix above,

A^T = \begin{bmatrix} 4 & 2 \\ 1 & 3 \end{bmatrix} \implies \vec{w}_1 = \frac{1}{\sqrt{5}} \begin{bmatrix} 2 \\ 1 \end{bmatrix}, \qquad \vec{w}_2 = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 \\ -1 \end{bmatrix}.

1.2.10 Modal Decomposition

The right and left eigenvectors of a particular eigenvalue can be scaled so that they have unity dot product, \vec{w}_i^T \vec{v}_i = 1; the dot product of a left eigenvector with the right eigenvector of a different eigenvalue is zero. Thus, if

V = [\vec{v}_1 \ \cdots \ \vec{v}_n] \quad \text{and} \quad W = [\vec{w}_1 \ \cdots \ \vec{w}_n],

then we have

W^T V = I, \quad \text{or} \quad W^T = V^{-1}.

Next, construct a diagonal matrix of eigenvalues:

\Lambda = \begin{bmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_n \end{bmatrix};

it follows that

AV = V\Lambda \implies A = V \Lambda W^T = \sum_{i=1}^{n} \lambda_i \vec{v}_i \vec{w}_i^T.

Hence A can be written as a sum of modal components.¹

¹ By carrying out successive multiplications, it can be shown that A^k has its eigenvalues at \lambda_i^k, and keeps the same eigenvectors as A.
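The two-dimensional example of Section 1.2.9 can be reproduced with np.linalg.eig; note that NumPy normalizes eigenvectors to unit magnitude but guarantees neither the sign convention nor the eigenvalue ordering used above:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Right eigenvectors: A v = lambda v (v's are the columns of V).
lam, V = np.linalg.eig(A)
print(lam)                            # 5 and 2 (order may vary)

# Left eigenvectors come from the transpose: A^T w = lambda w.
lam_left, W = np.linalg.eig(A.T)
print(lam_left)                       # the same eigenvalues

# Verify the defining equations for each pair.
for i in range(2):
    print(np.allclose(A @ V[:, i], lam[i] * V[:, i]))         # True
    print(np.allclose(A.T @ W[:, i], lam_left[i] * W[:, i]))  # True
```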

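For the modal decomposition of Section 1.2.10, a convenient shortcut is to take W^T = V^{-1}, which builds in the scaling \vec{w}_i^T \vec{v}_i = 1; a sketch for the same example:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

lam, V = np.linalg.eig(A)
W_T = np.linalg.inv(V)      # rows are the scaled left eigenvectors

# A as a sum of modal components: A = sum_i lambda_i v_i w_i^T.
A_modal = sum(lam[i] * np.outer(V[:, i], W_T[i, :]) for i in range(2))
print(np.allclose(A, A_modal))                       # True

# Footnote check: A^k has eigenvalues lambda_i^k and the same eigenvectors.
A3 = np.linalg.matrix_power(A, 3)
print(np.allclose(A3, V @ np.diag(lam ** 3) @ W_T))  # True
```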
1.2.11 Singular Value

Let G be an m \times n, possibly complex, matrix. The singular value decomposition (SVD) computes three matrices satisfying

G = U \Sigma V^*,

where U is m \times m, \Sigma is m \times n, and V is n \times n. The star notation indicates a complex-conjugate transpose. The matrix \Sigma is diagonal: each nonzero entry on its diagonal is a real, positive singular value, ordered such that

\sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_p \geq 0, \quad p = \min(m, n).

The notation is common that \bar{\sigma} = \sigma_1, the maximum singular value, and \underline{\sigma} = \sigma_p, the minimum singular value. The auxiliary matrices U and V are unitary, i.e., they satisfy

U U^* = I_{m \times m}, \qquad V V^* = I_{n \times n}.

Like eigenvalues, the singular values of G are related to projections: \sigma_i represents the Euclidean size of the matrix G along the i'th singular vector \vec{v}_i (the i'th column of V):

\sigma_i = \|G \vec{v}_i\|.

Other properties of the singular value include:

\bar{\sigma}(G) = \max_{\vec{x} \neq \vec{0}} \frac{\|G\vec{x}\|}{\|\vec{x}\|}; \qquad \underline{\sigma}(G) = \min_{\vec{x} \neq \vec{0}} \frac{\|G\vec{x}\|}{\|\vec{x}\|} \ (\text{for } m \geq n); \qquad \sigma_i(G) = \sqrt{\lambda_i(G^* G)}.
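Similarly, np.linalg.svd illustrates the decomposition and the projection property \sigma_i = \|G\vec{v}_i\|, with an arbitrary 2 × 3 example:

```python
import numpy as np

G = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0]])          # m = 2, n = 3

U, s, Vh = np.linalg.svd(G)              # s holds the singular values
print(s)                                 # ordered so that s[0] >= s[1] >= 0

# Rebuild G = U Sigma V*; Sigma is m x n with s on its diagonal.
Sigma = np.zeros(G.shape)
Sigma[:2, :2] = np.diag(s)
print(np.allclose(G, U @ Sigma @ Vh))    # True

# U and V are unitary, and sigma_i = |G v_i| (v_i is the i'th row of Vh).
print(np.allclose(U @ U.conj().T, np.eye(2)))    # True
print(np.allclose(Vh @ Vh.conj().T, np.eye(3)))  # True
for i in range(2):
    print(np.isclose(s[i], np.linalg.norm(G @ Vh[i, :])))   # True

# The singular values squared are eigenvalues of G G*.
print(np.allclose(np.sort(s ** 2), np.sort(np.linalg.eigvals(G @ G.T).real)))
```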

1.3 Laplace Transform

1.3.1 Definition

The Laplace transform converts time-domain signals into a frequency-domain equivalent. The signal y(t) has transform Y(s) defined as follows:

Y(s) = \mathcal{L}\{y(t)\} = \int_0^\infty y(\tau) e^{-s\tau} \, d\tau,

where s is an unspecified complex number; Y(s) is considered to be complex as a result. Note that the Laplace transform is linear, and so it is distributive:

\mathcal{L}\{x(t) + y(t)\} = \mathcal{L}\{x(t)\} + \mathcal{L}\{y(t)\}.

The following table gives a list of some useful transform pairs and other properties, for reference:

    y(t)                                       Y(s)
    \delta(t)   (unit impulse)                 1
    1(t)        (unit step)                    1/s
    e^{-\alpha t}                              1/(s + \alpha)
    \sin \omega t                              \omega/(s^2 + \omega^2)
    \cos \omega t                              s/(s^2 + \omega^2)
    y(t - \tau) (time delay)                   e^{-s\tau} Y(s)
    dy/dt       (differentiation)              s Y(s) - y(0)
    \int_0^t y(\tau) \, d\tau  (integration)   Y(s)/s

The last two properties are of special importance: for control system design, the differentiation of a signal is equivalent to multiplication of its Laplace transform by s; integration of a signal is equivalent to division by s. The other terms that arise, such as the initial condition y(0), will cancel if y(0) = 0; when y(0) is finite, they are usually ignored.

1.3.2 Convergence

We note first that the value of s affects the convergence of the integral. For instance, if y(t) = e^{at}, then the integral converges only for Re(s) > a, since the integrand is e^{(a-s)t} in this case. Convergence issues are not a problem for evaluation of the Laplace transform, however, because of analytic continuation. This result from complex analysis holds that if two complex functions are equal on some arc (or line) in the complex plane, then they are equivalent everywhere. This fact allows us to always pick a value of s for which the integral above converges, and then by extension infer the existence of the general transform.
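The entries of the table can be reproduced symbolically; a minimal sketch using SymPy (the choice of tool is ours):

```python
import sympy as sp

t = sp.symbols('t', positive=True)
s = sp.symbols('s')
a, w = sp.symbols('alpha omega', positive=True)

# A few pairs from the table; noconds=True drops the convergence conditions.
print(sp.laplace_transform(sp.exp(-a * t), t, s, noconds=True))  # 1/(alpha + s)
print(sp.laplace_transform(sp.sin(w * t), t, s, noconds=True))   # omega/(omega**2 + s**2)
print(sp.laplace_transform(sp.cos(w * t), t, s, noconds=True))   # s/(omega**2 + s**2)

# Linearity: the transform of a sum is the sum of the transforms.
lhs = sp.laplace_transform(sp.exp(-a * t) + sp.cos(w * t), t, s, noconds=True)
rhs = (sp.laplace_transform(sp.exp(-a * t), t, s, noconds=True)
       + sp.laplace_transform(sp.cos(w * t), t, s, noconds=True))
print(sp.simplify(lhs - rhs))                                    # 0
```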

1.3.3 Convolution Theorem

One of the main points of the Laplace transform is the ease of dealing with dynamic systems. As with the Fourier transform, the convolution of two signals in the time domain corresponds with the multiplication of signals in the frequency domain. Consider a system whose impulse response is h(t), being driven by an input signal u(t); the output is y(t) = h(t) * u(t). The Convolution Theorem is

y(t) = \int_0^t h(t - \tau) u(\tau) \, d\tau \iff Y(s) = H(s) U(s).

Here's the proof given by Siebert:

Y(s) = \int_0^\infty y(t) e^{-st} \, dt
     = \int_0^\infty \left[ \int_0^\infty h(t - \tau) u(\tau) \, d\tau \right] e^{-st} \, dt   (causality: h(t - \tau) = 0 for \tau > t)
     = \int_0^\infty u(\tau) \left[ \int_0^\infty h(t - \tau) e^{-st} \, dt \right] d\tau
     = \int_0^\infty u(\tau) H(s) e^{-s\tau} \, d\tau   (time-delay property)
     = H(s) U(s).

When h(t) is the impulse response of a dynamic system, then y(t) represents the output of this system when it is driven by the external signal u(t).

1.3.4 Solution of Differential Equations by Laplace Transform

The Convolution Theorem allows one to solve (linear time-invariant) differential equations in the following way:

1. Transform the system impulse response h(t) into H(s), and the input signal u(t) into U(s), using the transform pairs.
2. Perform the multiplication in the Laplace domain to find Y(s) = H(s)U(s).
3. Ignoring the effects of pure time delays, break Y(s) into partial fractions with no powers of s greater than 2 in the denominator.
4. Generate the time-domain response from the simple transform pairs. Apply time delay as necessary.

Specific examples of this procedure are given in a later section on transfer functions.
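A minimal sketch of this four-step procedure in SymPy, assuming for illustration a first-order impulse response h(t) = e^{-t} (so H(s) = 1/(s + 1)) driven by a unit step:

```python
import sympy as sp

t, tau = sp.symbols('t tau', positive=True)
s = sp.symbols('s')

# Step 1: transform the impulse response and the input signal.
H = sp.laplace_transform(sp.exp(-t), t, s, noconds=True)     # 1/(s + 1)
U = sp.laplace_transform(sp.Integer(1), t, s, noconds=True)  # 1/s

# Step 2: multiply in the Laplace domain.
Y = H * U

# Step 3: break Y(s) into partial fractions.
Y_pf = sp.apart(Y, s)
print(Y_pf)                      # 1/s - 1/(s + 1)

# Step 4: invert term by term with the simple pairs.
y = sp.inverse_laplace_transform(Y_pf, s, t)
print(y)                         # 1 - exp(-t), possibly times Heaviside(t)

# Cross-check the Convolution Theorem: the transform of the time-domain
# convolution equals H(s)U(s).
y_conv = sp.integrate(sp.exp(-(t - tau)), (tau, 0, t))       # 1 - exp(-t)
print(sp.simplify(sp.laplace_transform(y_conv, t, s, noconds=True) - Y))  # 0
```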