# Deep Learning Book Notes Chapter 2: Linear Algebra


Compiled by: Abhinaba Bala, Dakshit Agrawal, Mohit Jain

### Section 2.1: Scalars, Vectors, Matrices and Tensors

**Scalar**
- A single number
- Lowercase names in italic typeface

**Vector**
- An array of numbers
- Lowercase names in bold typeface
- Each element is denoted in italic typeface with the corresponding subscript
- Can be visualized as a point, with each element giving the coordinate along a different axis

**Matrix**
- A 2-D array of numbers
- Uppercase names in bold typeface
- Each element is denoted by the name of the matrix in italic typeface with the corresponding indices
- Can be visualized as a collection of vectors: vectors taken row-wise constitute the row space, whereas vectors taken column-wise constitute the column space

**Tensor**
- An array of numbers with 3 or more dimensions

The transpose of a matrix is its mirror image across the main diagonal. Addition is done by adding corresponding elements. When implementing matrix operations, a matrix and a vector can be added by adding the vector to each row (or column) of the matrix; this implicit copying of the vector is known as broadcasting.
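As a quick illustration (not from the book), broadcasting and the transpose can be sketched in NumPy; the array values here are made up:

```python
import numpy as np

# A 3x2 matrix and a length-2 vector (example values).
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
b = np.array([10.0, 20.0])

# Broadcasting: b is implicitly copied and added to every row of A.
C = A + b
print(C)

# Transpose: mirror image across the main diagonal, so shapes flip.
print(A.T.shape)  # (2, 3)
```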

### Section 2.2: Multiplying Matrices and Vectors

Multiplication of a matrix by a vector can be visualized as transforming the vector according to the matrix (i.e. the matrix denotes a transformation). For more details, see the 3Blue1Brown video series; for visualization and intuition, one may also refer to *Matrices as Transformations* by Khan Academy.

The element-wise product of corresponding elements of two matrices is the Hadamard product, denoted A ⊙ B.

A system of linear equations can be written in matrix form as A x = b. Here A ∈ R^(m×n) is a known matrix, b ∈ R^m is a known vector, and x ∈ R^n is the vector of unknown variables.

### Section 2.3: Identity and Inverse Matrices

The identity matrix is a matrix that does not change any vector when multiplied by it. The inverse matrix reverses the transformation applied by a matrix A, and is denoted A^-1.

Given the inverse A^-1, we can in principle find the solution x = A^-1 b for any b. In software, however, A^-1 should usually not be formed explicitly, since its finite-precision representation loses accuracy; instead, the system A x = b is solved directly (e.g. by Gaussian elimination) for each new b.
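A minimal NumPy sketch of this point (the matrix and right-hand side are made-up example values): solving A x = b directly is preferred over forming the inverse, though both agree on a well-conditioned system.

```python
import numpy as np

# A small invertible system A x = b (example values).
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])

# Preferred in practice: solve A x = b directly.
x = np.linalg.solve(A, b)

# Discouraged in practice (less accurate for ill-conditioned A),
# but illustrates the definition of the inverse.
x_via_inverse = np.linalg.inv(A) @ b

assert np.allclose(A @ x, b)
assert np.allclose(x, x_via_inverse)
print(x)  # [2. 3.]
```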

### Section 2.4: Linear Dependence and Span

A linear combination of vectors is a sum of the vectors scaled by their respective scalar coefficients:

A x = Σ_i x_i A_{:,i}

If a vector v^(n) from a set of vectors { v^(1), v^(2), …, v^(n) } can be represented as a linear combination of the other vectors, then the set of vectors is linearly dependent; otherwise, the set is linearly independent.

The span of a set of vectors is the set of all points obtainable by a linear combination of the original vectors. For more details about span and linear combinations, refer to the 3Blue1Brown video series.

Adding to a set a vector that is a linear combination of the other vectors in the set does not expand the span of the set.

The system of linear equations A x = b can be thought of in terms of the set of column vectors of A: the objective is to find scalars (the entries of x) corresponding to each column vector so that the scaled columns sum to the point specified by b. A necessary condition for every point b in R^m to have a solution is n ≥ m. However, even with n ≥ m column vectors covering m dimensions, some of the vectors may be linearly dependent, causing the corresponding span to cover less than all of R^m. Hence a necessary and sufficient condition is for the n column vectors to contain a set of exactly m linearly independent vectors.

A square matrix with linearly dependent columns is known as singular.
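Linear dependence of columns can be checked numerically via the matrix rank; a sketch with made-up example matrices:

```python
import numpy as np

# Two independent columns span R^2.
independent = np.array([[1.0, 0.0],
                        [0.0, 1.0]])
# Second column = 2 * first column -> linearly dependent columns.
dependent = np.array([[1.0, 2.0],
                      [2.0, 4.0]])

print(np.linalg.matrix_rank(independent))  # 2
print(np.linalg.matrix_rank(dependent))    # 1

# A square matrix with dependent columns is singular: determinant is
# numerically zero and no inverse exists.
print(np.linalg.det(dependent))  # numerically zero
```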

### Section 2.5: Norms

- The L^2 norm is known as the Euclidean norm. It is also common to use the squared L^2 norm.
- The L^1 norm is known as the Manhattan distance, whose name comes from the way distance is measured along the grid-like streets of Manhattan, New York.
- If sparsity (some values being exactly 0) or feature selection is needed, the L^1 norm is used; if the values should instead be distributed among the features, the L^2 norm is used.
- Functions of the L^p form with p < 1 have non-convex level surfaces and are not true norms, so they are avoided.
- The L^∞ norm (max norm) is the maximum absolute value of the elements.

The Frobenius norm is used to measure the size of a matrix and is given by taking the L^2 norm of all the elements of the matrix.

### Section 2.6: Special Kinds of Matrices and Vectors

**Diagonal matrices**
- A matrix D is diagonal if and only if D_{i,j} = 0 for all i ≠ j. It is not necessarily square.
- To compute diag(v) x, we only need to scale each element x_i by v_i, i.e. diag(v) x = v ⊙ x.
- The inverse exists only if all the diagonal elements are non-zero. To compute it, we only need to invert the individual elements: diag(v)^-1 = diag([1/v_1, …, 1/v_n]^T).

A symmetric matrix is any matrix that is equal to its own transpose, i.e. A = A^T.

**Orthogonal matrices**
- A vector x and a vector y are orthogonal to each other if x^T y = 0.
- Vectors that are orthogonal and also have unit norm are called orthonormal.
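A short NumPy sketch of the norm definitions and the diagonal-matrix shortcuts above (vector values are made up):

```python
import numpy as np

# L^1, L^2 and L^inf norms of a vector.
y = np.array([3.0, -4.0])
print(np.linalg.norm(y, 1))       # 7.0
print(np.linalg.norm(y, 2))       # 5.0
print(np.linalg.norm(y, np.inf))  # 4.0

# diag(v) @ x is just the element-wise product v * x.
v = np.array([2.0, 3.0, 4.0])
x = np.array([1.0, 1.0, 1.0])
assert np.allclose(np.diag(v) @ x, v * x)

# Inverting a diagonal matrix only requires inverting its diagonal.
assert np.allclose(np.linalg.inv(np.diag(v)), np.diag(1.0 / v))
```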

An orthogonal matrix is a square matrix whose rows are mutually orthonormal and whose columns are mutually orthonormal:

A^T A = A A^T = I, hence A^-1 = A^T

Orthogonal matrices are therefore of interest because their inverse is very cheap to compute.

### Section 2.7: Eigendecomposition

A nonzero vector v that does not change its direction when transformed by a matrix A is known as an eigenvector of A. The factor λ by which the vector is scaled by the transformation is called the eigenvalue corresponding to that eigenvector:

A v = λ v

If v is an eigenvector of A, then so is any rescaled vector s v for s ∈ R, s ≠ 0, and s v has the same eigenvalue. For this reason, we usually look for unit eigenvectors.

The eigendecomposition of A is given by A = V diag(λ) V^-1. It can be visualized as a change of basis: take the eigenvectors as the basis vectors (the vectors relative to which the coordinates of a vector are given). Since each eigenvector is scaled by its corresponding factor λ during the transformation, any vector represented in the eigenvector basis is scaled by a fixed number along each basis direction (hence the diagonal matrix). V^-1 brings the vector back to the original basis vectors (usually î and ĵ). For more details and good visualization, refer to the 3Blue1Brown video series.

Not every matrix can be decomposed into eigenvalues and eigenvectors; sometimes the decomposition exists but contains complex numbers. Every real symmetric

matrix can be decomposed into an expression using only real-valued eigenvectors and eigenvalues:

A = Q Λ Q^T

where Q is an orthogonal matrix composed of eigenvectors of A, and Λ is a diagonal matrix. The eigenvalue Λ_{i,i} is associated with the eigenvector in column i of Q, denoted Q_{:,i}.

The eigendecomposition of a real symmetric matrix A need not be unique. If two or more eigenvectors share the same eigenvalue, then any set of orthogonal vectors lying in their span are also eigenvectors with that eigenvalue. Visualize this as follows: a vector that can be broken down into components along eigenvectors sharing the same eigenvalue is scaled by that common factor and stays in place (e.g. let the eigenvectors be î and ĵ with eigenvalue 2; then the whole space spanned by them is scaled by a factor of 2, with no change in the direction of any vector, when î and ĵ are taken as the basis vectors). By convention, we usually sort the entries of Λ in descending order.

A matrix is singular if and only if any of its eigenvalues is zero.

A symmetric n × n real matrix M is said to be positive definite if all of its eigenvalues are positive, and positive semidefinite if its eigenvalues are all positive or zero. Similarly, M is negative definite if all of its eigenvalues are negative, and negative semidefinite if they are all negative or zero. Positive semidefinite matrices guarantee that for all x, x^T A x ≥ 0; positive definite matrices additionally guarantee that x^T A x = 0 implies x = 0.

**Proof (x^T A x > 0 for positive definite A):** Consider a positive definite matrix A. By the definition above, A is also a real symmetric matrix, so all of its eigenvalues are real and its eigenvectors can be chosen orthonormal. Its eigendecomposition can therefore be written as:

A = Q Λ Q^T

Then, for any x ≠ 0,

x^T A x = x^T Q Λ Q^T x = (Q^T x)^T Λ (Q^T x)

Now Q^T x = [q_1^T x, q_2^T x, …, q_n^T x]^T, where the q_i are the eigenvectors (the columns of Q). Note that Q^T x is a column vector of dimension n × 1, and (Q^T x)^T is a row vector of dimension 1 × n. For simplicity of notation, let

z = Q^T x = [z_1, z_2, …, z_n]^T, where the z_i = q_i^T x are scalars.

The expression then becomes z^T Λ z. Since Λ is a diagonal n × n matrix, multiplication by it scales each component by the corresponding diagonal element, i.e.

z^T Λ = [λ_1 z_1, λ_2 z_2, …, λ_n z_n]

so the expression becomes:

z^T Λ z = Σ_{i=1}^{n} λ_i z_i^2 > 0

since every λ_i > 0 (because A is positive definite), the squares of real numbers are non-negative, and at least one z_i ≠ 0 (because x ≠ 0 and Q is invertible). Hence x^T A x > 0.
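The symmetric eigendecomposition and the positive-definite property can be checked numerically; a sketch with a made-up symmetric matrix (`numpy.linalg.eigh` is specialized for symmetric matrices and returns real eigenvalues with orthonormal eigenvectors):

```python
import numpy as np

# A real symmetric, positive definite matrix (example values).
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])

eigvals, Q = np.linalg.eigh(A)  # A = Q diag(eigvals) Q^T

# Q is orthogonal and the decomposition reconstructs A.
assert np.allclose(Q @ Q.T, np.eye(2))
assert np.allclose(A, Q @ np.diag(eigvals) @ Q.T)

# All eigenvalues positive -> x^T A x > 0 for any nonzero x.
assert np.all(eigvals > 0)
rng = np.random.default_rng(0)
x = rng.standard_normal(2)
assert x @ A @ x > 0
print(eigvals)
```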

### Section 2.8: Singular Value Decomposition

The SVD factorizes a matrix into singular vectors and singular values. Every real matrix has an SVD, but the same is not true of the eigendecomposition; for example, if a matrix is not square, the eigendecomposition is not defined, but a singular value decomposition can still be used. The singular value decomposition is defined as follows:

A = U D V^T

Suppose A is an m × n matrix. Then U is an m × m matrix whose column vectors are known as the left-singular vectors, V is an n × n matrix whose column vectors are known as the right-singular vectors, and D is an m × n matrix whose diagonal elements are known as the singular values of A. U and V are orthogonal matrices, and D is a diagonal matrix, not necessarily square.

The SVD can be visualized similarly to the eigendecomposition, but it applies to non-square matrices as well. The left-singular vectors of A are the eigenvectors of A A^T; the right-singular vectors of A are the eigenvectors of A^T A; and the nonzero singular values of A are the square roots of the nonzero eigenvalues of A A^T (equivalently, of A^T A).

The SVD is used to partially generalize matrix inversion to non-square matrices (the Moore-Penrose pseudoinverse).

**Miscellaneous extra thought:** What is the difference between eigendecomposition and singular value decomposition?
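A NumPy sketch of the SVD on a made-up non-square matrix, checking the reconstruction and the link between singular values and the eigenvalues of A^T A:

```python
import numpy as np

# A non-square (3x2) matrix: no eigendecomposition, but an SVD exists.
A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [1.0, 1.0]])

U, s, Vt = np.linalg.svd(A)  # s holds the singular values (descending)

# Rebuild the m x n "diagonal" matrix D and reconstruct A = U D V^T.
D = np.zeros(A.shape)
D[:len(s), :len(s)] = np.diag(s)
assert np.allclose(A, U @ D @ Vt)

# Nonzero singular values are square roots of the eigenvalues of A^T A.
eigvals = np.linalg.eigvalsh(A.T @ A)  # ascending order
assert np.allclose(np.sort(s**2), eigvals)
print(s)
```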

### Section 2.9: The Moore-Penrose Pseudoinverse

Practical algorithms for computing the pseudoinverse are based on the formula

A^+ = V D^+ U^T

where A^+ is the pseudoinverse of A, and U, D and V come from the singular value decomposition of A. The pseudoinverse D^+ of D is obtained by taking the reciprocal of its nonzero elements and then taking the transpose of the resulting matrix.

When A has more columns than rows, solving a linear equation using the pseudoinverse gives one of the many possible solutions. Specifically, it gives the solution x = A^+ y with minimal Euclidean norm ||x||_2 among all possible solutions.

When A has more rows than columns, it is possible for there to be no solution (a solution exists only if y lies in the span of the column vectors of A). In this case, the pseudoinverse gives the x for which A x is as close as possible to y in terms of the Euclidean norm ||A x - y||_2. Geometrically, this means projecting y onto the column space of A to obtain a vector p, and finding x* such that A x* = p; x* is then the best possible solution to the equation.

### Section 2.10: The Trace Operator

The trace operator gives the sum of all the diagonal entries of a matrix. The Frobenius norm of a matrix can be written as:

||A||_F = sqrt(Tr(A A^T))

The trace of a matrix product composed of many factors is invariant to moving the last factor into the first position, provided the shapes of the corresponding matrices allow the resulting product to be defined:

Tr(ABC) = Tr(CAB) = Tr(BCA)
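The more-rows-than-columns case can be sketched in NumPy with made-up values: `numpy.linalg.pinv` computes A^+ via the SVD, and the resulting x matches the least-squares solution from `numpy.linalg.lstsq`.

```python
import numpy as np

# Overdetermined system: more rows than columns, generally no exact solution.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
y = np.array([1.0, 1.0, 0.0])

# x = A^+ y minimizes ||A x - y||_2.
x = np.linalg.pinv(A) @ y

# Same least-squares solution via lstsq.
x_ls, *_ = np.linalg.lstsq(A, y, rcond=None)
assert np.allclose(x, x_ls)

# A @ x is the projection of y onto the column space of A.
print(A @ x)
```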

A scalar is its own trace: a = Tr(a). If A is an n × n matrix, then the sum of the n eigenvalues of A is the trace of A.

### Section 2.11: Determinant

If A is an n × n matrix, then the product of the n eigenvalues of A is the determinant of A. The absolute value of the determinant gives the scale factor by which area or volume (or a higher-dimensional analogue) is multiplied under the associated linear transformation, while its sign indicates whether the transformation preserves orientation. For more details, refer to the 3Blue1Brown video series. If the determinant is 0, space is contracted completely along at least one dimension, causing it to lose all of its volume. If the determinant is 1, the transformation preserves volume.

### Section 2.12: Example: Principal Components Analysis

Read from the book. Beautiful explanation given with a concrete proof.
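The trace and determinant identities above can be verified numerically on random matrices; a sketch (random example values, eigenvalues may be complex for a general real matrix, so only the real part is compared):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
C = rng.standard_normal((3, 3))

# Cyclic property of the trace: Tr(ABC) = Tr(CAB) = Tr(BCA).
assert np.isclose(np.trace(A @ B @ C), np.trace(C @ A @ B))
assert np.isclose(np.trace(A @ B @ C), np.trace(B @ C @ A))

eigvals = np.linalg.eigvals(A)
# Trace = sum of eigenvalues; determinant = product of eigenvalues.
assert np.isclose(np.trace(A), eigvals.sum().real)
assert np.isclose(np.linalg.det(A), eigvals.prod().real)
```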


. (5 points) (a) If A is a 3 by 4 matrix, what does this tell us about its nullspace? dim N(A), since rank(a) 3. (b) If we also know that Ax = has no solution, what do we know about the rank of A? C(A)

### Vectors To begin, let us describe an element of the state space as a point with numerical coordinates, that is x 1. x 2. x =

Linear Algebra Review Vectors To begin, let us describe an element of the state space as a point with numerical coordinates, that is x 1 x x = 2. x n Vectors of up to three dimensions are easy to diagram.

### Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition

Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition Prof. Tesler Math 283 Fall 2016 Also see the separate version of this with Matlab and R commands. Prof. Tesler Diagonalizing

### Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition

Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition Prof. Tesler Math 283 Fall 2018 Also see the separate version of this with Matlab and R commands. Prof. Tesler Diagonalizing

### MTH 464: Computational Linear Algebra

MTH 464: Computational Linear Algebra Lecture Outlines Exam 2 Material Prof. M. Beauregard Department of Mathematics & Statistics Stephen F. Austin State University February 6, 2018 Linear Algebra (MTH

### LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM

LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM Unless otherwise stated, all vector spaces in this worksheet are finite dimensional and the scalar field F is R or C. Definition 1. A linear operator

### Problem Set (T) If A is an m n matrix, B is an n p matrix and D is a p s matrix, then show

MTH 0: Linear Algebra Department of Mathematics and Statistics Indian Institute of Technology - Kanpur Problem Set Problems marked (T) are for discussions in Tutorial sessions (T) If A is an m n matrix,

### LINEAR ALGEBRA 1, 2012-I PARTIAL EXAM 3 SOLUTIONS TO PRACTICE PROBLEMS

LINEAR ALGEBRA, -I PARTIAL EXAM SOLUTIONS TO PRACTICE PROBLEMS Problem (a) For each of the two matrices below, (i) determine whether it is diagonalizable, (ii) determine whether it is orthogonally diagonalizable,

### Lecture 7: Positive Semidefinite Matrices

Lecture 7: Positive Semidefinite Matrices Rajat Mittal IIT Kanpur The main aim of this lecture note is to prepare your background for semidefinite programming. We have already seen some linear algebra.

### Linear Algebra & Geometry why is linear algebra useful in computer vision?

Linear Algebra & Geometry why is linear algebra useful in computer vision? References: -Any book on linear algebra! -[HZ] chapters 2, 4 Some of the slides in this lecture are courtesy to Prof. Octavia

### (v, w) = arccos( < v, w >

MA322 Sathaye Notes on Inner Products Notes on Chapter 6 Inner product. Given a real vector space V, an inner product is defined to be a bilinear map F : V V R such that the following holds: For all v

### Appendix A: Matrices

Appendix A: Matrices A matrix is a rectangular array of numbers Such arrays have rows and columns The numbers of rows and columns are referred to as the dimensions of a matrix A matrix with, say, 5 rows

### CS123 INTRODUCTION TO COMPUTER GRAPHICS. Linear Algebra 1/33

Linear Algebra 1/33 Vectors A vector is a magnitude and a direction Magnitude = v Direction Also known as norm, length Represented by unit vectors (vectors with a length of 1 that point along distinct

### Linear Algebra Formulas. Ben Lee

Linear Algebra Formulas Ben Lee January 27, 2016 Definitions and Terms Diagonal: Diagonal of matrix A is a collection of entries A ij where i = j. Diagonal Matrix: A matrix (usually square), where entries

### 1. Linear systems of equations. Chapters 7-8: Linear Algebra. Solution(s) of a linear system of equations (continued)

1 A linear system of equations of the form Sections 75, 78 & 81 a 11 x 1 + a 12 x 2 + + a 1n x n = b 1 a 21 x 1 + a 22 x 2 + + a 2n x n = b 2 a m1 x 1 + a m2 x 2 + + a mn x n = b m can be written in matrix

### Stat 159/259: Linear Algebra Notes

Stat 159/259: Linear Algebra Notes Jarrod Millman November 16, 2015 Abstract These notes assume you ve taken a semester of undergraduate linear algebra. In particular, I assume you are familiar with the

### LEAST SQUARES SOLUTION TRICKS

LEAST SQUARES SOLUTION TRICKS VESA KAARNIOJA, JESSE RAILO AND SAMULI SILTANEN Abstract This handout is for the course Applications of matrix computations at the University of Helsinki in Spring 2018 We

### Introduction to Numerical Linear Algebra II

Introduction to Numerical Linear Algebra II Petros Drineas These slides were prepared by Ilse Ipsen for the 2015 Gene Golub SIAM Summer School on RandNLA 1 / 49 Overview We will cover this material in

### Elementary maths for GMT

Elementary maths for GMT Linear Algebra Part 2: Matrices, Elimination and Determinant m n matrices The system of m linear equations in n variables x 1, x 2,, x n a 11 x 1 + a 12 x 2 + + a 1n x n = b 1