APPENDIX
Basic Linear Algebra Concepts

This appendix provides a very basic introduction to linear algebra concepts. Some of these concepts are intentionally presented here in a somewhat simplified (not as general as it could be) form. Our goal is not to teach all the intricacies of this very important field of mathematics, but to enable readers to understand the linear algebra notation and applications described in this book.

A.1 SYSTEMS OF EQUATIONS

Consider a very simple equation:

$$a \cdot x = b$$

In this equation, $x$ is a variable, and $a$ and $b$ are input data. To find the value of the variable $x$, we would simply write

$$x = \frac{b}{a}$$

Equivalently, we could have written

$$x = a^{-1} \cdot b$$

Consider now a simple system of two linear equations:

$$3x_1 + 8x_2 = 46$$
$$10x_1 - 7x_2 = -15$$

The variables in this system are $x_1$ and $x_2$, and the input data consist of the coefficients 3, 8, 10, $-7$ and the constants to the right of the equal sign, 46 and $-15$. The way we would normally solve this system of equations is to express one of the variables through the other from one of the equations and plug it into the other equation:

$$x_2 = \frac{46 - 3x_1}{8}$$
$$10x_1 - 7 \cdot \frac{46 - 3x_1}{8} = -15$$

Therefore, $x_1 = 2$ and $x_2 = 5$.
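For readers who want to experiment, the same system can also be solved numerically in a few lines. The sketch below uses Python's NumPy library (the book itself references only MATLAB, so this is an illustrative translation rather than the authors' code):

```python
import numpy as np

# Coefficients and right-hand side of the system
#   3*x1 + 8*x2 =  46
#  10*x1 - 7*x2 = -15
A = np.array([[3.0, 8.0],
              [10.0, -7.0]])
b = np.array([46.0, -15.0])

# Solve A x = b directly; np.linalg.solve avoids forming A's inverse explicitly
x = np.linalg.solve(A, b)
print(x)  # [2. 5.] -> x1 = 2, x2 = 5, matching the substitution method above
```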

It is convenient to introduce new array notation that allows us to treat systems of equations similarly to a single equation. Suppose we put together the coefficients in front of the variables $x_1$ and $x_2$ into a $2 \times 2$ array A, the constants on the right-hand side of the two equations into a $2 \times 1$ array b, and the variables themselves into a $2 \times 1$ array x. (Note that in stating dimensions, the first index counts the number of rows in the array, and the second index counts the number of columns.) We have

$$A = \begin{bmatrix} 3 & 8 \\ 10 & -7 \end{bmatrix}, \quad b = \begin{bmatrix} 46 \\ -15 \end{bmatrix}, \quad x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$$

We would need to be careful in defining rules for array algebra so that, similarly to the case of solving a single equation, we can express an array of variables through the arrays for the inputs to the system of equations. Namely, we want to be able to write the system of equations as

$$A \cdot x = b$$

and express the solution to the system of equations as

$$x = A^{-1} \cdot b$$

This would substantially simplify the notation when dealing with arrays of data.

A.2 VECTORS AND MATRICES

Vectors and matrices are the terms used to describe arrays of data like the arrays A, b, and x in the previous section. Matrices can be arrays of any dimensions, for example, $N \times M$. The array A in the previous section was a matrix array. Vectors are matrices that have only one row or column, and are typically written as column arrays of dimensions $N \times 1$. You can imagine them as a listing of the coordinates of a point in N-dimensional space. The b and x arrays in the previous section were vector arrays.

When an array consists of a single number, that is, when it is of dimension $1 \times 1$, it is referred to as a scalar.

Typically, vectors and matrices are denoted by bold letters in order to differentiate arrays from single elements. Vectors are usually denoted by bold small letters, while matrices are denoted by bold capital letters. An individual element of a vector or a matrix array is represented by a small nonbold letter that corresponds to the letter used to denote the array, followed by its row-and-column index in the array. The element in the ith row and the jth column of the matrix array A, for example, is denoted $a_{ij}$. For the matrix A in Section A.1, the element in the first row and second column is $a_{12} = 8$.

Some important matrix arrays include the null matrix, 0, whose elements are all zeros, and the identity matrix, usually denoted I, which contains 1s on its left-to-right diagonal and zeros everywhere else. It is referred to as the identity matrix because every other matrix multiplied by a matrix I of the appropriate dimensions equals itself. We will introduce matrix multiplication in the next section.

Geometrically, vectors are represented as directed line segments, or arrows. For example, the vector $[2\ 5]'$ can be thought of as the directed line segment connecting the origin (the point (0,0) in space) to the point with coordinates (2,5). Vectors with more than two entries are directed line segments in more than two dimensions. The length of a vector (also referred to as the norm or the magnitude of a vector) can be calculated simply by computing the Euclidean distance between its initial and end points. In this example, the length of the vector $[2\ 5]'$ would be

$$\sqrt{2^2 + 5^2} \approx 5.39$$

A matrix can be thought of as an operator: it allows operations such as rescaling and rotations to be performed on a vector.

A.3 MATRIX ALGEBRA

Matrix algebra works differently from classical algebra, but after a little bit of getting used to, the definitions of array operations are logical. We list some common operations below.

Matrix equality. Two matrices are equal only if their dimensions are the same and they have the same elements. Thus, for example,

$$\begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} \neq \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$$
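The objects introduced so far (element indexing, the identity and null matrices, vector length, and matrix equality) can all be checked in a few lines of NumPy. This is a minimal sketch for illustration, not code from the book:

```python
import numpy as np

A = np.array([[3.0, 8.0],
              [10.0, -7.0]])
print(A[0, 1])                      # 8.0 -> element a_12 (NumPy counts from 0)

I = np.eye(2)                       # the 2x2 identity matrix
print(np.allclose(A @ I, A))        # True: multiplying by I returns A itself
print(np.zeros((2, 2)))             # the 2x2 null matrix

v = np.array([2.0, 5.0])
print(np.linalg.norm(v))            # 5.385... = sqrt(2**2 + 5**2)

# Matrix equality requires equal dimensions and equal elements:
print(np.array_equal(np.zeros((3, 3)), np.zeros((2, 2))))  # False
```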

Transpose. The transpose of an $N \times M$ matrix with elements that are real numbers is an $M \times N$ matrix whose elements are the same as the elements of the original matrix, but are swapped around the left-to-right diagonal. The transpose of a matrix A is denoted $A^T$ or $A'$. For example, the transpose of the matrix A in Section A.1 is

$$A' = \begin{bmatrix} 3 & 10 \\ 8 & -7 \end{bmatrix}$$

The transpose of a vector makes a column array a row array, and vice versa. For example,

$$b' = \begin{bmatrix} 46 \\ -15 \end{bmatrix}' = \begin{bmatrix} 46 & -15 \end{bmatrix}$$

Multiplication by a scalar. When a matrix is multiplied by a scalar (a single number), the resulting matrix is simply a matrix whose elements are all multiplied by that number. For example,

$$5 \cdot A = 5 \cdot \begin{bmatrix} 3 & 8 \\ 10 & -7 \end{bmatrix} = \begin{bmatrix} 15 & 40 \\ 50 & -35 \end{bmatrix}$$

The notation $-A$ means $(-1) \cdot A$, that is, a matrix whose elements are the negatives of the elements of the matrix A.

Sum of matrix arrays. When two matrices are added, we simply add the corresponding elements. Note that this implies that the matrix arrays that are added have the same row and column dimensions. For example, the sum of two $2 \times 3$ matrices will be a $2 \times 3$ matrix as well:

$$\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix} + \begin{bmatrix} 7 & 8 & 9 \\ 10 & 11 & 12 \end{bmatrix} = \begin{bmatrix} 8 & 10 & 12 \\ 14 & 16 & 18 \end{bmatrix}$$
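These element-level definitions map one-to-one onto array operations in numerical software. The short NumPy sketch below (again an illustration, not code from the book) reproduces the three examples above:

```python
import numpy as np

A = np.array([[3, 8],
              [10, -7]])
print(A.T)       # [[ 3 10]
                 #  [ 8 -7]]    the transpose A'
print(5 * A)     # [[ 15  40]
                 #  [ 50 -35]]  multiplication by the scalar 5

B = np.array([[1, 2, 3], [4, 5, 6]])
C = np.array([[7, 8, 9], [10, 11, 12]])
print(B + C)     # [[ 8 10 12]
                 #  [14 16 18]] element-by-element sum of two 2x3 matrices
```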

Multiplication of matrix arrays. Matrix multiplication is perhaps the most confusing array operation to those who do not have a background in linear algebra. Let us consider again the example in Section A.1. We found that the values for the variables $x_1$ and $x_2$ that satisfy the system of equations are $x_1 = 2$ and $x_2 = 5$. Therefore, the vector of values for the variables is

$$x = \begin{bmatrix} 2 \\ 5 \end{bmatrix}$$

Recall also that

$$A = \begin{bmatrix} 3 & 8 \\ 10 & -7 \end{bmatrix}, \quad b = \begin{bmatrix} 46 \\ -15 \end{bmatrix}$$

and that we need $A \cdot x = b$ to be true for the system of equations if our matrix algebra is to work in a useful way. Let us compute the array product $A \cdot x$. Note that we cannot simply multiply A and x element-by-element, because A is of dimension $2 \times 2$ and x is of dimension $2 \times 1$: it is not clear which elements in the two arrays correspond to each other. The correct way to perform the multiplication is to multiply and add together the corresponding elements in the first row of A and the elements of x, and then the corresponding elements in the second row of A and the elements of x:

$$A \cdot x = \begin{bmatrix} 3 & 8 \\ 10 & -7 \end{bmatrix} \cdot \begin{bmatrix} 2 \\ 5 \end{bmatrix} = \begin{bmatrix} 3 \cdot 2 + 8 \cdot 5 \\ 10 \cdot 2 - 7 \cdot 5 \end{bmatrix} = \begin{bmatrix} 46 \\ -15 \end{bmatrix} = b$$

In general, suppose that we want to multiply two matrices, P of dimensions $N \times M$ and Q of dimensions $M \times T$. We have

$$P \cdot Q = \underbrace{\begin{bmatrix} p_{11} & \cdots & p_{1M} \\ \vdots & & \vdots \\ p_{N1} & \cdots & p_{NM} \end{bmatrix}}_{N \times M} \cdot \underbrace{\begin{bmatrix} q_{11} & \cdots & q_{1T} \\ \vdots & & \vdots \\ q_{M1} & \cdots & q_{MT} \end{bmatrix}}_{M \times T} = \underbrace{\begin{bmatrix} \sum_{i=1}^{M} p_{1i} q_{i1} & \cdots & \sum_{i=1}^{M} p_{1i} q_{iT} \\ \vdots & & \vdots \\ \sum_{i=1}^{M} p_{Ni} q_{i1} & \cdots & \sum_{i=1}^{M} p_{Ni} q_{iT} \end{bmatrix}}_{N \times T}$$

In other words, the (i,j)th element of the product matrix $P \cdot Q$ is obtained by multiplying element-wise, and then adding, the elements of the ith row of the first matrix (P) and the jth column of the second matrix (Q).

Multiplications of more than two matrices can be carried out similarly, by performing a sequence of pairwise multiplications of matrices; however, note that the dimensions of the matrices in the multiplication need to agree. For example, it is not possible to multiply a matrix of dimensions $N \times M$ and a matrix of dimensions $T \times M$: the number of columns in the first matrix must equal the number of rows in the second matrix. Similarly, in order to multiply more than two matrices, the number of columns in the second matrix must equal the number of rows in the third matrix, and so on. A product of an $N \times M$, an $M \times T$, and a $T \times S$ matrix will result in a matrix of dimensions $N \times S$. Thus, matrix multiplication is not equivalent to scalar multiplication in more ways than one. For example, it is not guaranteed to be commutative; that is, in general $P \cdot Q \neq Q \cdot P$.
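The summation rule above can be spelled out directly in code. The following NumPy sketch (an illustration under the same assumptions as before) verifies that $A \cdot x = b$, implements the textbook (i,j) rule with explicit loops, and demonstrates non-commutativity:

```python
import numpy as np

A = np.array([[3.0, 8.0], [10.0, -7.0]])
x = np.array([2.0, 5.0])
print(A @ x)  # [ 46. -15.] -> equals b, as required

def matmul(P, Q):
    """(i,j) entry of P.Q = sum over k of P[i,k] * Q[k,j]."""
    N, M = P.shape
    M2, T = Q.shape
    assert M == M2, "columns of P must equal rows of Q"
    R = np.zeros((N, T))
    for i in range(N):
        for j in range(T):
            for k in range(M):
                R[i, j] += P[i, k] * Q[k, j]
    return R

P = np.arange(6.0).reshape(2, 3)    # a 2x3 matrix
Q = np.arange(12.0).reshape(3, 4)   # a 3x4 matrix
print(np.allclose(matmul(P, Q), P @ Q))  # True; the product is 2x4

# Non-commutativity: even for square matrices, P.Q and Q.P generally differ
B = np.array([[0.0, 1.0], [0.0, 0.0]])
C = np.array([[1.0, 0.0], [0.0, 2.0]])
print(np.allclose(B @ C, C @ B))  # False
```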

It is possible to perform matrix multiplication in a way that is closer to standard arithmetic operations, that is, to multiply two matrices of the same dimensions so that each element in one matrix is multiplied by its corresponding element in the second matrix. However, such direct element-wise matrix multiplication is the special case rather than the default. Element-wise matrix multiplication is referred to as the Hadamard product, and is typically denoted by $\circ$ rather than $\cdot$.

Matrix inverse. We would like to be able to find the vector x from the system of equations in Section A.1 in a way similar to the calculation of the value of the unknown variable from a single equation. In other words, we would like to be able to compute x as

$$x = A^{-1} \cdot b$$

This necessitates defining what $A^{-1}$ (pronounced "A inverse") is. The inverse of a matrix A is simply the matrix that, when multiplied by the original matrix, produces an identity matrix. In other words,

$$A \cdot A^{-1} = I$$

How to find $A^{-1}$ is not as straightforward. Software packages such as MATLAB have special commands for these operations. Intuitively, the way to find the inverse is to solve a system of equations, where the elements of the inverse matrix are the variables, and the elements of A and I are the input data. It is important to note that not all matrices have inverses. However, some kinds of matrices, such as symmetric positive definite matrices, which are typically used in financial applications, always do. (See the definition of symmetric positive definite matrices in the next section.) A square matrix that has an inverse is called a nonsingular matrix.
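A short NumPy sketch (illustrative only; the tool the book actually references is MATLAB) shows the inverse and the Hadamard product side by side:

```python
import numpy as np

A = np.array([[3.0, 8.0], [10.0, -7.0]])
b = np.array([46.0, -15.0])

A_inv = np.linalg.inv(A)
print(np.allclose(A @ A_inv, np.eye(2)))  # True: A times its inverse gives I
print(A_inv @ b)                          # [2. 5.] -> the solution x = A^{-1} b

# Hadamard (element-wise) product vs. the default matrix product
B = np.array([[1.0, 2.0], [3.0, 4.0]])
print(A * B)   # element-wise:   [[  3.  16.] [ 30. -28.]]
print(A @ B)   # matrix product: [[ 27.  38.] [-11.  -8.]]
```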

A.4 IMPORTANT DEFINITIONS

Some special matrices are widely used in financial applications. Most often, practitioners are concerned with covariance and correlation matrices. Such matrices are special in that they are symmetric and, theoretically, need to be positive definite. Additionally, when building statistical models, one often uses data transformations and decomposition techniques that are easier to understand when presented in terms of vectors and matrices. The concepts of orthogonality, eigenvalues, and eigenvectors appear in many of the modeling techniques referenced in this book. We explain what these terms mean below.

Symmetric matrix. A matrix A is symmetric if the elements below its left-to-right diagonal are mirror images of the elements above its left-to-right diagonal. A symmetric matrix is the same as its transpose; that is, $A = A'$. Covariance and correlation matrices are always symmetric.

Positive definite and positive semi-definite matrices. The main idea behind defining a positive definite matrix is to create a definition of an array that shares some of the main properties of a positive real number. Namely, the idea is that if you multiply the equivalent of a square of a vector by it, you will obtain a positive quantity. If a matrix A is positive definite, then

$$z' A z > 0$$

for any nonzero vector z of appropriate dimensions. Similarly, a positive semi-definite matrix shares some properties of a nonnegative real number. Namely, if a matrix A is positive semi-definite, then

$$z' A z \ge 0$$

for any vector z of appropriate dimensions.

Scalar (dot, inner) product. The scalar (dot, or inner) product of two vectors u and v is the expression

$$u \cdot v = \|u\| \, \|v\| \cos\theta$$

where $\theta$ is the angle between the two vectors, and $\|u\|$, $\|v\|$ are their magnitudes (lengths), $\|u\| = \sqrt{u' u}$ and $\|v\| = \sqrt{v' v}$.

Orthogonal and orthonormal vectors. Two vectors u and v are orthogonal if the angle between them is 90°; that is, they are perpendicular to each other. Another way to state this is to say that their scalar (dot) product is 0 (because $\cos 90° = 0$). The vectors are orthonormal if they are orthogonal and their lengths are each 1. Orthogonality is important in situations in which we try to show that two variables are uncorrelated or independent.

Orthogonal matrix. An orthogonal matrix A has orthonormal row and column vectors. In other words,

$$A A' = A' A = I$$

An orthogonal matrix always has an inverse, which is its transpose. (Because, by definition, if $A' A = I$, then $A'$ must be the same as A's inverse, $A^{-1}$.)
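These definitions are easy to check numerically. The NumPy sketch below (illustrative; the example vectors and matrices are assumptions of this sketch, not taken from the book) verifies the dot-product formula, an orthogonality check, a positive definiteness test, and the defining property of an orthogonal matrix:

```python
import numpy as np

# Dot product and the angle between two vectors
u = np.array([2.0, 5.0])
v = np.array([1.0, 1.0])
cos_theta = (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
print(np.degrees(np.arccos(cos_theta)))  # about 23.2 degrees

# Orthogonal vectors: the dot product is zero
print(np.array([1.0, 0.0]) @ np.array([0.0, 2.0]))  # 0.0

# Positive definiteness of a symmetric matrix: all eigenvalues positive,
# equivalently z' A z > 0 for every nonzero z
S = np.array([[2.0, 0.5],
              [0.5, 1.0]])
print(np.all(np.linalg.eigvalsh(S) > 0))  # True -> S is positive definite

# An orthogonal matrix (here, a 45-degree rotation): A A' = A' A = I
t = np.pi / 4
R = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])
print(np.allclose(R.T @ R, np.eye(2)))  # True, so R's inverse is R.T
```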

Eigenvectors and eigenvalues. An eigenvector v of a square matrix A is a vector that satisfies the following equality:

$$A v = \lambda v$$

Multiplying a vector by a matrix is a linear transformation of that vector ("stretching," rotation, "shrinking," etc.). An eigenvector is a vector that does not rotate under the transformation applied by A; it may only change its magnitude or point in the opposite direction. The value $\lambda$, called an eigenvalue, determines how much the magnitude of v changes. If $\lambda > 1$, the vector v is "stretched"; if $0 < \lambda < 1$, the vector v is "shrunk"; if $\lambda = 1$, the vector v remains unchanged; and if $\lambda < 0$, the vector v reverses direction.

If a square matrix A of dimension $N \times N$ has N linearly independent eigenvectors (but not necessarily distinct eigenvalues), then it can be represented as

$$A = V D V^{-1}$$

where D is a diagonal matrix formed from the eigenvalues of A, and the columns of V are the corresponding eigenvectors of A. This is called spectral decomposition, or eigendecomposition. Spectral decomposition can always be performed for square symmetric matrices, such as the covariance and correlation matrices most used in portfolio applications.
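As a final illustration (NumPy again, with a small example matrix that is an assumption of this sketch rather than one taken from the book), the spectral decomposition of a symmetric matrix can be computed and verified as follows:

```python
import numpy as np

# A small symmetric matrix standing in for a covariance matrix
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])

# np.linalg.eigh is designed for symmetric matrices; for a symmetric A the
# eigenvector matrix V is orthogonal, so V^{-1} = V'
eigvals, V = np.linalg.eigh(A)
D = np.diag(eigvals)

print(np.allclose(V @ D @ V.T, A))  # True: A = V D V^{-1}

# Each column of V satisfies the defining equality A v = lambda v
v, lam = V[:, 0], eigvals[0]
print(np.allclose(A @ v, lam * v))  # True
```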