Numerical Linear Algebra


1 Numerical Linear Algebra
Numerous applications in statistics, particularly in the fitting of linear models.
Notation and conventions: Elements of a matrix A are denoted by a_ij, where i indexes the rows and j the columns. A′ denotes the transpose of A. If B = A′, then b_ij = a_ji. A vector x is a column vector, and x′ is a row vector.

2 Square Matrices
The main diagonal of a square matrix A consists of the elements a_ii. Sub-diagonal elements are those below the main diagonal (a_ij with i > j). Super-diagonal elements are those a_ij with i < j.
A is symmetric if a_ij = a_ji for all i and j.
An upper triangular matrix has all sub-diagonal elements equal to 0. A lower triangular matrix has all super-diagonal elements equal to 0. A diagonal matrix has all elements equal to 0 except for those on the main diagonal. An identity matrix is a diagonal matrix I with 1's on the main diagonal.
A symmetric matrix A is positive definite if x′Ax > 0 for all x ≠ 0.

3 Matrix Operations in R
In R, a matrix is an array with two subscripts. However, there are utilities in R that apply only to matrices, so be careful about using the appropriate object type when you want to work with matrices.
> # create a matrix:
> A=cbind(c(1,2,3),c(4,5,6),c(7,8,9))
> A
     [,1] [,2] [,3]
[1,]    1    4    7
[2,]    2    5    8
[3,]    3    6    9
> is.matrix(A)
[1] TRUE
> # alternative way:
> A=matrix(c(1:9),nrow=3,ncol=3)
> A
     [,1] [,2] [,3]
[1,]    1    4    7
[2,]    2    5    8
[3,]    3    6    9

4 More Matrix Operations in R
> # get the transpose:
> t(A)
     [,1] [,2] [,3]
[1,]    1    2    3
[2,]    4    5    6
[3,]    7    8    9
> # multiply matrices:
> A%*%c(1,0,0)
     [,1]
[1,]    1
[2,]    2
[3,]    3
> c(1,0,0)%*%A
     [,1] [,2] [,3]
[1,]    1    4    7
> cbind(c(1,0,0))%*%A
Error in cbind(c(1, 0, 0)) %*% A : non-conformable arguments
> t(cbind(c(1,0,0)))%*%A
     [,1] [,2] [,3]
[1,]    1    4    7

5 More Matrix Operations in R
> # addition/subtraction
> A-matrix(2,nrow=3,ncol=3)
     [,1] [,2] [,3]
[1,]   -1    2    5
[2,]    0    3    6
[3,]    1    4    7
> # element-wise -- as opposed to matrix -- multiplication
> A*matrix(2,nrow=3,ncol=3)
     [,1] [,2] [,3]
[1,]    2    8   14
[2,]    4   10   16
[3,]    6   12   18
> A%*%matrix(2,nrow=3,ncol=3)
     [,1] [,2] [,3]
[1,]   24   24   24
[2,]   30   30   30
[3,]   36   36   36

6 More Matrix Operations in R
> # symmetric matrices and eigenvalues:
> S=cbind(c(1,2,3),c(2,1,2),c(3,2,1))
> S
     [,1] [,2] [,3]
[1,]    1    2    3
[2,]    2    1    2
[3,]    3    2    1
> eigen(S)
$values
[1]  5.701562 -0.701562 -2.000000

$vectors
          [,1]       [,2]          [,3]
[1,] 0.6059133 -0.3645131  7.071068e-01
[2,] 0.5154995  0.8568906  1.110223e-16
[3,] 0.6059133 -0.3645131 -7.071068e-01

7 Available Libraries in R
See R_ext/Linpack.h for details about the BLAS, LINPACK, and EISPACK libraries of FORTRAN subroutines. For some description of these three libraries, see Wikipedia.
From Writing R Extensions: These are expressed as calls to FORTRAN subroutines, and they will also be usable from users' FORTRAN code.

8 Solving Systems of Equations
Consider solving the system Ax = b for x, given A and b. In scalar terms, this involves solving
∑_{j=1}^p a_ij x_j = b_i, i = 1, …, p,
for x_1, …, x_p. It's generally better to compute A⁻¹b by solving the system Ax = b than to compute A⁻¹ explicitly and then multiply.

9 solve() in R
> A=cbind(c(2,1,2),c(8,3,7),c(3,2,4))
> A
     [,1] [,2] [,3]
[1,]    2    8    3
[2,]    1    3    2
[3,]    2    7    4
> b=cbind(c(2,5,8))
> # the solve() function actually uses the QR factorization:
> x=solve(A,b)
> x
     [,1]
[1,]    3
[2,]   -2
[3,]    4
> A%*%x
     [,1]
[1,]    2
[2,]    5
[3,]    8

10 solve() in R, continued
> # to obtain the inverse of A:
> Ai=solve(A)
> Ai
     [,1] [,2] [,3]
[1,]    2   11   -7
[2,]    0   -2    1
[3,]   -1   -2    2
> # getting x directly, with the inverse of A (not a good idea):
> x=Ai%*%b
> x
     [,1]
[1,]    3
[2,]   -2
[3,]    4

11 Upper- and Lower-Triangular Matrices
For an upper triangular A, the system Ax = b can be written as:
a_11 x_1 + a_12 x_2 + … + a_1p x_p = b_1
          a_22 x_2 + … + a_2p x_p = b_2
                                  …
                          a_pp x_p = b_p
This can be readily solved with backward substitution, starting with the last equation:
x_p = b_p / a_pp
x_{p-1} = (b_{p-1} − a_{p-1,p} x_p) / a_{p-1,p-1}
…
x_1 = (b_1 − a_12 x_2 − … − a_1p x_p) / a_11
There is an analogous forward substitution algorithm for lower triangular systems.
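
As a concrete illustration (a minimal sketch of my own, not from the slides; the helper name backsub is made up), backward substitution translates directly into R. Base R's backsolve(), shown on the next slide, does the same job:
backsub <- function(A, b) {
  p <- length(b)
  x <- numeric(p)
  x[p] <- b[p] / A[p, p]   # last equation first
  for (i in (p-1):1)       # then work upward
    x[i] <- (b[i] - sum(A[i, (i+1):p] * x[(i+1):p])) / A[i, i]
  x
}
A <- cbind(c(1,0,0), c(2,2,0), c(3,2,2))   # an upper triangular example
b <- c(8, 4, 2)
backsub(A, b)   # 3 1 1, matching backsolve(A, b)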

12 Triangular Matrices in R
> A
     [,1] [,2] [,3]
[1,]    1    2    3
[2,]    0    2    2
[3,]    0    0    2
> b=c(8,4,2)
> x=backsolve(A,b)
> x
[1] 3 1 1

13 Gaussian Elimination
Recall that Gaussian elimination (GE) involves augmenting the matrix A with an additional column containing b, followed by these steps:
1) We first reduce the A portion of this matrix to upper triangular form using elementary row operations.
2) We next work in reverse, starting from the last row and working our way up, reducing the A portion to an identity matrix.
What remains in the last column is the solution x. Try this for the system in the R solve() example.
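
A minimal sketch of the forward pass on the augmented matrix (my own illustration, not from the slides; no pivoting, so it assumes the pivots Ab[k,k] stay nonzero), finishing with a back-substitution rather than the full reverse pass:
gauss_solve <- function(A, b) {
  Ab <- cbind(A, b)                 # augmented matrix
  p <- nrow(Ab)
  for (k in 1:(p-1))                # reduce the A portion to upper triangular
    for (i in (k+1):p)
      Ab[i, ] <- Ab[i, ] - (Ab[i, k] / Ab[k, k]) * Ab[k, ]
  backsolve(Ab[, 1:p], Ab[, p+1])   # solve the triangular system
}
A <- cbind(c(2,1,2), c(8,3,7), c(3,2,4))
gauss_solve(A, c(2,5,8))   # 3 -2 4, as with solve()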

14 The LU Decomposition
Note that Gaussian elimination can be viewed simply as the factoring of A into the product of a lower triangular matrix L and an upper triangular matrix U.
The matrix U is the matrix left in the A portion of the augmented matrix used for GE when A is reduced to upper triangular form.
The sub-diagonal elements of the matrix L represent the multipliers used at each stage of GE. The diagonal elements of L are all 1's.

15 Computing the LU Decomposition
Explicit formulas for the elements of U and L are given by
u_ij = a_ij − ∑_{k=1}^{i−1} l_ik u_kj, i = 1, …, j;
l_ij = (1/u_jj) (a_ij − ∑_{k=1}^{j−1} l_ik u_kj), i = j+1, …, p.
Once L and U are computed, we solve Ax = LUx = b first by using a forward substitution to solve Ly = b for y, and then using a backward substitution to solve Ux = y for x.
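
These formulas translate almost directly into R. A minimal sketch (my own illustration; no pivoting, so it assumes the pivots u_jj are nonzero):
lu_decomp <- function(A) {
  p <- nrow(A)
  L <- diag(p)                # unit diagonal
  U <- matrix(0, p, p)
  for (j in 1:p) {
    for (i in 1:j)            # u_ij, i = 1, ..., j
      U[i, j] <- A[i, j] - (if (i > 1) sum(L[i, 1:(i-1)] * U[1:(i-1), j]) else 0)
    if (j < p) for (i in (j+1):p)   # l_ij, i = j+1, ..., p
      L[i, j] <- (A[i, j] - (if (j > 1) sum(L[i, 1:(j-1)] * U[1:(j-1), j]) else 0)) / U[j, j]
  }
  list(L = L, U = U)
}
A <- cbind(c(2,1,2), c(8,3,7), c(3,2,4))
f <- lu_decomp(A)
all.equal(f$L %*% f$U, A)                       # TRUE
# solve Ax = b via Ly = b, then Ux = y:
backsolve(f$U, forwardsolve(f$L, c(2,5,8)))     # 3 -2 4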

16 Advantages of the LU Approach
No additional computations are needed (beyond what's required for GE).
Solutions for any right-hand vector b can be computed without redoing the GE; b is not needed when A is factored.
LU yields other useful quantities; e.g., det(A) is the product of the diagonal elements of U, and each of the columns of A⁻¹ can be computed by taking b to be the corresponding column of the p × p identity matrix.

17 Vector Norms
Vector and matrix norms play an important role in error analysis. A norm typically measures, in some sense, the magnitude of its argument. For a real number, this is ordinarily the absolute value. For a vector x = (x_1, x_2, …, x_p)′, three common choices are
the 1-norm, or L_1, defined by ||x||_1 = ∑_{i=1}^p |x_i|,
the 2-norm, or L_2, defined by ||x||_2 = (∑_{i=1}^p x_i²)^{1/2},
and the ∞-norm, or L_∞, defined by ||x||_∞ = max_i |x_i|.
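
In R these are one-liners; a small made-up example of mine:
x <- c(3, -4, 12)
sum(abs(x))      # 1-norm: 19
sqrt(sum(x^2))   # 2-norm: 13
max(abs(x))      # infinity-norm: 12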

18 Matrix Norms
To generalize these norms to matrices, a useful (but not unique) method is to define corresponding matrix norms from the vector norms through
||A||_j = sup_{x≠0} ||Ax||_j / ||x||_j, for j = 1, 2, ∞.
This yields
||A||_1 = max_j ∑_i |a_ij|,
||A||_∞ = max_i ∑_j |a_ij| = ||A′||_1,
and a value of ||A||_2 that is equal to the largest singular value of A.
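
Base R's norm() computes all three induced norms; a minimal sketch of mine, reusing the matrix from the solve() slides:
A <- cbind(c(2,1,2), c(8,3,7), c(3,2,4))
norm(A, "1")    # maximum absolute column sum
norm(A, "I")    # maximum absolute row sum
norm(A, "2")    # largest singular value; same as max(svd(A)$d)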

19 Condition Numbers
The condition number of a square matrix A is defined to be
κ_j(A) = ||A||_j ||A⁻¹||_j,
which is computed as ∞ if A is singular.
Some remarks:
The lower bound of the condition number is 1.
This yields a useful measure of how close a matrix is to singularity.
When solving a system Ax = b, it turns out that the relative error of the computed solution is proportional to κ_j(A).
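
For the 2-norm, κ_2(A) is the ratio of the largest to the smallest singular value; base R's kappa() computes it. A small illustration of mine:
A <- cbind(c(2,1,2), c(8,3,7), c(3,2,4))
d <- svd(A)$d
max(d) / min(d)          # kappa_2(A)
kappa(A, exact = TRUE)   # same value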

20 Matrices and Linear Regression
In statistical applications, we often run into problems of the form
y_i = ∑_{j=1}^p x_ij β_j + ε_i,
where the y_i are the responses, the x_ij are the covariates, the β_j are the regression coefficients, and the ε_i represent the error terms. If the ε_i can be assumed to be independent variables with mean 0 and variance σ², then we often use the least squares estimators for the β_j.

21 The Least Squares Approach
The least squares solution is the vector β = (β_1, …, β_p)′ that minimizes
||y − Xβ||² = (y − Xβ)′(y − Xβ),
where y = (y_1, …, y_n)′, and X = (x_ij) with x_i1 = 1 for all i (if an intercept term is included).
Note that the solution β̂ = (β̂_1, …, β̂_p)′ gives the vector of fitted values ŷ = Xβ̂ that is closest (in the Euclidean norm) to the actual responses.

22 The Least Squares Solution
An obvious way of obtaining the solution is to set the gradient
∇_β ||y − Xβ||² = 0,
obtaining the normal equations
X′Xβ = X′y.
This system can be solved using the methods described previously. In particular, since X′X is positive definite for full-rank X, the Cholesky decomposition (a special case of the LU factorization for positive definite matrices) can be very efficient.
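
A minimal sketch of this route on made-up data (my own illustration; chol() returns the upper triangular factor R with X′X = R′R):
set.seed(1)
X <- cbind(1, matrix(rnorm(100*2), 100, 2))   # design with intercept column
y <- rnorm(100)
R <- chol(crossprod(X))                       # X'X = R'R
beta <- backsolve(R, forwardsolve(t(R), crossprod(X, y)))
all.equal(as.vector(beta), as.vector(coef(lm(y ~ X - 1))))   # TRUE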

23 Computational Considerations
We often want a variety of different models fit (e.g., stepwise regression), so it'd be good to have a fast method for updating the fitted model when covariates are added or dropped.
Along with the solution, we may also want other quantities such as:
Residuals
Fitted values
Regression and error sums of squares
Diagnostic measures (e.g., the diagonal elements of the projection operator X(X′X)⁻¹X′, the so-called hat matrix)

24 Other Options
With the practical considerations outlined on the previous slide, two very efficient techniques, the QR decomposition and the singular value decomposition (SVD), involve decomposition of X directly.
Advantages:
It turns out that factoring X directly is a better conditioned problem than factoring X′X.
QR or SVD allows us to add and drop covariates more easily, without much additional work.

25 Rotations and Orthogonal Matrices
A rotation in R^p is a linear transformation Q: R^p → R^p such that ||Qx|| = ||x|| for all x in R^p. A rotation does not affect the length of vectors, but changes their orientation; it can be thought of as a change in the coordinate axes, without a change in vector length.

26 Properties of a Rotation Q
From ||Qx|| = ||x|| for all x, it follows that x′Q′Qx = x′x, so that x′(Q′Q − I)x = 0 for all x. This is only true if Q′Q = I, since Q′Q − I is symmetric.
For square matrices only, Q′Q = I implies QQ′ = I. So Q′ = Q⁻¹, and Q must be of full rank. Therefore, any x in R^p can be represented by Qy for some y in R^p.
When Q′Q = I, the columns of Q are mutually orthogonal and each has unit length. For square matrices, either of these implies the other; a square matrix satisfying these properties is said to be orthogonal.
If Q is a rotation, then ||Q||_2 = 1. Since Q⁻¹ = Q′ is also a rotation, ||Q⁻¹||_2 = 1 as well, so that κ_2(Q) = 1.
If Q_1 and Q_2 are orthogonal matrices, then (Q_1 Q_2)′(Q_1 Q_2) = Q_2′Q_1′Q_1 Q_2 = Q_2′Q_2 = I, so Q_1 Q_2 is also orthogonal.
Because of these characteristics, any rotation is given by an orthogonal matrix, and vice-versa.

27 Householder Transformations
There are various ways of obtaining a rotation, such as a plane rotation (e.g., Jacobi or Givens rotations). Another family of rotations is referred to as the Householder transformations. It's of the form
H = I − (2/u′u) uu′,
where I is the identity matrix and u is any vector (of the proper length). By convention, H = I when u = 0. An important application of Householder transformations is to transform matrices to upper triangular form.

28 Householder for a Single Vector
Let x be an n-dimensional vector, and define u by
u_i = 0 for i < t; u_t = x_t + s; u_i = x_i for t < i ≤ n,
with s = sign(x_t) (∑_{j=t}^n x_j²)^{1/2}.
Then it can be shown that Hx = x − 2u(u′x)/(u′u) = x − u, so that (Hx)_i = x_i for i < t, (Hx)_i = 0 for i > t, and (Hx)_t = −s. (The sign of s is chosen so that x_t and s will have the same sign.) Thus, the last n − t components have been set to zero in the transformation Hx.
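
A minimal R sketch of this construction (the helper names house_u and apply_house are my own; note H is applied without ever being formed explicitly):
house_u <- function(x, t) {
  n <- length(x)
  s <- sign(x[t]) * sqrt(sum(x[t:n]^2))
  u <- numeric(n)
  u[t] <- x[t] + s
  if (t < n) u[(t+1):n] <- x[(t+1):n]
  u
}
apply_house <- function(u, y) y - 2 * u * sum(u * y) / sum(u * u)   # Hy
x <- c(3, 1, 4, 1, 5)
u <- house_u(x, t = 2)
round(apply_house(u, x), 10)   # 3 -sqrt(43) 0 0 0: components below position t are zeroed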

29 Householder for a Matrix
We can perform a series of such transformations on the columns of a matrix in such a way as to leave the transformed matrix in upper triangular form. The transformation described on the previous slide for x, applied to another vector y, yields
Hy = y − 2u(u′y)/(u′u),
so that the first t − 1 components of Hy are the same as those of y, and the other components are of the form y_i − f u_i, where f = 2 ∑_{j=t}^n u_j y_j / ∑_{j=t}^n u_j².

30 QR and Least Squares
Recall the problem of obtaining the least squares solution to y = Xβ. The motivation for the QR decomposition is that for any n × n orthogonal matrix Q,
||Q′y − Q′Xβ|| = ||y − Xβ||,
so that a β minimizing the former will also minimize the latter. Suppose that we can find a Q such that
Q′X = ( R )
      ( 0 )   (n × p),
where R is a p × p upper triangular matrix and 0 is an (n − p) × p matrix of zeroes.

31 QR and Least Squares, continued
Partition the Q described on the previous slide into Q = (Q_1, Q_2), with Q_1 containing the first p columns of Q and Q_2 containing the other n − p columns. Then
||Q′y − Q′Xβ||² = ||Q_1′y − Rβ||² + ||Q_2′y||²,
so that the first term is minimized (set to zero) by
β̂ = R⁻¹ Q_1′y,
which is the least squares solution.
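
In R, qr() and its helpers expose exactly these pieces; a minimal sketch of mine on made-up data:
set.seed(1)
X <- cbind(1, matrix(rnorm(100*2), 100, 2))
y <- rnorm(100)
qrX <- qr(X)
R  <- qr.R(qrX)    # p x p upper triangular
Q1 <- qr.Q(qrX)    # n x p: the first p columns of Q
beta <- backsolve(R, crossprod(Q1, y))   # solve R beta = Q1'y
all.equal(as.vector(beta), as.vector(coef(lm(y ~ X - 1))))   # TRUE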

32 Obtaining Q for Least Squares
We can obtain the transformation Q for X using the product of Householder transformations. For example, if X_j represents the jth column of X, then one way of finding Q requires these steps:
Let H_1 be the Householder transformation described previously with x = X_1 and t = 1. Let X^(1)_j be the jth column of H_1 X. Then X^(1)_1 has all elements except the first equal to 0.
Next, let H_2 be the Householder transformation with x = X^(1)_2 and t = 2, and let X^(2)_j be the jth column of H_2 H_1 X. Then X^(2)_2 has all elements except possibly the first two equal to 0. Also, X^(2)_1 = X^(1)_1; that is, H_2 did not change the first column, so now the first two columns of H_2 H_1 X are in upper triangular form.
Continuing, at the kth stage (k = 3, …, p), let H_k be the Householder transformation with x = X^(k−1)_k and t = k, and let X^(k)_j be the jth column of H_k ⋯ H_1 X. Then X^(k)_j = X^(k−1)_j for j < k, and the first k columns of the resulting matrix are in upper triangular form.
After the pth step, the matrix H_p ⋯ H_1 X has the form of Q′X defined two slides previous.

33 Least Squares Quantities and QR
To obtain the least squares estimates, we need Q_1′y, which can be computed by applying the Householder transformations to y either during or after they are computed for X. Then solve the upper triangular system Rβ = Q_1′y. (Note that once we've computed Q for X, we can apply it using different y's.)
The error variance estimate is given by
σ̂² = ||y − Xβ̂||² / (n − p) = ||Q′y − Q′Xβ̂||² / (n − p) = ||Q_2′y||² / (n − p).
Recall that the diagonal elements of the hat matrix H = X(X′X)⁻¹X′ are called the leverage values; they provide a diagnostic for identifying influential observations, or observations that have a relatively large effect on the estimates of the regression coefficients. Note that the ith diagonal element of H is given by h_ii = x_i′(X′X)⁻¹x_i, where x_i is the covariate vector of the ith observation.
Since X = Q_1 R, then X′X = R′Q_1′Q_1 R = R′R (note that Q_1′Q_1 = I_{p×p}, but Q_1 Q_1′ is not an identity matrix). Hence h_ii = x_i′(X′X)⁻¹x_i = x_i′(R′R)⁻¹x_i = ||(R′)⁻¹x_i||².
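
Since H = Q_1 Q_1′, the leverages are just the squared row norms of Q_1; a quick check in R (my own illustration, using stats::hat() for comparison):
set.seed(1)
X <- cbind(1, matrix(rnorm(20*2), 20, 2))
Q1 <- qr.Q(qr(X))
all.equal(rowSums(Q1^2), hat(X, intercept = FALSE))   # TRUE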

34 Singular Value Decomposition (SVD)
This is regarded as the most stable means of solving linear systems. The SVD has the form
X = UDV′,
where X is an n × p matrix with n > p, U (n × p) has orthonormal columns, D (p × p) is diagonal with d_ii ≥ 0, and V (p × p) is orthogonal. The d_ii are called the singular values of X. Assume that d_11 ≥ d_22 ≥ … ≥ d_pp.
(Note: this isn't the only form of the SVD. Another involves an orthogonal U (n × n) and a D (n × p) in which n − p rows of zeroes are appended to the D defined above.)

35 Some Properties of the SVD
Since the columns of U are orthonormal, U′U = I_{p×p} (although UU′ ≠ I_{n×n} unless n = p).
Since X′X = VDU′UDV′ = VD²V′, it follows that the columns of V are eigenvectors of X′X and that the d_ii² are the corresponding eigenvalues.
If X is a square p × p nonsingular matrix, then both U and V are orthogonal matrices, and X⁻¹ = (V′)⁻¹D⁻¹U⁻¹ = VD⁻¹U′. So once the SVD is computed, inverting the matrix X really only requires inverting a diagonal matrix.
For a general n × p matrix X with SVD UDV′, rank(X) = the number of nonzero d_ii.
A generalized inverse of X is any matrix G satisfying XGX = X. Let D⁺ be the diagonal matrix with elements d_ii⁺ = 1/d_ii if d_ii > 0, and d_ii⁺ = 0 if d_ii = 0. Then a particular generalized inverse for X is given by
X⁺ = VD⁺U′.
This particular inverse is called the Moore-Penrose generalized inverse.
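
A minimal sketch of the Moore-Penrose inverse via svd(), on a deliberately rank-deficient made-up matrix (my own illustration):
X <- cbind(c(1,2,3,4), c(2,4,6,8), c(1,0,1,0))   # column 2 = 2 * column 1, so rank(X) = 2
s <- svd(X)
dplus <- ifelse(s$d > 1e-10, 1/s$d, 0)   # invert only the nonzero singular values
Xplus <- s$v %*% diag(dplus) %*% t(s$u)
all.equal(X %*% Xplus %*% X, X)          # XGX = X, so Xplus is a generalized inverse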

36 Computing the SVD
Computing the SVD is somewhat complicated. It involves finding orthogonal matrices U_e and V such that the upper p × p block of U_e′XV is a diagonal matrix, with the rest of the matrix consisting of zeroes.
We first proceed, alternating between rows and columns, building Householder transformations such that U_h′XV_h = B, where B is in bidiagonal form with nonzero elements b_ii and b_{i,i+1}, i = 1, …, p. We then use an iterative algorithm to find the singular values and transformations U_b and V_b such that U_b′BV_b = D.
Details are in Numerical Recipes (either for C or Fortran) by Press et al.

37 SVD and Least Squares
For our same least squares problem, if rank(X) = p and UDV′ is the SVD of X, then X′X = VD²V′. The least squares solution then is
β̂ = (X′X)⁻¹X′y = VD⁻²V′VDU′y = VD⁻¹U′y.
Once we have the SVD, finding the least squares solution involves applying the orthogonal transformations used for U to y and inverting the diagonal matrix D, along with some additional matrix multiplication.
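
The corresponding R sketch (my own illustration), again on made-up full-rank data:
set.seed(1)
X <- cbind(1, matrix(rnorm(100*2), 100, 2))
y <- rnorm(100)
s <- svd(X)
beta <- s$v %*% (crossprod(s$u, y) / s$d)   # V D^{-1} U'y
all.equal(as.vector(beta), as.vector(coef(lm(y ~ X - 1))))   # TRUE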
