Linear algebra for computational statistics
1 University of Seoul May 3, 2018
2 Vector and Matrix
3 Notation Denote the 2-dimensional data array (n \times p matrix) by X. Denote the element in the ith row and the jth column of X by x_{ij} or (X)_{ij}. Denote by X_j the jth column vector of X. Denote the ith datum (observation or record) by x_i (a column vector). Thus, X = (X_1 \; X_2 \; \cdots \; X_p) = (x_1, \dots, x_n)^\top.
4 Product of matrices Let A be an n \times p matrix and C a p \times m matrix. Then AC is an n \times m matrix, and (AC)_{ij} = \sum_{k=1}^{p} A_{ik} C_{kj}. Transpose of a matrix Transpose is an operation defined on matrices. We denote the transpose of A by A^\top. The transpose of an n \times p matrix is the p \times n matrix with (A^\top)_{ij} = A_{ji}. Trace of a matrix Trace is an operation defined on square matrices. We denote the trace of A by tr(A) = \sum_j (A)_{jj}.
5 Trace of a matrix Let A be an n \times p matrix and C a p \times n matrix. Then tr(AC) = tr(CA). proof) tr(AC) = \sum_{i=1}^{n} (AC)_{ii} = \sum_{i=1}^{n} \sum_{k=1}^{p} A_{ik} C_{ki} = \sum_{k=1}^{p} \sum_{i=1}^{n} C_{ki} A_{ik} = \sum_{k=1}^{p} (CA)_{kk} = tr(CA). (example) Let x be a p-dimensional column vector (p \times 1 matrix), and let \Sigma be a p \times p matrix. Since x^\top \Sigma x is a scalar, exp(x^\top \Sigma x) = exp(tr(x^\top \Sigma x)) = exp(tr(\Sigma x x^\top)).
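The cyclic property of the trace and the quadratic-form identity above are easy to check numerically. A minimal sketch with NumPy (the matrix shapes and random values are arbitrary, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))   # n x p
C = rng.standard_normal((3, 4))   # p x n

# tr(AC) = tr(CA), even though AC is 4x4 and CA is 3x3
lhs = np.trace(A @ C)
rhs = np.trace(C @ A)
assert np.isclose(lhs, rhs)

# the scalar x' S x equals tr(S x x')
x = rng.standard_normal((3, 1))
S = rng.standard_normal((3, 3))
quad = (x.T @ S @ x).item()
assert np.isclose(quad, np.trace(S @ x @ x.T))
```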
6 Orthogonal matrix An orthogonal matrix is a square matrix whose column vectors are orthonormal: U is orthogonal if and only if U_j^\top U_k = 0 for j \ne k and U_j^\top U_j = 1 for all j. In this case, U^\top U = U U^\top = I. That is, U^\top = U^{-1}.
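These identities can be verified on a concrete orthogonal matrix; a sketch below builds one from the QR factorization of an arbitrary random matrix (the size is illustrative):

```python
import numpy as np

# QR factorization of any invertible matrix yields an orthogonal Q
rng = np.random.default_rng(1)
U, _ = np.linalg.qr(rng.standard_normal((4, 4)))

# columns are orthonormal: U'U = UU' = I, hence U' = U^{-1}
assert np.allclose(U.T @ U, np.eye(4))
assert np.allclose(U @ U.T, np.eye(4))
assert np.allclose(U.T, np.linalg.inv(U))
```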
7 Positive definite matrix Let A be a p \times p matrix. If a^\top A a > 0 for all nonzero a \in R^p, then we call A a positive definite matrix. Nonnegative definite matrix If a^\top A a \ge 0 for all a \in R^p, then we call A a nonnegative definite matrix. Let \Sigma = E(X - \mu)^\top (X - \mu) be the covariance matrix, where \mu is the mean vector of X. For all a \in R^p, a^\top \Sigma a = E[a^\top (X - \mu)^\top (X - \mu) a] = E[((X - \mu)a)^\top ((X - \mu)a)] = E\|(X - \mu)a\|^2 \ge 0. Thus, every covariance matrix is nonnegative definite.
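Nonnegative definiteness of a covariance matrix can be seen directly on a sample covariance. A sketch with hypothetical data (the data matrix is random, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((50, 3))      # n x p data matrix
Sigma = np.cov(X, rowvar=False)       # p x p sample covariance

# a' Sigma a is a squared norm (up to 1/(n-1)), hence >= 0 for any direction a
a = rng.standard_normal(3)
assert a @ Sigma @ a >= 0

# equivalently, all eigenvalues of a covariance matrix are nonnegative
eigvals = np.linalg.eigvalsh(Sigma)
assert np.all(eigvals >= -1e-12)      # small tolerance for floating point
```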
8 Transpose and projection Figure: Illustration of projection via transpose operation
9 Inner product and projection Let the ith normalized observation be z_i = x_i - \mu \in R^p. For any a with \|a\| = 1, (z_j^\top a) a is the projection of z_j onto the space spanned by a. If z_j^\top a is zero, then z_j is orthogonal to a. Recall that (X - \mu)a = (z_1^\top a, \dots, z_n^\top a)^\top. If \|(X - \mu)a\|^2 = 0 for some a, then z_i^\top a = 0 for all i. That is, there exists a direction a such that all normalized data are orthogonal to that direction. Explain the positive definiteness of \Sigma using the above argument.
10 Determinant The determinant of A is the volume of the parallelepiped in R^p spanned by the columns of A: Figure: Geometrical meaning of determinant
11 Determinant If the column vectors of A are linearly dependent, the volume of the parallelepiped is zero, and vice versa. The following statements are equivalent. Let A be a square matrix: det(A) \ne 0; A is full rank; a_1, \dots, a_p, the column vectors of A, are linearly independent. Properties of the determinant Let A, B be p \times p matrices. det(A) = det(A^\top); det(cA) = c^p det(A); det(AB) = det(A) det(B); if A = diag(a_1, \dots, a_p), then det(A) = \prod_{i=1}^{p} a_i.
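The determinant properties listed above can all be confirmed numerically; a short sketch (random matrices and the scalar c = 2.5 are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)
p = 3
A = rng.standard_normal((p, p))
B = rng.standard_normal((p, p))
c = 2.5

assert np.isclose(np.linalg.det(A), np.linalg.det(A.T))            # det(A) = det(A')
assert np.isclose(np.linalg.det(c * A), c**p * np.linalg.det(A))   # det(cA) = c^p det(A)
assert np.isclose(np.linalg.det(A @ B),
                  np.linalg.det(A) * np.linalg.det(B))             # det(AB) = det(A)det(B)
assert np.isclose(np.linalg.det(np.diag([1.0, 2.0, 3.0])), 6.0)    # product of diagonal
```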
12 Product of matrix and vector Let A be an n \times p matrix and x = (x_1, \dots, x_p)^\top a p-dimensional column vector. Then y = Ax is an n-dimensional vector. Denote the jth column vector of A by A_j; then y = Ax = A_1 x_1 + \cdots + A_p x_p \in R^n, (1) which is a linear combination of the column vectors of A. y is an element of the column space of A, C(A) = \{\sum_{j=1}^{p} A_j x_j : x_j \in R, 1 \le j \le p\}. A is a linear filter (function) which takes a p-dimensional input and gives an n-dimensional output.
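Equation (1), Ax as a linear combination of columns, is easy to see in code; a minimal sketch with arbitrary illustrative values:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((4, 3))
x = np.array([2.0, -1.0, 0.5])

# y = Ax equals x_1 A_1 + ... + x_p A_p, a combination of the columns of A
y = A @ x
combo = sum(x[j] * A[:, j] for j in range(3))
assert np.allclose(y, combo)
```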
13 Null space of A: N(A) = \{x \in R^p : Ax = 0\}
14 Example: Generalized Inverse Matrix Consider a linear system Ax = y, where A is an n \times m matrix and y is an n-dimensional column vector. If A is invertible, then x = A^{-1} y. When, however, A is not invertible (singular, or n \ne m), how do we figure out a solution of the linear system?
15 Generalized inverse matrix Let \hat{x} = Gy, where G is an m \times n matrix. If AGA = A, then \hat{x} is a solution of the linear system. proof) Note that y \in C(A); that is, there exists z such that Az = y. Then, A\hat{x} = AGy = AGAz = Az = y, which completes the proof. G is called a generalized inverse matrix of A, and it may not be unique. But the Moore-Penrose pseudo-inverse matrix, which satisfies additional conditions, is unique (HW).
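The Moore-Penrose pseudo-inverse mentioned above is one particular generalized inverse, available as `np.linalg.pinv`. A sketch on a toy rank-deficient system (the numbers are made up so that y lies in C(A)):

```python
import numpy as np

# A singular system Ax = y with y constructed to lie in the column space of A
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])          # rank 1, not invertible
y = A @ np.array([1.0, 1.0])        # guarantees y is in C(A)

G = np.linalg.pinv(A)               # Moore-Penrose pseudo-inverse
x_hat = G @ y

# AGA = A (the generalized-inverse condition), and x_hat solves the system
assert np.allclose(A @ G @ A, A)
assert np.allclose(A @ x_hat, y)
```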
16 Eigen-decomposition Let A be a p \times p symmetric and nonnegative definite matrix. Then A = EDE^\top, where E is an orthogonal matrix, called the eigen-matrix, whose columns are eigenvectors of A, and D is a diagonal matrix whose diagonal elements are eigenvalues. Write E and D as follows: E = [e_1 : e_2 : \cdots : e_p], D = diag(\lambda_1, \dots, \lambda_p), where \lambda_j \in R is an eigenvalue and e_j \in R^p is the eigenvector corresponding to \lambda_j. Note that A = \sum_{j=1}^{p} \lambda_j e_j e_j^\top.
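Both forms of the decomposition, A = EDE^\top and the rank-one expansion \sum_j \lambda_j e_j e_j^\top, can be checked with `np.linalg.eigh` (the matrix B'B below is an arbitrary way to construct a symmetric nonnegative definite example):

```python
import numpy as np

rng = np.random.default_rng(5)
B = rng.standard_normal((5, 3))
A = B.T @ B                          # symmetric, nonnegative definite

lam, E = np.linalg.eigh(A)           # eigenvalues ascending; columns of E are e_j
assert np.allclose(E @ np.diag(lam) @ E.T, A)

# rank-one expansion: A = sum_j lambda_j e_j e_j'
expansion = sum(lam[j] * np.outer(E[:, j], E[:, j]) for j in range(3))
assert np.allclose(expansion, A)
```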
17 For an arbitrary x \in R^p, Ax = [e_1 : e_2 : \cdots : e_p] diag(\lambda_1, \dots, \lambda_p) (e_1^\top x, \dots, e_p^\top x)^\top = [e_1 : e_2 : \cdots : e_p] (\lambda_1 e_1^\top x, \dots, \lambda_p e_p^\top x)^\top = \sum_{j=1}^{p} e_j (\lambda_j e_j^\top x) = (\sum_{j=1}^{p} \lambda_j e_j e_j^\top) x, by (1), which implies A = \sum_{j=1}^{p} \lambda_j e_j e_j^\top.
18 Let e be a unit vector in R^p. Then e(e^\top x) = (e^\top x)e, with e^\top x \in R, is the projection of x onto C(e). Ax = \sum_{j=1}^{p} \lambda_j e_j (e_j^\top x) = \sum_{j=1}^{p} \lambda_j (e_j^\top x) e_j. The eigen-decomposition thus gives an interpretation of the linear map A: Ax is a linear combination of the projections of x onto each eigen-space, scaled by the eigenvalues.
19 Inverse matrix of positive definite matrix Let A be a symmetric and nonnegative definite matrix. Then the minimum eigenvalue is positive if and only if A is positive definite. pf) Let \lambda_min be the minimum eigenvalue of A. Assume that \lambda_min > 0. Let x = \sum_{j=1}^{p} a_j e_j \ne 0; then x^\top A x = \sum_{j=1}^{p} \lambda_j (e_j^\top x)^2 = \sum_{j=1}^{p} \lambda_j a_j^2 > 0. Conversely, assume that A is a positive definite matrix. WLOG, let \lambda_p be the minimum eigenvalue of A. Then, e_p^\top A e_p = \sum_{j=1}^{p} \lambda_j (e_j^\top e_p)^2 = \lambda_p > 0.
20 Inverse matrix of positive definite matrix The inverse matrix of such A is given by A^{-1} = E D^{-1} E^\top. pf) E D^{-1} E^\top A = E D^{-1} (E^\top E) D E^\top = E D^{-1} D E^\top = I and A E D^{-1} E^\top = E D (E^\top E) D^{-1} E^\top = E D D^{-1} E^\top = I. By the definition of the inverse matrix, we obtain the result.
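The formula A^{-1} = E D^{-1} E^\top can be confirmed numerically; the sketch below builds a positive definite matrix as B'B + I (an arbitrary construction guaranteeing positive eigenvalues):

```python
import numpy as np

rng = np.random.default_rng(6)
B = rng.standard_normal((4, 4))
A = B.T @ B + np.eye(4)             # the identity shift keeps eigenvalues > 0

lam, E = np.linalg.eigh(A)
assert lam.min() > 0                # positive definite <=> min eigenvalue > 0

A_inv = E @ np.diag(1.0 / lam) @ E.T
assert np.allclose(A_inv @ A, np.eye(4))
assert np.allclose(A_inv, np.linalg.inv(A))
```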
21 LU decomposition
22 Cholesky decomposition
23 QR decomposition Let A be an n \times n square matrix. Then, A = QR, where Q is an orthogonal matrix and R is an upper triangular matrix. application Assume that A is invertible and consider a linear system Ax = b. Then, the system is written as Rx = Q^\top b, and the solution x is easily obtained by iterative computation (back substitution) from x_n to x_1.
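The application above can be sketched directly: factor A, form Q'b, and back-substitute through the triangular system from x_n up to x_1 (the matrix and right-hand side are arbitrary illustrative values):

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((4, 4))
b = rng.standard_normal(4)

Q, R = np.linalg.qr(A)
c = Q.T @ b                          # the system becomes R x = Q'b

# back substitution: R is upper triangular, so solve x_n first, then x_{n-1}, ...
n = len(b)
x = np.zeros(n)
for i in range(n - 1, -1, -1):
    x[i] = (c[i] - R[i, i + 1:] @ x[i + 1:]) / R[i, i]

assert np.allclose(A @ x, b)
```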
24 Singular Value decomposition Let A be an n \times p real-valued matrix. Then, A = UDV^\top, where U is an n \times n orthogonal matrix, V is a p \times p orthogonal matrix, and D is an n \times p diagonal matrix with (D)_{ij} = 0 for i \ne j and (D)_{ii} \ge 0 for 1 \le i \le min(n, p). AA^\top = U \tilde{D} U^\top (\tilde{D} = DD^\top is a diagonal matrix); A^\top A = V \tilde{D}' V^\top (\tilde{D}' = D^\top D is a diagonal matrix). Thus, U and V are the eigen-matrices of AA^\top and A^\top A, respectively.
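A quick numerical check of the decomposition and of the link to the eigenvalues of A^\top A (the 5 \times 3 shape is an arbitrary illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(8)
A = rng.standard_normal((5, 3))

U, s, Vt = np.linalg.svd(A)          # full SVD: U is 5x5, Vt is 3x3, s holds D_ii
D = np.zeros((5, 3))
D[:3, :3] = np.diag(s)
assert np.allclose(U @ D @ Vt, A)

# the squared singular values are the eigenvalues of A'A
eig_AtA = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]   # sort descending to match s
assert np.allclose(eig_AtA, s**2)
```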
25 The singular value decomposition gives an insight for understanding a linear map. Let A be an n \times p (n > p) matrix, a linear map from R^p to R^n. Ax = UDV^\top x = U (D_{11} v_1^\top x, \dots, D_{pp} v_p^\top x, 0_{n-p})^\top. A linear map A is interpreted as the composition of a projection map onto C(V), a scaling map, and a rotation map (a scale-invariant map).
26 Example: signal recovery Assume that the D_{ii} are sorted in descending order. Let C = \{i : D_{ii} < \epsilon\}, j_0 = min C, and let D' be the n \times p diagonal matrix with D'_{ii} = D_{ii} if i \notin C and 0 otherwise. Define A' = UD'V^\top; then A' is an approximation of A in the sense that Ax \approx A'x.
27 \|Ax - A'x\| = \|U_{j_0} D_{j_0 j_0} v_{j_0}^\top x + \cdots + U_p D_{pp} v_p^\top x\|, where j_0 = min C. This is \le \sum_{j=j_0}^{p} \epsilon \|U_j v_j^\top x\| (triangle inequality) = \epsilon \sum_{j=j_0}^{p} |v_j^\top x| \|U_j\| (norm properties) \le \epsilon \sum_{j=j_0}^{p} \|x\| \|v_j\| \|U_j\| (Cauchy inequality) = \epsilon (p - j_0 + 1) \|x\| (orthonormal vectors)
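The truncation step above can be sketched numerically: zero out the small singular values and check that the resulting error is controlled by the threshold (the matrix and the choice of \epsilon as the second singular value are arbitrary, for illustration):

```python
import numpy as np

rng = np.random.default_rng(9)
A = rng.standard_normal((6, 4))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

eps = s[1]                           # keep singular values >= eps, drop the rest
s_trunc = np.where(s >= eps, s, 0.0)
A_trunc = U @ np.diag(s_trunc) @ Vt  # the approximation A' from the slide

# the spectral norm of A - A' is the largest discarded singular value, so < eps
err = np.linalg.norm(A - A_trunc, 2)
assert err < eps

# hence ||Ax - A'x|| <= err * ||x|| for every x
x = rng.standard_normal(4)
assert np.linalg.norm(A @ x - A_trunc @ x) <= err * np.linalg.norm(x) + 1e-12
```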
28 Example: spectral norm Consider the problem of obtaining y = max_{\|x\|=1} \|Ax\|. Let \lambda_j be the eigenvalues of A^\top A and e_j the corresponding eigenvectors. Let x = \sum_{j=1}^{p} a_j e_j with \sum_{j=1}^{p} a_j^2 = 1; then \|Ax\|^2 = x^\top A^\top A x = x^\top (\sum_{j=1}^{p} \lambda_j e_j e_j^\top) x = \sum_{j=1}^{p} \lambda_j (e_j^\top x)^2 = \sum_{j=1}^{p} \lambda_j a_j^2 \le (max_j \lambda_j) \sum_{j=1}^{p} a_j^2 = max_j \lambda_j. Without loss of generality, let max_j \lambda_j = \lambda_1; then the equality holds for x = e_1, which implies y = \sqrt{max_j \lambda_j} (called the spectral norm of A).
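The argument above says the spectral norm is the square root of the largest eigenvalue of A^\top A, attained at the corresponding eigenvector; a short sketch (the matrix is an arbitrary example):

```python
import numpy as np

rng = np.random.default_rng(10)
A = rng.standard_normal((4, 3))

lam = np.linalg.eigvalsh(A.T @ A)          # eigenvalues of A'A, ascending order
spectral = np.sqrt(lam.max())              # max_{||x||=1} ||Ax||

# agrees with NumPy's matrix 2-norm, and the max is attained at the top eigenvector
assert np.isclose(spectral, np.linalg.norm(A, 2))
_, E = np.linalg.eigh(A.T @ A)
e_top = E[:, -1]                           # eigenvector of the largest eigenvalue
assert np.isclose(np.linalg.norm(A @ e_top), spectral)
```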
29 Example: signal recovery II (A - A')^\top (A - A') = (U(D - D')V^\top)^\top (U(D - D')V^\top) = V(D - D')^\top (D - D') V^\top, which means that the largest eigenvalue of (A - A')^\top (A - A') is less than \epsilon^2. Hence, the spectral norm of A - A' is less than \epsilon, and we conclude that \|(A - A')x\| \le \epsilon \|x\|. Keeping in mind large p (the high-dimensional case), compare the result with the previous example (HW).
30 Example: singular matrix Let A be a symmetric p \times p nonnegative definite matrix. Then the following statements are equivalent: A is positive definite; the minimum eigenvalue of A is positive; A is invertible; det(A) \ne 0. When det(A) = 0, we call A singular.
31 Let E be a p \times p orthogonal matrix. Then, det(E)^2 = 1 since det(E)^2 = det(E) det(E^\top) = det(EE^\top) = det(I) = 1. By the eigen-decomposition A = EDE^\top, det(A) = det(E) det(D) det(E^\top) = \prod_{j=1}^{p} \lambda_j. Note that the geometric meaning of the determinant is volume. If A is a covariance matrix, then det(A) can be regarded as the volume of A. Moreover, each eigenvalue \lambda_j plays the role of the length of an edge of a p-dimensional rectangle.
32 Linear map
33 Matrix and linear map Let V and W be vector spaces, and consider a linear map L from V to W. In particular, let V = R^p and W = R^n; then L(0) = 0, and L(ax + bx') = aL(x) + bL(x') for all x, x' \in R^p and all a, b \in R. Thus, an n \times p matrix can be regarded as a linear map. Moreover, we can consider a one-to-one correspondence between linear maps and matrices.
34 Matrix and linear map Matrix addition: let A and B be n \times p matrices, and denote the corresponding linear maps by L_A and L_B. A + B is also an n \times p matrix; let L_{A+B} be the linear map corresponding to A + B. Then, L_{A+B} = L_A + L_B. Matrix multiplication: let A and B be n \times k and k \times p matrices, and denote the corresponding linear maps by L_A and L_B. AB is an n \times p matrix; let L_{AB} be the linear map corresponding to AB. Then, L_{AB} = L_A \circ L_B.
35 Useful linear maps Identity linear map: identity matrix. Elementary operations: Let e_j \in R^p be the unit column vector whose jth element is 1, and let \pi = (\pi_1, \dots, \pi_n), where \pi_j \in \{1, \dots, p\} for j = 1, \dots, n. Then, E_\pi = (e_{\pi_1}, \dots, e_{\pi_n})^\top : R^p \to R^n is a linear map that rearranges the elements according to \pi. For example, with \pi = (3, 1, 2), E_\pi x = (x_3, x_1, x_2)^\top for x = (x_1, x_2, x_3)^\top.
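The rearrangement map can be written out as an explicit matrix; a sketch for the slide's example, taking \pi = (3, 1, 2) so that each row of E_\pi is the corresponding unit vector:

```python
import numpy as np

# E_pi for pi = (3, 1, 2): row i of E_pi is the unit vector e_{pi_i}
E_pi = np.array([[0, 0, 1],
                 [1, 0, 0],
                 [0, 1, 0]], dtype=float)
x = np.array([10.0, 20.0, 30.0])

# applying E_pi picks out (x_3, x_1, x_2)
assert np.allclose(E_pi @ x, [30.0, 10.0, 20.0])
```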
36 Useful linear maps Elementary operations: Let n = p and E_\pi = (0, \dots, 0, e_{\pi_k}, 0, \dots, 0)^\top : R^p \to R^n. What is the operation I + aE_\pi with a \in R? Suppose that E_\pi X is well defined; then what is the operational meaning of E_\pi? Suppose that XE_\pi is well defined; then what is the operational meaning of E_\pi?
37 Eigen Decomposition Let A be a p \times p symmetric matrix; then A = \sum_{j=1}^{p} \lambda_j e_j e_j^\top, where e_j is an eigenvector of A. This eigen decomposition can be viewed as a decomposition of the linear map: L_A = \sum_{j=1}^{p} \lambda_j L_{E_j}, where E_j = e_j e_j^\top.
38 Linear combination Linear combination of column vectors: Let v = (v_1, \dots, v_p)^\top be a column vector and A an n \times p matrix. Denote by A_j the jth column of A; then Av = \sum_{j=1}^{p} v_j A_j. Linear combination of row vectors: Let u = (u_1, \dots, u_n) be a row vector and denote by A_i the ith row vector. Then, uA = \sum_{i=1}^{n} u_i A_i.
39 Let a = (a_1, \dots, a_n)^\top \in R^n and x = (x_1, \dots, x_n)^\top \in R^n. The figure below shows how to display \{x \in R^n : a^\top x = b\} on a graph. First consider the set \{x \in R^n : a^\top x = 0\}. Unless a = 0, we can choose x_0 \in R^n satisfying a^\top x_0 = b. Similarly, a can be regarded as a vector perpendicular to the line.
40 Separating hyperplane theorem Let C and D be convex sets in R^n with C \cap D = \emptyset. Then there exist a \in R^n, a \ne 0, and b \in R such that a^\top x \le b for all x \in C and a^\top x \ge b for all x \in D.
41 Let A be an m \times n matrix, x = (x_1, \dots, x_n)^\top \in R^n, and denote the ith column vector by a_i \in R^m. (Q1) Let Ax = b \in R^m. Prove that b is a linear combination of a_1, \dots, a_n. (Q2) Consider a_1, \dots, a_n as points in R^m. Then, explain the geometric meaning of A^\top y \le 0 for y \in R^m based on the separating hyperplane theorem.
42 Cone: A set K \subseteq R^n is a cone if x \in K implies \alpha x \in K for any \alpha \ge 0. Conic hull: the conic hull of a set S is the set of all conic combinations of the points in S, Cone(S) = \{\sum_{i=1}^{n} \alpha_i x_i : \alpha_i \ge 0, x_i \in S, n \ge 1\}.
43 (Farkas lemma) Let A \in R^{m \times n} and b \in R^m. Then exactly one of the following sets must be empty: (i) \{x : Ax = b, x \ge 0\} (ii) \{y : A^\top y \le 0, b^\top y > 0\}
44 Illustration of Farkas lemma Figure: a_j is a column vector of A, and the yellow region is the conic hull of the column vectors.
More informationThird Midterm Exam Name: Practice Problems November 11, Find a basis for the subspace spanned by the following vectors.
Math 7 Treibergs Third Midterm Exam Name: Practice Problems November, Find a basis for the subspace spanned by the following vectors,,, We put the vectors in as columns Then row reduce and choose the pivot
More informationII. Determinant Functions
Supplemental Materials for EE203001 Students II Determinant Functions Chung-Chin Lu Department of Electrical Engineering National Tsing Hua University May 22, 2003 1 Three Axioms for a Determinant Function
More informationThere are six more problems on the next two pages
Math 435 bg & bu: Topics in linear algebra Summer 25 Final exam Wed., 8/3/5. Justify all your work to receive full credit. Name:. Let A 3 2 5 Find a permutation matrix P, a lower triangular matrix L with
More informationOrthonormal Transformations
Orthonormal Transformations Tom Lyche Centre of Mathematics for Applications, Department of Informatics, University of Oslo October 25, 2010 Applications of transformation Q : R m R m, with Q T Q = I 1.
More informationProblem Set (T) If A is an m n matrix, B is an n p matrix and D is a p s matrix, then show
MTH 0: Linear Algebra Department of Mathematics and Statistics Indian Institute of Technology - Kanpur Problem Set Problems marked (T) are for discussions in Tutorial sessions (T) If A is an m n matrix,
More information7. Symmetric Matrices and Quadratic Forms
Linear Algebra 7. Symmetric Matrices and Quadratic Forms CSIE NCU 1 7. Symmetric Matrices and Quadratic Forms 7.1 Diagonalization of symmetric matrices 2 7.2 Quadratic forms.. 9 7.4 The singular value
More informationAN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES
AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES JOEL A. TROPP Abstract. We present an elementary proof that the spectral radius of a matrix A may be obtained using the formula ρ(a) lim
More informationLecture notes on Quantum Computing. Chapter 1 Mathematical Background
Lecture notes on Quantum Computing Chapter 1 Mathematical Background Vector states of a quantum system with n physical states are represented by unique vectors in C n, the set of n 1 column vectors 1 For
More information1 Last time: least-squares problems
MATH Linear algebra (Fall 07) Lecture Last time: least-squares problems Definition. If A is an m n matrix and b R m, then a least-squares solution to the linear system Ax = b is a vector x R n such that
More informationAMS526: Numerical Analysis I (Numerical Linear Algebra)
AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 16: Eigenvalue Problems; Similarity Transformations Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical Analysis I 1 / 18 Eigenvalue
More informationChapter 1. Matrix Algebra
ST4233, Linear Models, Semester 1 2008-2009 Chapter 1. Matrix Algebra 1 Matrix and vector notation Definition 1.1 A matrix is a rectangular or square array of numbers of variables. We use uppercase boldface
More informationChapter 1: Systems of linear equations and matrices. Section 1.1: Introduction to systems of linear equations
Chapter 1: Systems of linear equations and matrices Section 1.1: Introduction to systems of linear equations Definition: A linear equation in n variables can be expressed in the form a 1 x 1 + a 2 x 2
More informationMath 407: Linear Optimization
Math 407: Linear Optimization Lecture 16: The Linear Least Squares Problem II Math Dept, University of Washington February 28, 2018 Lecture 16: The Linear Least Squares Problem II (Math Dept, University
More informationMathematical foundations - linear algebra
Mathematical foundations - linear algebra Andrea Passerini passerini@disi.unitn.it Machine Learning Vector space Definition (over reals) A set X is called a vector space over IR if addition and scalar
More informationNumerical Linear Algebra
Numerical Linear Algebra Numerous alications in statistics, articularly in the fitting of linear models. Notation and conventions: Elements of a matrix A are denoted by a ij, where i indexes the rows and
More informationChapter 4 - MATRIX ALGEBRA. ... a 2j... a 2n. a i1 a i2... a ij... a in
Chapter 4 - MATRIX ALGEBRA 4.1. Matrix Operations A a 11 a 12... a 1j... a 1n a 21. a 22.... a 2j... a 2n. a i1 a i2... a ij... a in... a m1 a m2... a mj... a mn The entry in the ith row and the jth column
More informationMathematical Foundations of Applied Statistics: Matrix Algebra
Mathematical Foundations of Applied Statistics: Matrix Algebra Steffen Unkel Department of Medical Statistics University Medical Center Göttingen, Germany Winter term 2018/19 1/105 Literature Seber, G.
More informationLinear Algebra Review
January 29, 2013 Table of contents Metrics Metric Given a space X, then d : X X R + 0 and z in X if: d(x, y) = 0 is equivalent to x = y d(x, y) = d(y, x) d(x, y) d(x, z) + d(z, y) is a metric is for all
More informationj=1 x j p, if 1 p <, x i ξ : x i < ξ} 0 as p.
LINEAR ALGEBRA Fall 203 The final exam Almost all of the problems solved Exercise Let (V, ) be a normed vector space. Prove x y x y for all x, y V. Everybody knows how to do this! Exercise 2 If V is a
More informationMATRICES ARE SIMILAR TO TRIANGULAR MATRICES
MATRICES ARE SIMILAR TO TRIANGULAR MATRICES 1 Complex matrices Recall that the complex numbers are given by a + ib where a and b are real and i is the imaginary unity, ie, i 2 = 1 In what we describe below,
More informationSUMMARY OF MATH 1600
SUMMARY OF MATH 1600 Note: The following list is intended as a study guide for the final exam. It is a continuation of the study guide for the midterm. It does not claim to be a comprehensive list. You
More information