EECS 275 Matrix Computation
Transcription
1 EECS 275 Matrix Computation Ming-Hsuan Yang Electrical Engineering and Computer Science University of California at Merced Merced, CA Lecture 9 1 / 23
2 Overview
- Least squares minimization
- Regression
- Regularization
2 / 23
3 Reading
- Chapter 11 of Numerical Linear Algebra by Lloyd Trefethen and David Bau
- Chapter 5 of Matrix Computations by Gene Golub and Charles Van Loan
- Chapter 4 of Matrix Analysis and Applied Linear Algebra by Carl Meyer
- Chapter 11 and Chapter 15 of Matrix Algebra From a Statistician's Perspective by David Harville
3 / 23
4 Matrix differentiation
First order differentiation of the linear form $x^\top a = a^\top x = \sum_i a_i x_i$:
\[ \frac{\partial x_i}{\partial x_j} = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \neq j \end{cases} \qquad \frac{\partial (x^\top a)}{\partial x_j} = \frac{\partial}{\partial x_j}\Big(\sum_i a_i x_i\Big) = a_j \]
Writing the gradient as a column vector,
\[ \frac{\partial (x^\top a)}{\partial x} = \begin{bmatrix} \frac{\partial (x^\top a)}{\partial x_1} \\ \vdots \\ \frac{\partial (x^\top a)}{\partial x_n} \end{bmatrix} = a, \qquad \frac{\partial (Ax)}{\partial x} = A^\top \quad (\text{equivalently } \partial(Ax) = A\,\partial x) \]
4 / 23
5 Matrix differentiation (cont'd)
First order differentiation of the quadratic form $x^\top A x = \sum_{i,k} a_{ik} x_i x_k$:
\[ \frac{\partial (x^\top A x)}{\partial x} = (A + A^\top) x \]
Derivation:
\[ \frac{\partial (x_i x_k)}{\partial x_j} = \begin{cases} 2 x_j & \text{if } i = k = j \\ x_i & \text{if } k = j,\ i \neq j \\ x_k & \text{if } i = j,\ k \neq j \\ 0 & \text{otherwise} \end{cases} \]
so
\[ \frac{\partial (x^\top A x)}{\partial x_j} = \frac{\partial}{\partial x_j}\Big( a_{jj} x_j^2 + \sum_{i \neq j} a_{ij} x_i x_j + \sum_{k \neq j} a_{jk} x_j x_k + \sum_{i \neq j, k \neq j} a_{ik} x_i x_k \Big) = 2 a_{jj} x_j + \sum_{i \neq j} a_{ij} x_i + \sum_{k \neq j} a_{jk} x_k + 0 = \sum_i a_{ij} x_i + \sum_k a_{jk} x_k \]
5 / 23
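As a quick sanity check on the identity above, the following minimal MATLAB sketch (the random $A$, $x$ and step size are illustrative, not from the slides) compares a central finite-difference gradient of $f(x) = x^\top A x$ with $(A + A^\top)x$:

  % Numerically verify that the gradient of f(x) = x'*A*x is (A + A')*x.
  n = 4;
  A = randn(n); x = randn(n,1);
  f = @(z) z'*A*z;
  g = zeros(n,1); h = 1e-6;
  for j = 1:n
      e = zeros(n,1); e(j) = h;
      g(j) = (f(x+e) - f(x-e)) / (2*h);   % central difference for df/dx_j
  end
  norm(g - (A + A')*x)                    % should be close to zero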
6 Matrix differentiation (cont'd)
First order differentiation of the quadratic form: $\frac{\partial (x^\top A x)}{\partial x} = (A + A^\top) x$
Let $W$ be a symmetric matrix; it can easily be shown that
\[ \frac{\partial}{\partial s} (x - As)^\top W (x - As) = -2 A^\top W (x - As) \]
\[ \frac{\partial}{\partial x} (x - s)^\top W (x - s) = 2 W (x - s) \]
\[ \frac{\partial}{\partial x} (x - As)^\top W (x - As) = 2 W (x - As) \]
6 / 23
7 Matrix differentiation (cont'd)
Second order derivative of the quadratic form:
\[ \frac{\partial^2 (x^\top A x)}{\partial x_s \partial x_j} = \frac{\partial}{\partial x_s}\Big( \sum_i a_{ij} x_i + \sum_k a_{jk} x_k \Big) = a_{sj} + a_{js} \]
Recall
\[ \frac{\partial^2 (x^\top A x)}{\partial x \partial x^\top} = A + A^\top \]
Second-order Taylor expansion: $f(x) \approx f(a) + J(a)(x - a) + \frac{1}{2}(x - a)^\top H(a)(x - a)$
See The Matrix Cookbook by Kaare Petersen and Michael Pedersen for details
7 / 23
8 Overdetermined linear equations
- Consider $y = Ax$ where $A \in \mathbb{R}^{m \times n}$ is skinny, i.e., $m > n$
- One can approximately solve $y \approx Ax$, and define the residual or error $r = Ax - y$
- Find $x = x_{ls}$ that minimizes $\|r\|$
- $x_{ls}$ is the least squares solution
- Geometric interpretation: $A x_{ls}$ is the point in $\mathrm{range}(A)$ that is closest to $y$, i.e., $A x_{ls}$ is the projection of $y$ onto $\mathrm{range}(A)$
8 / 23
9 Least squares minimization
- Minimize the squared norm of the residual $r = Ax - y$:
\[ \|r\|^2 = x^\top A^\top A x - 2 y^\top A x + y^\top y \]
- Set the gradient with respect to $x$ to zero:
\[ \nabla_x \|r\|^2 = 2 A^\top A x - 2 A^\top y = 0 \quad \Rightarrow \quad A^\top A x = A^\top y \quad \text{(also known as the normal equations)} \]
- Assuming $A^\top A$ is invertible, we have
\[ x_{ls} = (A^\top A)^{-1} A^\top y, \qquad A x_{ls} = A (A^\top A)^{-1} A^\top y \]
9 / 23
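A minimal MATLAB sketch of this result (random $A$, $y$ of illustrative sizes, not from the slides): solve the normal equations and compare with the backslash operator, which solves the same least squares problem via QR and is usually preferred numerically:

  m = 50; n = 5;
  A = randn(m,n); y = randn(m,1);
  x_ne = (A'*A) \ (A'*y);   % normal equations: (A'A) x = A'y
  x_bs = A \ y;             % QR-based least squares solution
  norm(x_ne - x_bs)         % both give x_ls (up to round-off)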
10 Least squares minimization
- $y = Ax$, $x_{ls} = (A^\top A)^{-1} A^\top y$
- $x_{ls}$ is a linear function of $y$
- $x_{ls} = A^{-1} y$ if $A$ is square
- $x_{ls}$ solves $y = A x_{ls}$ if $y \in \mathrm{range}(A)$
- $A^\dagger = (A^\top A)^{-1} A^\top$ is called the pseudo-inverse or Moore-Penrose inverse
- $A^\dagger$ is a left inverse of (full rank, skinny) $A$: $A^\dagger A = (A^\top A)^{-1} A^\top A = I$
- $A (A^\top A)^{-1} A^\top$ is the projection matrix onto $\mathrm{range}(A)$
10 / 23
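A minimal MATLAB sketch of the pseudo-inverse properties listed above (sizes and data are illustrative): $(A^\top A)^{-1}A^\top$ is a left inverse of a full-rank skinny $A$ and, in this case, coincides with MATLAB's pinv:

  m = 50; n = 5;
  A = randn(m,n);
  Adag = (A'*A) \ A';       % (A'A)^{-1} A'
  norm(Adag*A - eye(n))     % left inverse: A†A = I
  norm(Adag - pinv(A))      % matches the Moore-Penrose pseudo-inverse here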
11 Orthogonality principle
- The optimal residual $r = A x_{ls} - y = (A (A^\top A)^{-1} A^\top - I) y$ is orthogonal to $\mathrm{range}(A)$:
\[ \langle r, Az \rangle = y^\top (A (A^\top A)^{-1} A^\top - I)^\top A z = 0 \quad \text{for all } z \in \mathbb{R}^n \]
- Since $r = A x_{ls} - y \perp A(x - x_{ls})$ for any $x$ (because $A(x - x_{ls}) \in \mathrm{range}(A)$), we have
\[ \|Ax - y\|^2 = \|(A x_{ls} - y) + A(x - x_{ls})\|^2 = \|A x_{ls} - y\|^2 + \|A(x - x_{ls})\|^2 \]
which means for $x \neq x_{ls}$, $\|Ax - y\| > \|A x_{ls} - y\|$
- Can be further simplified via QR decomposition
11 / 23
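The orthogonality principle is easy to check numerically; this minimal MATLAB sketch (illustrative random data) verifies that the optimal residual is orthogonal to every column of $A$:

  m = 50; n = 5;
  A = randn(m,n); y = randn(m,1);
  x_ls = A \ y;
  r = A*x_ls - y;           % optimal residual
  norm(A'*r)                % ~1e-13: r is orthogonal to range(A)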
12 Least squares minimization and orthogonal projection
- Recall that if $u \in \mathbb{R}^m$, then $P = \frac{u u^\top}{u^\top u}$ is an orthogonal projection
- Given a point $x = x_{\parallel} + x_{\perp}$ (its components along and orthogonal to $u$), its projection is $P_u x = \frac{u u^\top}{u^\top u} x_{\parallel} + \frac{u u^\top}{u^\top u} x_{\perp} = x_{\parallel}$
- Generalize to orthogonal projections onto a subspace spanned by a set of orthonormal basis vectors $A = [u_1, \ldots, u_r]$: $P_A = A A^\top$
- In general, we need a normalization term for the orthogonal projection if $u_1, \ldots, u_r$ is not an orthonormal basis: $P_A = A (A^\top A)^{-1} A^\top$
- Given $A = U \Sigma V^\top$, it follows that $P_A = U U^\top$ by least squares minimization
12 / 23
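A minimal MATLAB sketch of the last point (illustrative data): the projector built from the normal equations agrees with $U U^\top$ from a thin SVD $A = U \Sigma V^\top$:

  m = 50; n = 5;
  A = randn(m,n);
  P1 = A * ((A'*A) \ A');   % A (A'A)^{-1} A'
  [U,~,~] = svd(A,'econ');  % thin SVD; U has orthonormal columns spanning range(A)
  P2 = U*U';
  norm(P1 - P2)             % the two projectors coincide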
13 Least squares estimation
- Numerous applications in inversion, estimation and reconstruction problems have the form $y = Ax + v$
- $x$ is what we want to estimate or reconstruct
- $y$ is our sensor measurements
- $v$ is unknown noise or measurement error
- The $i$-th row of $A$ characterizes the $i$-th sensor
- Least squares estimation: choose $\hat{x}$ that minimizes $\|A\hat{x} - y\|$, i.e., the deviation between what we actually observe, $y$, and what we would observe if $x = \hat{x}$ and there were no noise ($v = 0$)
- The least squares estimate is $\hat{x} = (A^\top A)^{-1} A^\top y$
13 / 23
14 Best linear unbiased estimator (BLUE)
- Linear estimator with noise: $y = Ax + v$, where $A$ is full rank and skinny
- A linear estimator of the form $\hat{x} = By$ is unbiased if $\hat{x} = x$ whenever $v = 0$ (no estimation error when $v = 0$)
- Equivalent to $BA = I$, i.e., $B$ is a left inverse of $A$
- The estimation error of an unbiased linear estimator is $x - \hat{x} = x - B(Ax + v) = -Bv$
- It follows that $A^\dagger = (A^\top A)^{-1} A^\top$ is the smallest left inverse of $A$, in the sense that for any $B$ with $BA = I$ we have
\[ \sum_{i,j} B_{ij}^2 \geq \sum_{i,j} (A^\dagger)_{ij}^2 \]
i.e., least squares provides the best linear unbiased estimator (BLUE)
14 / 23
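The "smallest left inverse" claim can be probed numerically. This minimal MATLAB sketch (illustrative sizes; the construction of the alternative left inverse B is an assumption of the example, not from the slides) builds another left inverse by adding rows from the left null space of $A$ and compares Frobenius norms:

  m = 20; n = 4;
  A = randn(m,n);
  Adag = pinv(A);
  Q = null(A');                      % orthonormal basis of the left null space of A
  B = Adag + randn(n, m-n) * Q';     % still satisfies B*A = I
  norm(B*A - eye(n))                 % ~0: B is a left inverse
  [norm(Adag,'fro') norm(B,'fro')]   % the pseudo-inverse has the smaller norm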
15 Pseudo-inverse via regularization
- For $\mu > 0$, let $x_\mu$ be the unique minimizer of
\[ \|Ax - y\|^2 + \mu \|x\|^2 = \left\| \begin{bmatrix} A \\ \sqrt{\mu}\, I \end{bmatrix} x - \begin{bmatrix} y \\ 0 \end{bmatrix} \right\|^2 = \|\tilde{A} x - \tilde{y}\|^2 \]
thus
\[ x_\mu = (\tilde{A}^\top \tilde{A})^{-1} \tilde{A}^\top \tilde{y} = (A^\top A + \mu I)^{-1} A^\top y \]
- $x_\mu$ is called the regularized least squares solution for $Ax \approx y$
- Also called Tikhonov (Tychonov) regularization (ridge regression in statistics)
- Since $A^\top A + \mu I \succ 0$ and so is invertible, we have
\[ \lim_{\mu \to 0} x_\mu = A^\dagger y \qquad \text{and} \qquad \lim_{\mu \to 0} (A^\top A + \mu I)^{-1} A^\top = A^\dagger \]
15 / 23
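A minimal MATLAB sketch of the regularized solution (the values of $\mu$, $A$, $y$ are illustrative): the stacked least squares problem and the closed form give the same $x_\mu$, and for small $\mu$ the solution approaches $A^\dagger y$:

  m = 50; n = 5; mu = 0.1;
  A = randn(m,n); y = randn(m,1);
  Atil = [A; sqrt(mu)*eye(n)];       % stacked matrix [A; sqrt(mu) I]
  ytil = [y; zeros(n,1)];
  x_stack = Atil \ ytil;
  x_ridge = (A'*A + mu*eye(n)) \ (A'*y);
  norm(x_stack - x_ridge)            % same regularized solution
  norm((A'*A + 1e-12*eye(n)) \ (A'*y) - pinv(A)*y)   % small mu recovers A†y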
16 Minimizing weighted-sum objective
- Two (or more) objectives: want $J_1 = \|Ax - y\|^2$ small and also $J_2 = \|Fx - g\|^2$ small
- Consider minimizing a weighted-sum objective
\[ \|Ax - y\|^2 + \mu \|Fx - g\|^2 = \left\| \begin{bmatrix} A \\ \sqrt{\mu}\, F \end{bmatrix} x - \begin{bmatrix} y \\ \sqrt{\mu}\, g \end{bmatrix} \right\|^2 = \|\tilde{A} x - \tilde{y}\|^2 \]
- Thus, the least squares solution is
\[ x = (\tilde{A}^\top \tilde{A})^{-1} \tilde{A}^\top \tilde{y} = (A^\top A + \mu F^\top F)^{-1} (A^\top y + \mu F^\top g) \]
- Widely used in function approximation, regression, optimization, image processing, computer vision, control, machine learning, graph theory, etc.
16 / 23
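A minimal MATLAB sketch of the weighted-sum solution (the secondary objective $F$, $g$ and the weight $\mu$ are illustrative assumptions): the stacked formulation and the closed form agree:

  m = 50; n = 5; p = 3; mu = 2;
  A = randn(m,n); y = randn(m,1);
  F = randn(p,n); g = randn(p,1);
  x1 = [A; sqrt(mu)*F] \ [y; sqrt(mu)*g];          % stacked least squares
  x2 = (A'*A + mu*(F'*F)) \ (A'*y + mu*(F'*g));    % closed form
  norm(x1 - x2)                                    % identical solutions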
17 Least squares data fitting
- Linear regression: model one scalar $y$ in terms of a linear combination of $t_1, \ldots, t_n$:
\[ y = \alpha_0 + \alpha_1 t_1 + \cdots + \alpha_n t_n \]
where the $\alpha_j$ are unknown parameters or coefficients
- For a set of $m$ data points $\{(t_i, y_i)\}$, $t_i \in \mathbb{R}^n$, we want to minimize
\[ \sum_{i=1}^m \Big( y_i - \alpha_0 - \sum_{j=1}^n t_{ij} \alpha_j \Big)^2 \]
17 / 23
18 Least squares data fitting
- For a set of training data $\{(t_i, y_i)\}$, we form $y$ and $A$
- In matrix form, let $A$ be the $m \times (n+1)$ matrix with each row an input vector prepended with 1, and $x \in \mathbb{R}^{n+1}$:
\[ y = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_m \end{bmatrix}, \qquad A = \begin{bmatrix} 1 & t_{11} & t_{12} & \cdots & t_{1n} \\ 1 & t_{21} & t_{22} & \cdots & t_{2n} \\ \vdots & & & & \vdots \\ 1 & t_{m1} & t_{m2} & \cdots & t_{mn} \end{bmatrix}, \qquad x = \begin{bmatrix} \alpha_0 \\ \alpha_1 \\ \vdots \\ \alpha_n \end{bmatrix} \]
- We obtain the coefficients $\alpha_j$ from $x$, where
\[ x = A^\dagger y = (A^\top A)^{-1} A^\top y \qquad \text{and} \qquad y = \alpha_0 + \alpha_1 t_1 + \cdots + \alpha_n t_n \]
18 / 23
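A minimal MATLAB sketch of this setup (synthetic data, not the slide's example): fit $y = \alpha_0 + \alpha_1 t_1 + \alpha_2 t_2$ by prepending a column of ones to the inputs:

  m = 100;
  T = randn(m,2);                          % each row is [t1 t2]
  y = 3 + T*[1.5; -0.5] + 0.01*randn(m,1); % synthetic measurements
  A = [ones(m,1) T];                       % m-by-(n+1) design matrix
  x = A \ y                                % [alpha0; alpha1; alpha2] ~ [3; 1.5; -0.5]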
19 Least squares data fitting (cont'd)
- Estimate the relationship of weight loss ($y$) to storage time ($t_1$) and storage temperature ($t_2$) with $y = \alpha_0 + \alpha_1 t_1 + \alpha_2 t_2$
- [Table of time, temperature and weight-loss measurements, the resulting $A$ and $y$, and the fitted coefficients; numeric values not preserved in this transcription]
- The least squares solution $x = [\alpha_0\ \alpha_1\ \alpha_2]^\top$ is found in MATLAB with x = A\y, giving the fitted model $y = \alpha_0 + \alpha_1 t_1 + \alpha_2 t_2$
19 / 23
20 Least squares polynomial fitting
- Fit a polynomial of degree $n - 1$, $n \leq m$, to data $(t_i, y_i)$:
\[ y = p(t) = \alpha_0 + \alpha_1 t + \alpha_2 t^2 + \cdots + \alpha_{n-1} t^{n-1} \]
- The basis functions are $f_j(t) = t^{j-1}$, $j = 1, \ldots, n$ (a geometric progression)
- Straight line: $p(t) = \alpha_0 + \alpha_1 t$
- Quadratic: $p(t) = \alpha_0 + \alpha_1 t + \alpha_2 t^2$
- Cubic, quartic, and higher-order polynomials
20 / 23
21 Least squares polynomial fitting
- The matrix $A$ has the form $A_{ij} = t_i^{j-1}$ (called a Vandermonde matrix):
\[ y = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_m \end{bmatrix}, \qquad A = \begin{bmatrix} 1 & t_1 & t_1^2 & \cdots & t_1^{n-1} \\ 1 & t_2 & t_2^2 & \cdots & t_2^{n-1} \\ \vdots & & & & \vdots \\ 1 & t_m & t_m^2 & \cdots & t_m^{n-1} \end{bmatrix}, \qquad x = \begin{bmatrix} \alpha_0 \\ \alpha_1 \\ \vdots \\ \alpha_{n-1} \end{bmatrix} \]
- See also kernel regression and splines
21 / 23
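A minimal MATLAB sketch of polynomial fitting with an explicit Vandermonde matrix (synthetic data): a degree-2 fit, compared with polyfit, which returns the same coefficients ordered from highest degree down:

  t = (0:0.5:5)'; m = numel(t);
  y = 1 - 2*t + 0.5*t.^2 + 0.05*randn(m,1);
  A = [ones(m,1) t t.^2];                % columns 1, t, t^2 (Vandermonde)
  alpha = A \ y;                         % [alpha0; alpha1; alpha2]
  p = polyfit(t, y, 2);                  % [alpha2 alpha1 alpha0]
  norm(alpha - flipud(p(:)))             % same fit, reversed ordering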
22 Least squares polynomial fitting (cont'd)
- Estimate the relationship between the position and height of a missile
- [Table of position/height measurements, the matrices $A$ and $y$, and the fitted quadratic $f(t) = \alpha_0 + \alpha_1 t + \alpha_2 t^2$; numeric values not preserved in this transcription]
22 / 23
23 Applications
- Thin plate spline: model/morph non-rigid motion
23 / 23