
Stanford University, Dept of Management Science and Engineering
MS&E 318 (CME 338) Large-Scale Numerical Optimization
Instructor: Michael Saunders                              Spring 2010
Project                                                   Due Wednesday June 9

Generalized Least Squares

Suppose A in R^{m x n} and b in R^m. In [1], Chris Paige gives a numerically
stable algorithm for solving the generalized least squares (GLS) problem, in
which the linear model b = Ax + w has an unknown noise vector w with mean zero
and covariance W. If W is positive definite, an estimate of x may be obtained
by solving the optimization problem

    min_x  (Ax - b)^T W^{-1} (Ax - b).                                    (1)

Clearly we should expect numerical difficulty if W is ill-conditioned. To
derive a well-posed version of the problem, consider the linear system

    [ W    A ] [ y ]   [ b ]
    [ A^T  0 ] [ x ] = [ 0 ].                                             (2)

Furthermore, suppose we obtain the Cholesky factorization W = LL^T, where L is
lower triangular and nonsingular (since W is positive definite), and consider
the optimization problem

    min_{x,v}  v^T v   subject to   Ax + Lv = b,                          (3)

where v = L^T y. Its solution is given by

    [ 0   0   A^T ] [ x ]   [ 0 ]
    [ 0   I  -L^T ] [ v ] = [ 0 ].                                        (4)
    [ A   L   0   ] [ y ]   [ b ]

This can be rearranged to look more like (2):

    [ 0    L   A ] [ y ]   [ b ]
    [ L^T -I   0 ] [ v ] = [ 0 ].                                         (5)
    [ A^T  0   0 ] [ x ]   [ 0 ]

The GLS problem (1) is solved by all of (2)-(5). One of our aims is to find
out whether system (2) is adequate, or whether system (5) has an accuracy
advantage.

Paige's algorithm [1] applies to dense data and provides an efficient and
stable method for solving (5). Two sequences of plane rotations Q and P are
applied from the left and right respectively to reduce ( b  A ) to
lower-triangular form while maintaining the triangularity of L. They eliminate
one superdiagonal at a time, starting at the top-right of A. (The rotations
are interleaved. The first Q rotation operates on the top two rows to reduce
A_{1n} to zero. The next two Q rotations operate on rows 2 and 3 to reduce
A_{2n} to zero, and then on rows 1 and 2 to eliminate A_{1,n-1}, working
backwards up the superdiagonal. Similarly for all superdiagonals. Each Q
rotation generates one superdiagonal element in L, and the next P rotation
promptly restores L to lower-triangular form.)
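To make the interleaving concrete, here is a minimal MATLAB sketch of the
first Q/P pair on small random data (an illustration with assumed dimensions,
not one of the class files and not Paige's full algorithm): the Q rotation
combines rows 1 and 2 to annihilate A(1,n) and creates a superdiagonal entry
L(1,2); the P rotation then combines columns 1 and 2 of L to remove it. In
the full algorithm P also transforms v correspondingly, and the process
continues up each superdiagonal.

    % One Q/P rotation pair on hypothetical data (illustration only).
    m = 4;  n = 3;
    A = randn(m,n);   b = randn(m,1);
    L = tril(randn(m,m)) + m*eye(m);    % lower triangular, safely nonsingular

    % Q rotation on rows 1:2, chosen so that  row1 <- c*row1 - s*row2
    % annihilates A(1,n).  It acts on b, A and L alike.
    h = hypot( A(1,n), A(2,n) );
    c = A(2,n)/h;   s = A(1,n)/h;
    Q = [c -s; s c];
    b(1:2)   = Q*b(1:2);
    A(1:2,:) = Q*A(1:2,:);              % now A(1,n) = 0
    L(1:2,:) = Q*L(1:2,:);              % fill-in appears at L(1,2)

    % P rotation on columns 1:2 of L, chosen so that
    % col2 <- -s2*col1 + c2*col2  annihilates the fill-in L(1,2).
    h2 = hypot( L(1,1), L(1,2) );
    c2 = L(1,1)/h2;   s2 = L(1,2)/h2;
    L(:,1:2) = L(:,1:2)*[c2 -s2; s2 c2];

    disp([A(1,n)  L(1,2)])              % both are zero (to roundoff)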

Files

The following files are in the class directory msande318/homework/projectgls:

    WLSprob.m   WLS.m   GLS.m   compare.m   paige-sinum-1979-gls.pdf

[A,W,b,x,y] = WLSprob( m,n,condA,condW ) generates sparse data A and W of
specified dimensions and condition, along with rhs vector b, such that
problem (2) is solved by (x, y).

[x,y] = WLS( A,W,b ) solves (2) directly and gives an indication of accuracy
by printing the norms of the residuals t = b - Ax - Wy and s = A^T y.

[x,y] = GLS( A,W,b ) solves (5) directly and again prints t and s.

compare( m,n,condA,condW ) runs several solvers on the same test problem and
prints the errors in x and y.

Questions

1. If W is not too ill-conditioned, problem (1) can be solved by ordinary
   least squares (with the help of W = LL^T).  Write function
   [x,y] = OLS( A,W,b ) for this purpose (a possible sketch follows these
   questions).  Test it on a sequence of increasingly ill-conditioned
   examples.  You may do this by calling

       compare( 50,30,1e+1,1e+1 )
       compare( 50,30,1e+2,1e+2 )
       compare( 50,30,1e+3,1e+3 )
       ...

   until you think it makes no sense to have more ill-conditioned data.
   Summarize (in words) the results that you see.

2. Systems (2) and (5) may be well defined even if W is singular.  A special
   case of singularity would be when the last q rows and columns of W are
   zero:

       W = [ W1  0 ],    A = [ A1 ],    b = [ b1 ].                      (6)
           [ 0   0 ]         [ A2 ]         [ b2 ]

   What kind of problem does the GLS problem (1) or (2) become?

3. Luckily we can persuade Matlab's sparse Cholesky routine to proceed
   successfully on a singular W of this kind.  Function GLS already allows
   for this.  Write new functions WLSprobq and compareq that generate
   problems with q zero rows and columns in W as in (6) and compare methods
   WLS and GLS.
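For Question 1, a minimal sketch (an illustration only, not the official
solution) might whiten the data with the Cholesky factor and let backslash do
the rest; it assumes W is positive definite so that chol succeeds:

    function [x,y] = OLS( A,W,b )

    % [x,y] = OLS( A,W,b );
    % Sketch: with W = L*L', problem (1) becomes the ordinary
    % least-squares problem  min || L\(b - A*x) ||.

      L  = chol(W,'lower');
      At = L\A;                 % whitened data  inv(L)*A
      bt = L\b;                 % whitened rhs   inv(L)*b
      x  = At\bt;               % ordinary least squares via backslash
      y  = L'\(L\(b - A*x));    % y solves W*y = b - A*x, as in (2)

A step of iterative refinement could be added, as in WLS.m and GLS.m below.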

WLS.m

function [x,y] = WLS( A,W,b )

% [x,y] = WLS( A,W,b );
% solves the weighted least-squares problem whose solution x
% is defined by
%    [ W   A ](y)   (b)
%    [ A'    ](x) = (0),
% where W is symmetric and positive definite (or semidefinite?).
%
% 25 May 2010: First version, for MS&E318 (CME338) class project.
%              Michael Saunders, MS&E, Stanford.

% Just use backslash and a step of refinement.

  [m,n] = size(A);
  m1    = m+1;
  mn    = m+n;
  K2    = [ W   A
            A'  sparse(n,n) ];
  c1    = [b; zeros(n,1)];
  z1    = K2\c1;
  y1    = z1(1:m);
  x1    = z1(m1:mn);
  r1    = b - A*x1;
  t1    = r1 - W*y1;
  s1    = - A'*y1;

  fprintf('\nm =%5i   n =%5i\n          b-Ax-Wy      A''y\n', m,n)
  fprintf(' WLS    %8.1e %8.1e\n', norm(t1,inf), norm(s1,inf))

  % Refinement.
  c2 = [t1; s1];
  z2 = K2\c2;
  dy = z2(1:m);
  dx = z2(m1:mn);
  x2 = x1 + dx;
  y2 = y1 + dy;
  r2 = b - A*x2;
  t2 = r2 - W*y2;
  s2 = - A'*y2;

  fprintf(' Refine %8.1e %8.1e\n', norm(t2,inf), norm(s2,inf))

  x  = x2;
  y  = y2;
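For orientation, a typical call sequence might look like this (a sketch
relying only on the calling signatures documented above; the dimensions and
condition numbers are arbitrary):

    [A,W,b,x,y] = WLSprob( 50,30,1e+2,1e+4 );   % test problem with known (x,y)
    [xw,yw] = WLS( A,W,b );                     % prints the residual norms
    fprintf(' WLS errors %8.1e %8.1e\n', norm(x-xw,inf), norm(y-yw,inf))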

GLS.m

function [x,y] = GLS( A,W,b )

% [x,y] = GLS( A,W,b );
% solves the weighted least-squares problem defined by
%    [ W   A ](y)   (b)
%    [ A'    ](x) = (0),
% where W is symmetric and positive definite (or semidefinite?).
% It follows the Generalized Least Squares approach of Paige (1979)
% in working with W = L*L' and solving the linear system
%    [     L   A ](y)   (b)
%    [ L'  -I    ](v) = (0).
%    [ A'        ](x)   (0)
%
% 25 May 2010: First version, for MS&E318 (CME338) class project.
%              Michael Saunders, MS&E, Stanford.

% Use backslash on the bigger system and a step of refinement.
%
% Main reference:
% C. C. Paige (1979).
% Fast numerically stable computations for
% generalized linear least squares problems,
% SIAM J. Numer. Anal. 16(1).

  [m,n] = size(A);
  m1    = m+1;
  mm    = m+m;
  mm1   = mm+1;
  mmn   = m+m+n;

  [L,p] = chol(W,'lower');
  if p==0
     rankL = m;                      % L is m x m and nonsingular.
  else
     rankL = p-1;                    % L is m x rankL.
     Lsave = L;
     L     = sparse(m,m);
     L(1:rankL,1:rankL) = Lsave;     % L is m x m again (but singular).
  end

  K3 = [ sparse(m,m)   L            A
         L'           -speye(m)     sparse(m,n)
         A'            sparse(n,m)  sparse(n,n) ];
  c1 = [b; zeros(m,1); zeros(n,1)];
  z1 = K3\c1;
  y1 = z1(1:m);
  v1 = z1(m1:mm);
  x1 = z1(mm1:mmn);
  r1 = b - A*x1;
  t1 = r1 - W*y1;
  s1 = - A'*y1;

  fprintf('\nm =%5i   n =%5i\n          b-Ax-Wy      A''y\n', m,n)
  fprintf(' GLS    %8.1e %8.1e\n', norm(t1,inf), norm(s1,inf))

  % Refinement.
  c2 = [r1 - L*v1; v1 - L'*y1; s1];
  z2 = K3\c2;
  dy = z2(1:m);
  dv = z2(m1:mm);                    % Not used
  dx = z2(mm1:mmn);
  x2 = x1 + dx;
  y2 = y1 + dy;
  r2 = b - A*x2;
  t2 = r2 - W*y2;
  s2 = - A'*y2;

  fprintf(' Refine %8.1e %8.1e\n', norm(t2,inf), norm(s2,inf))

  x  = x2;
  y  = y2;

compare.m

function compare( m,n,condA,condW )

% compare( m,n,condA,condW )
% generates m x n data A and m x m positive definite W
% and compares 3 methods for solving the Generalized Least Squares problem
%    [ W   A ](y)   (b)
%    [ A'    ](x) = (0),
% where W is symmetric and positive definite.
%
% 25 May 2010: First version, for MS&E318 (CME338) class project.
%              Michael Saunders, MS&E, Stanford.

  [A,W,b,x,y] = WLSprob( m,n,condA,condW );

  [xo,yo] = OLS( A,W,b );    xerro = norm(x-xo,inf);   yerro = norm(y-yo,inf);
  [xw,yw] = WLS( A,W,b );    xerrw = norm(x-xw,inf);   yerrw = norm(y-yw,inf);
  [xg,yg] = GLS( A,W,b );    xerrg = norm(x-xg,inf);   yerrg = norm(y-yg,inf);

  fprintf('\n          xerr     yerr\n')
  fprintf(' OLS    %8.1e %8.1e\n', xerro,yerro)
  fprintf(' WLS    %8.1e %8.1e\n', xerrw,yerrw)
  fprintf(' GLS    %8.1e %8.1e\n', xerrg,yerrg)
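Apropos of Question 3, a quick check (made-up data, not one of the class
files) illustrates the chol behavior that GLS relies on when W has the
form (6), with a positive definite leading block and q trailing zero rows
and columns:

    mq = 5;  q = 2;
    W1 = sparse(gallery('lehmer',mq));   % any symmetric positive definite block
    W  = blkdiag( W1, sparse(q,q) );     % last q rows and columns of W are zero
    [L,p] = chol(W,'lower');             % flag p > 0 since W is singular
    fprintf(' p =%3i   rankL =%3i\n', p, p-1)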

Application

In [2], Paige applied his generalized least-squares approach to the
block-structured problems arising in Kalman filtering and information
filtering. The aim is to estimate the state vector in dynamic linear models
when new measurements are gathered at each time step. Just as the GLS
approach allows W above to be singular, the resulting filtering
implementation allows various covariance or inverse covariance matrices to
be singular or ill-conditioned. It is the only approach that does so.

References

[1] C. C. Paige. Fast numerically stable computations for generalized linear
    least squares problems. SIAM J. Numer. Anal., 16(1), 1979.

[2] C. C. Paige. Covariance matrix representation in linear filtering.
    Contemporary Mathematics, 47, 1985.
