Ch.10 Numerical Applications

10-1 Least-squares
Start with Self-test 10-1/459.
Linear equation: Y = aX + b
Error function: E = Σ D² = Σ (Y - (aX+b))²
Regression formulas:
Slope a = (N·ΣXY - (ΣX)(ΣY)) / (N·ΣX² - (ΣX)²)
Intercept b = (ΣY - a·ΣX) / N
Extra-credit: How many operations in total are required to calculate a and b according to the formulas above? (Nothing is computed twice.)
A: a takes 6N+3, b takes 3.
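The closed-form formulas above can be checked with a short sketch (function and variable names are ours, not from the text):

```python
# Least-squares fit of Y = a*X + b using the closed-form sums above.
def least_squares(xs, ys):
    n = len(xs)
    sx = sum(xs)                                  # ΣX
    sy = sum(ys)                                  # ΣY
    sxy = sum(x * y for x, y in zip(xs, ys))      # ΣXY
    sxx = sum(x * x for x in xs)                  # ΣX²
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx) # slope
    b = (sy - a * sx) / n                         # intercept
    return a, b

# Points on the exact line y = 2x + 1 are recovered exactly.
a, b = least_squares([0, 1, 2, 3], [1, 3, 5, 7])
print(a, b)  # 2.0 1.0
```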

Problems with linear models
Anscombe's quartet: these 4 data sets have the same mean, variance for x, variance for y, median, and correlation coefficient! [Wikipedia]
Beyond linear:
Polynomial models
o Solve a linear system!
Logarithmic (power-law) models Y = b·X^a
o Take ln, then use linear regression!
MS Excel functions for linear and logarithmic regression
Spreadsheet available on webpage
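The log-transform trick for the model Y = b·X^a can be sketched as follows (our helper, not one of the MS Excel functions; taking logs turns the model into the straight line ln Y = ln b + a·ln X):

```python
import math

# Fit Y = b * X**a by taking logs and running ordinary linear regression
# of ln Y against ln X; the slope is the exponent a, and exp(intercept) = b.
def power_fit(xs, ys):
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(lx)
    sx, sy = sum(lx), sum(ly)
    sxy = sum(u * v for u, v in zip(lx, ly))
    sxx = sum(u * u for u in lx)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # exponent a
    b = math.exp((sy - a * sx) / n)                # coefficient b
    return a, b

# Data generated from Y = 3 * X**2 is recovered (up to rounding).
a, b = power_fit([1, 2, 3, 4], [3, 12, 27, 48])
print(round(a, 6), round(b, 6))  # 2.0 3.0
```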

10-3 Matrix Operations
Scalar multiplication
Matrix addition, subtraction
Dot product (vector multiplication)
Matrix multiplication
Self-test 10-2 / 472
------------------------------------------------------------------------------
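The four operations can be spelled out directly (a plain-Python sketch with lists of lists; no library assumed):

```python
# Scalar multiplication: multiply every entry by the constant c.
def scalar_mult(c, A):
    return [[c * x for x in row] for row in A]

# Matrix addition: add entry by entry (matrices must have the same shape).
def mat_add(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

# Dot product of two vectors: sum of pairwise products.
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

# Matrix multiplication: entry (i,j) is the dot product of
# row i of A with column j of B.
def mat_mult(A, B):
    cols = list(zip(*B))    # zip(*B) transposes B so columns are easy to walk
    return [[dot(row, col) for col in cols] for row in A]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(mat_mult(A, B))  # [[19, 22], [43, 50]]
```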

10-4 Linear Systems and Gaussian elimination
Linear equation: the only operations allowed are multiplication by constants and addition. (Subtraction is also allowed: x - y = x + (-1)y.)
A system of equations is called linear if all its equations are linear.
Examples of linear and non-linear systems:
Matrix form: Ax = b (uppercase for matrices, lowercase for vectors)
In this class we're considering only the case where the # of unknowns equals the # of equations. What does this imply about A, x and b?

Even when the # of unknowns equals the # of equations, there is no guarantee of a unique solution. 3 cases:
Unique solution
No solution (overdetermined)
Infinite # of solutions (underdetermined)
Illustrations for 2x2 systems
Illustrations for 3x3 systems: see text
Conclusion: A unique solution exists ⟺ no row of A is a linear combination of the other rows ⟺ no column of A is a linear combination of the other columns (square A!)
Important idea: bringing a matrix to (upper) triangular form!

Handout: Bring this matrix to upper triangular form: ----------------------------------------------------------------------------------

Continuing 10-4 Linear Systems and Gaussian elimination
This program shows how to eliminate coefficients from a matrix. Only the first two of the elementary row operations are performed from the Gaussian elimination algorithm.

Since no pivoting is performed, we can see very early the problem that develops: the original matrix elements were in the range 1..11, but note the increase in the magnitudes, e.g. the element -100.36! Now we let the Gaussian elimination run through the entire matrix:
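The element growth described above is easy to reproduce with a no-pivoting forward-elimination sketch (our own tiny matrix, not the one in the screenshots; the pivot 2**-10 is chosen so the arithmetic stays exact):

```python
# Forward elimination with NO pivoting: each step subtracts a multiple of
# the pivot row from the rows below it. A tiny pivot makes the multiplier,
# and therefore the updated entries, huge.
def eliminate(A):
    n = len(A)
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]      # multiplier; large when pivot is small
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
    return A

A = [[2.0**-10, 2.0], [3.0, 4.0]]      # pivot 2**-10 gives multiplier 3072
eliminate(A)
print(A[1][1])   # 4 - 3072*2 = -6140.0: magnitudes blow up
```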

In order for the solution to work, the vector of right-hand sides b must undergo the same row operations as the matrix A.
Handout:
Write the augmented matrix A =
Bring the augmented matrix A to upper triangular form:
Calculate z:
Calculate y through back-substitution in the second equation:
Calculate x through back-substitution in the first equation:

Changes in the code to implement the augmented matrix:
Complexity for triangularizing the augmented N×(N+1) matrix? O(N³)
Complexity for back-substitution? O(N²)
Conclusion: complexity of the Gaussian algorithm overall? O(N³)
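Putting the two phases together on the augmented N×(N+1) matrix gives a minimal solver sketch (illustrative data, not the handout system):

```python
# Solve Ax = b via the augmented matrix: O(N^3) triangularization
# followed by O(N^2) back-substitution.
def gauss_solve(A, b):
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]    # augmented N x (N+1)
    for k in range(n - 1):                          # triangularize
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]
            for j in range(k, n + 1):               # includes the RHS column
                M[i][j] -= m * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):                  # back-substitution
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

print(gauss_solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0]))  # [0.8, 1.4]
```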

Continuing 10-4 Linear Systems and Gaussian elimination
Repeated row operations can lead to numerical instability (see screenshots above!). Which part of the Gaussian algorithm is responsible?
Idea: try to use the largest pivot available (in absolute value), in order to make the multiplier as small as possible (in absolute value); this is known as pivoting.
o Partial pivoting
o Complete pivoting

New code for partial pivoting: since each column has one and only one pivot, I'm renaming the column index PIVOT. And finally the payoff:
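A minimal sketch of partial pivoting (the index name PIVOT echoes the renaming above; everything else is our illustration, not the course code):

```python
# Gaussian elimination with partial pivoting: before eliminating column
# PIVOT, swap up the row whose entry in that column is largest in magnitude,
# so every multiplier has magnitude <= 1.
def gauss_partial_pivot(A, b):
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for PIVOT in range(n - 1):
        best = max(range(PIVOT, n), key=lambda r: abs(M[r][PIVOT]))
        M[PIVOT], M[best] = M[best], M[PIVOT]        # row swap
        for i in range(PIVOT + 1, n):
            m = M[i][PIVOT] / M[PIVOT][PIVOT]        # |m| <= 1 now
            for j in range(PIVOT, n + 1):
                M[i][j] -= m * M[PIVOT][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):                   # back-substitution
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

# A tiny-pivot system, of the kind that misbehaves without pivoting,
# is now handled safely.
print(gauss_partial_pivot([[0.001, 2.0], [3.0, 4.0]], [2.001, 7.0]))
# close to [1.0, 1.0]
```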

Handout:
Write the augmented matrix A =
Bring the augmented matrix A to upper triangular form using partial pivoting:
Compare the triangular matrix to the one from last time.
Calculate z:
Calculate y through back-substitution in the second equation:
Calculate x through back-substitution in the first equation:
(Is the final solution the same as the one from last time?)

Individual work for next time: solve in your notebook the Hand Example on pp. 479-480.
How about total pivoting? Swapping columns can lead to an even larger pivot, but the price to pay is that the unknowns get swapped, too. We have to keep track of these swaps, and unswap the solution vector at the end.
Empirical conclusion: partial pivoting is good enough in most practical problems. If the matrix A is particularly ill-conditioned, we can work a little harder and do complete pivoting.
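The swap bookkeeping for total (complete) pivoting can be sketched like this (our illustration, with no attention paid to efficiency):

```python
# Complete pivoting sketch: search the whole remaining submatrix for the
# largest-magnitude pivot. Column swaps permute the unknowns, so we record
# them in `perm` and unswap the solution vector at the end.
def gauss_complete_pivot(A, b):
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    perm = list(range(n))                  # tracks the order of the unknowns
    for k in range(n - 1):
        # largest |entry| in the remaining submatrix M[k:n][k:n]
        r, c = max(((i, j) for i in range(k, n) for j in range(k, n)),
                   key=lambda rc: abs(M[rc[0]][rc[1]]))
        M[k], M[r] = M[r], M[k]            # row swap
        for row in M:                      # column swap (swaps unknowns!)
            row[k], row[c] = row[c], row[k]
        perm[k], perm[c] = perm[c], perm[k]
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]
    z = [0.0] * n
    for i in range(n - 1, -1, -1):         # back-substitution
        s = sum(M[i][j] * z[j] for j in range(i + 1, n))
        z[i] = (M[i][n] - s) / M[i][i]
    x = [0.0] * n
    for pos, var in enumerate(perm):       # unswap the solution vector
        x[var] = z[pos]
    return x

print(gauss_complete_pivot([[1.0, 3.0], [2.0, 1.0]], [5.0, 4.0]))
# close to [1.4, 1.2]
```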

Gauss or Gauss-Jordan? (Not in text) [Source: http://math.comsci.us/matrix/jordan.shtml ]
Problems with Gauss-Jordan:
Additional numerical instabilities are introduced by the additional divisions by pivots!
Clearing the upper part of the matrix practically triples the # of operations: approx. N³/3 for Gauss vs. N³ for Gauss-Jordan. (These are not Big-Oh bounds, since we fill in the exact multipliers, 1/3 and 1, respectively!)
Suggested additional reading for Gauss and Gauss-Jordan: Sections 2.1 and 2.2 of Numerical Recipes (link on webpage).
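For contrast, a minimal Gauss-Jordan sketch (ours, not from the cited page), showing where the extra divisions and the upper-clearing work come from:

```python
# Gauss-Jordan: divide each pivot row through by its pivot, then eliminate
# BOTH below and above the pivot, leaving the identity on the left.
# This is roughly 3x the work of Gauss elimination with back-substitution.
def gauss_jordan(A, b):
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = M[k][k]
        for j in range(k, n + 1):
            M[k][j] /= p                  # the extra division by the pivot
        for i in range(n):
            if i != k:
                m = M[i][k]
                for j in range(k, n + 1):
                    M[i][j] -= m * M[k][j]   # clears above AND below
    return [M[i][n] for i in range(n)]    # solution sits in the RHS column

print(gauss_jordan([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0]))  # [0.8, 1.4]
```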

Weakness shared by the Gauss and Gauss-Jordan algorithms: how to deal with multiple right-hand-side vectors b? And now the clincher: what if not all the RHSs are available at the same time?

More advanced linear solving with the LU decomposition [not in text]
Explaining the concept of decomposition: the matrix is stored as an expression involving smaller/simpler parts of itself, e.g.
A = LU (LU decomp.)
A = N⁻¹JN (Jordan decomposition)
A = UΣVᵀ (singular-value decomp.)
A = abᵀ (decomposition for rank-1 matrices)
A = AU⁻¹VᵀA (biconjugate decomp.)
Etc., etc.
The important thing to understand about any decomposition is that, once the decomposed parts have been calculated, they can be stored and used at any time to represent the original matrix A. The decomposition need not be performed multiple times!
Advantage: the RHSs do not all need to be present up-front, as for Gaussian elimination.
Disadvantage: how do we solve the system Ax = b if A = LU?
o LUx = b
o Call Ux = y. Then Ly = b: solve for y with forward substitution, since L is Lower triangular.
o Once we know y, solve Ux = y as in Gaussian elimination, with back-substitution.
Conclusion: forward and back-substitution, but they are both O(N²)!
Complexity: next time we'll show that the matrices L and U can be found in O(N³), actually N³/3, just like Gaussian elimination! Then it's just O(N²) for solving Ax = b for each RHS.
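The two triangular solves can be written down directly (a sketch with a small illustrative L, U and b of our own):

```python
# Solve Ax = b with A = LU: first Ly = b (forward substitution, top down),
# then Ux = y (back-substitution, bottom up). Each solve is O(N^2).
def forward_sub(L, b):
    n = len(b)
    y = [0.0] * n
    for i in range(n):                    # rows top to bottom
        y[i] = (b[i] - sum(L[i][j] * y[j] for j in range(i))) / L[i][i]
    return y

def back_sub(U, y):
    n = len(y)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):        # rows bottom to top
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

L = [[2.0, 0.0], [1.0, 3.0]]
U = [[1.0, 4.0], [0.0, 1.0]]
b = [2.0, 7.0]
y = forward_sub(L, b)     # y = [1.0, 2.0]
x = back_sub(U, y)        # x = [-7.0, 2.0]
print(x)
```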

[Handout] Consider the following LU decomposition:
L = [ 1 0 0 ]      U = [ 2 3 4 ]
    [ 3 1 0 ]          [ 0 1 1 ]
    [ 2 4 1 ]          [ 0 0 5 ]
Show with pencil and paper how to solve the system Ax = b by forward substitution, followed by back-substitution, where A = LU and b = [1 0 2]ᵀ.

A note on L's diagonal: L and U together have N² + N non-zero elements. Because the original A has only N², we can choose the remaining N freely. Most implementations of LU decomp. choose to make all diagonal elements of L ones. In this case, we don't even need two matrices: we can store both L and U in place, in the old matrix A!
Recommended reading:
Section 2.3 of Numerical Recipes (link on webpage)
Wikipedia article (link on webpage)
------------------------------------------------------------------------------
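The in-place storage trick can be sketched with a Doolittle-style factorization (our sketch; no pivoting, so nonzero pivots are assumed, and L's unit diagonal is implied, not stored):

```python
# In-place Doolittle LU: after the loop, A holds U on and above the
# diagonal, and the multipliers of L (unit diagonal implied) strictly
# below it. No second matrix is needed.
def lu_in_place(A):
    n = len(A)
    for k in range(n - 1):
        for i in range(k + 1, n):
            A[i][k] /= A[k][k]                 # stored multiplier = L[i][k]
            for j in range(k + 1, n):
                A[i][j] -= A[i][k] * A[k][j]   # update the U part
    return A

A = [[2.0, 3.0], [4.0, 7.0]]
lu_in_place(A)
print(A)   # [[2.0, 3.0], [2.0, 1.0]]: L21 = 2, U = [[2, 3], [0, 1]]
```

Check by hand: with L = [[1, 0], [2, 1]] and U = [[2, 3], [0, 1]], the product LU is indeed the original [[2, 3], [4, 7]].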