CE 206: Engineering Computation Sessional. System of Linear Equations



Gauss Elimination

Forward elimination: Starting with the first row, add or subtract multiples of that row to eliminate the first coefficient from the second row and beyond. Continue this process with the second row to remove the second coefficient from the third row and beyond. Stop when an upper triangular matrix remains.

Back substitution: Starting with the last row, solve for the unknown, then substitute that value into the next-highest row. Because of the upper-triangular nature of the matrix, each row contains only one more unknown.

Gauss Elimination code

function x = GaussNaive(A,b)
% input:  A = coefficient matrix
%         b = right hand side vector
% output: x = solution vector
[m,n] = size(A);
if m~=n, error('Matrix A must be square'); end
nb = n+1;
Aug = [A b];
% forward elimination
for k = 1:n-1
    for i = k+1:n
        factor = Aug(i,k)/Aug(k,k);
        Aug(i,k:nb) = Aug(i,k:nb) - factor*Aug(k,k:nb);
    end
end
% back substitution
x = zeros(n,1);
x(n) = Aug(n,nb)/Aug(n,n);
for i = n-1:-1:1
    x(i) = (Aug(i,nb) - Aug(i,i+1:n)*x(i+1:n))/Aug(i,i);
end
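A quick check is to run the function on a small system and compare against MATLAB's backslash operator (the numbers below are only an illustration, not part of the slides):

A = [3 -0.1 -0.2; 0.1 7 -0.3; 0.3 -0.2 10];
b = [7.85; -19.3; 71.4];
x = GaussNaive(A,b)     % should agree with A\b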

Partial Pivoting

Problem with naïve Gauss elimination: a coefficient along the diagonal may be 0, or close to 0, so the elimination step divides by (nearly) zero.

Solution: determine the coefficient with the largest absolute value in the column below the pivot element, then switch the rows so that the largest element becomes the pivot element.

Class Exercise: Implement partial pivoting in your GaussNaive.m function file. Also modify the code so that you can display the matrix after each operation. (One possible sketch of the row swap is given below.)
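One possible way to implement the row swap (a sketch only, using the variable names of the GaussNaive code above; it goes inside the outer elimination loop, before the factor is computed):

% inside "for k = 1:n-1", before the inner loop:
[big, idx] = max(abs(Aug(k:n,k)));     % largest magnitude on or below the pivot
ipr = idx + k - 1;                     % convert to a row index of Aug
if ipr ~= k
    Aug([k,ipr],:) = Aug([ipr,k],:);   % swap so the largest element is the pivot
end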

Tridiagonal systems

A tridiagonal system is a banded system with a bandwidth of 3:

$$\begin{bmatrix} f_1 & g_1 & & & \\ e_2 & f_2 & g_2 & & \\ & e_3 & f_3 & g_3 & \\ & & \ddots & \ddots & \ddots \\ & & & e_n & f_n \end{bmatrix} \begin{Bmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_n \end{Bmatrix} = \begin{Bmatrix} r_1 \\ r_2 \\ r_3 \\ \vdots \\ r_n \end{Bmatrix}$$

Tridiagonal systems can be solved using the same method as Gauss elimination, but with much less effort.

Tridiagonal solver

function x = Tridiag(e,f,g,r)
% input:
%   e = subdiagonal vector
%   f = diagonal vector
%   g = superdiagonal vector
%   r = right hand side vector
% output:
%   x = solution vector
n = length(f);
% forward elimination
for k = 2:n
    factor = e(k)/f(k-1);
    f(k) = f(k) - factor*g(k-1);
    r(k) = r(k) - factor*r(k-1);
end
% back substitution
x(n) = r(n)/f(n);
for k = n-1:-1:1
    x(k) = (r(k) - g(k)*x(k+1))/f(k);
end
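As a check, the solver can be compared against backslash on a small tridiagonal system (the values below are illustrative only):

e = [0 -1 -1 -1];            % subdiagonal (e(1) is unused)
f = [2.04 2.04 2.04 2.04];   % main diagonal
g = [-1 -1 -1 0];            % superdiagonal (g(end) is unused)
r = [40.8 0.8 0.8 200.8];    % right hand side
x = Tridiag(e,f,g,r)
% check: A = diag(f) + diag(e(2:end),-1) + diag(g(1:end-1),1); A\r'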

LU Factorization

The main advantage is that once [A] is decomposed, the same [L] and [U] can be reused for multiple {b} vectors.

Built-in function lu() and the \ operator

To solve [A]{x} = {b}, first decompose [A] to get [L][U]{x} = {b}:
[L, U] = lu(A)
Set up and solve [L]{d} = {b}, where {d} can be found using forward substitution:
d = L\b
Set up and solve [U]{x} = {d}, where {x} can be found using backward substitution:
x = U\d
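Putting the three steps together (a sketch; note that with two outputs MATLAB's lu returns a row-permuted lower-triangular factor, but the two-step solve still gives the same answer as A\b):

A = [3 -0.1 -0.2; 0.1 7 -0.3; 0.3 -0.2 10];   % illustrative matrix
b = [7.85; -19.3; 71.4];
[L, U] = lu(A);   % decompose once
d = L\b;          % forward substitution
x = U\d           % back substitution; same result as A\b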

Decomposition vs. Inverse

Inverting a matrix is computationally expensive: solving with a decomposition (or the \ operator) is cheaper and generally more accurate than forming inv(A) and multiplying by it.
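A small sketch comparing the two approaches on random test data (the gap grows with the system size):

n = 2000;                   % illustrative size
A = rand(n) + n*eye(n);     % random, diagonally dominant test matrix
b = rand(n,1);
tic; x1 = A\b;       t_backslash = toc
tic; x2 = inv(A)*b;  t_inverse = toc     % noticeably slower
norm(x1 - x2)                            % the two answers differ only slightly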

Exercise 1: Truss Analysis

Find out all the bar forces and the external reactions of the following truss. [truss figure: a loaded truss (load given in lb); writing the horizontal and vertical force balances at each node, in terms of the sines and cosines of the member angles, the bar forces, and the reaction components H and V, gives a system of simultaneous linear equations]

Exercise 1: Truss Analysis (continued)

In matrix form the node equations become [A]{x} = {b}: the coefficient matrix contains the direction cosines (entries such as ±0.866 and ±0.5), the unknown vector {x} holds the bar forces and the reaction components H and V, and {b} is the external forcing vector. [matrix equation shown on the slide]

Exercise 1: Truss Analysis (continued)

The external forces have no effect on the LU decomposition, so the decomposition need not be repeated for each new loading. Example: analyze the same truss with two horizontal wind forces (in lb) applied as shown on the slide. Just change the forcing vector and solve again with the same [L] and [U], as sketched below.
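A minimal sketch of reusing one decomposition for several load cases; here A, b1 and b2 are placeholders for the truss coefficient matrix and the two forcing vectors built from the slide's figures:

% A  = truss coefficient matrix (from the node equations)
% b1 = forcing vector for the original load
% b2 = forcing vector for the horizontal wind loads
[L, U] = lu(A);    % decompose once
x1 = U\(L\b1);     % bar forces and reactions for load case 1
x2 = U\(L\b2);     % load case 2: only the right-hand side changed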

Exercise 2: Body in Motion

Three blocks are connected by a weightless cord and rest on a 45° inclined plane, with the friction factors (.75 and .5) indicated in the figure. Develop the set of three simultaneous equations and solve for the acceleration and the tensions in the cable. [Use your free-body-diagram concept from analytic mechanics and apply Newton's second law.]

Exercise 3: A civil engineer involved in construction requires 48, 58 and 57 m³ of sand, fine gravel and coarse gravel, respectively, for a building project. There are three pits from which these materials can be obtained. The composition of these pits is:

         Sand (%)   Fine Gravel (%)   Coarse Gravel (%)
Pit 1       55             5
Pit 2        5            45
Pit 3        5            55

How many cubic meters must be hauled from each pit in order to meet the engineer's needs? (A generic setup for the resulting system is sketched below.)
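Each material gives one balance equation: the fraction of that material in each pit times the volume hauled from that pit, summed over the pits, equals the required volume. A generic sketch with made-up fractions and requirements standing in for the table's values:

% Rows: sand, fine gravel, coarse gravel; columns: Pit 1, Pit 2, Pit 3.
% The fractions and requirements below are placeholders, not the slide's data.
A = [0.55 0.25 0.25;     % sand fraction of each pit
     0.30 0.45 0.20;     % fine gravel fraction
     0.15 0.30 0.55];    % coarse gravel fraction
b = [4800; 5800; 5700];  % required volumes, m^3 (placeholder values)
x = A\b                  % m^3 to haul from each pit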

Exercise 4: Spring Problem

The following figure shows an arrangement of four springs in series being depressed with a force F (given in kg). Develop the force-balance equations at equilibrium and solve for the displacements x1 to x4 for the given spring constants k1 to k4 (N/m). [figure: springs k1 to k4 in series, with junction displacements x1 to x4; a sketch of the resulting system follows]
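At equilibrium each junction is in force balance, which produces a tridiagonal stiffness system. A sketch with hypothetical values for F and the spring constants (the slide's numbers did not transcribe cleanly, so these are placeholders):

% Placeholder data: k1..k4 in N/m, applied load expressed as a mass in kg
k = [100 50 80 200];
F = 250*9.81;                    % convert the applied mass to a force in N
% Force balance at each junction gives K*x = f (tridiagonal):
K = [k(1)+k(2)  -k(2)       0          0;
     -k(2)      k(2)+k(3)  -k(3)       0;
      0         -k(3)      k(3)+k(4)  -k(4);
      0          0         -k(4)      k(4)];
f = [0; 0; 0; F];
x = K\f                          % displacements x1..x4 in meters
% Since K is tridiagonal, the Tridiag function above could be used instead.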

Other engineering applications: Electrical Circuits

[circuit figure: a resistor network (5 Ω resistors among others) driven by voltage sources, with fixed node voltages at the two end nodes; applying Kirchhoff's current and voltage laws at the nodes yields a set of simultaneous linear equations in the branch currents (i54, i65, ...)]

Other engineering applications: Mass-spring system (steady state)

The equilibrium force balances assemble into a stiffness matrix, giving a linear system for the unknown displacements.

Matrix condition number

The matrix condition number can be used to estimate the precision of solutions of linear algebraic equations. Cond[A] is obtained by calculating

$$\mathrm{Cond}[A] = \|A\| \cdot \|A^{-1}\|$$

where $\|A\|$ is a matrix norm, for example:

Column-sum norm (1-norm): $\|A\|_1 = \max_{1 \le j \le n} \sum_{i=1}^{n} |a_{ij}|$

Row-sum norm ($\infty$-norm): $\|A\|_\infty = \max_{1 \le i \le n} \sum_{j=1}^{n} |a_{ij}|$

Frobenius norm: $\|A\|_f = \left( \sum_{i=1}^{n} \sum_{j=1}^{n} a_{ij}^2 \right)^{1/2}$

Spectral norm (2-norm): $\|A\|_2 = (\mu_{\max})^{1/2}$, where $\mu_{\max}$ is the largest eigenvalue of $A^T A$

MATLAB built-in function: cond(A, p)
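In MATLAB the different norms correspond to different values of p in cond(A,p); for example (test matrix chosen only for illustration):

A = hilb(3);            % 3x3 Hilbert matrix, a classic ill-conditioned example
c1   = cond(A, 1)       % based on the column-sum norm
cinf = cond(A, inf)     % based on the row-sum norm
cfro = cond(A, 'fro')   % based on the Frobenius norm
c2   = cond(A)          % default: spectral (2-) norm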

Matrix condition number

An ill-conditioned matrix has a very high condition number. It can be shown that

$$\frac{\|\Delta X\|}{\|X\|} \le \mathrm{Cond}[A] \, \frac{\|\Delta A\|}{\|A\|}$$

that is, the relative error in the norm of the computed solution can be very sensitive to the relative error in the norm of the coefficients of [A]. If the coefficients of [A] are known to t-digit precision, the solution [X] may be valid to only about t − log₁₀(Cond[A]) digits.

Example: for the 3×3 Hilbert matrix

$$A = \begin{bmatrix} 1 & 1/2 & 1/3 \\ 1/2 & 1/3 & 1/4 \\ 1/3 & 1/4 & 1/5 \end{bmatrix}$$

cond(A) ≈ 524 (MATLAB's default 2-norm).
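A quick way to see the digit-loss rule in MATLAB (a rough rule of thumb, not an exact count):

A = hilb(3);              % ill-conditioned test matrix
c = cond(A);              % roughly 5e2
digits_lost = log10(c)    % roughly 2.7, i.e. expect to lose about 3 digits
% With ~16 significant digits in double precision, the solution of A*x = b
% can be trusted to roughly 16 - 3 = 13 digits.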