ASSIGNMENT 1 GAUSS ELIMINATION
- Eileen Stafford
SOURCE CODE:

function [] = gauss_elim(A, b)
% Solve A*x = b by Gauss elimination with back substitution.
[m, n] = size(A);
[n, p] = size(b);
f = 0;
if m == n
    if rank([A b]) == rank(A)
        f = 1;
        fprintf('The given system of linear equations is consistent.\n');
    else
        fprintf('The given system of linear equations is inconsistent,\n');
        fprintf('thus no solution.\n');
    end
    if f == 1
        X = zeros(n, p);
        % Forward elimination
        for i = 1:n-1
            mult = -A(i+1:n, i) / A(i, i);               % Here mult is the multiplier
            A(i+1:n, :) = A(i+1:n, :) + mult * A(i, :);
            b(i+1:n, :) = b(i+1:n, :) + mult * b(i, :);
        end
        % Back substitution to find the unknowns
        X(n, :) = b(n, :) / A(n, n);
        for i = n-1:-1:1
            X(i, :) = (b(i, :) - A(i, i+1:n) * X(i+1:n, :)) / A(i, i);
        end
        fprintf('The solution of the given system of equations is:\n');
        disp(X);
        itr = n * (n + 1) / 2;
        fprintf('The number of iterations in which Gauss elimination can be performed is: %d\n', itr);
    end
else
    fprintf('Gauss elimination cannot be done.\n');
end
OUTPUT 1:

>> A = [ ; ; ; ]
>> b = [5; 18; -4; 11]
>> gauss_elim(A, b)
The given system of linear equations is consistent.
The solution of the given system of equations is:
The number of iterations in which Gauss elimination can be performed is: 10
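The same forward-elimination and back-substitution procedure can be sketched in pure Python (an illustrative translation, not part of the original assignment; like the MATLAB routine it uses no pivoting, so it assumes nonzero pivots):

```python
def gauss_elim(A, b):
    """Solve A x = b by Gaussian elimination without pivoting,
    then back substitution (mirrors the MATLAB routine above)."""
    n = len(A)
    A = [row[:] for row in A]   # work on copies
    b = b[:]
    # Forward elimination: zero out entries below each pivot
    for i in range(n - 1):
        for r in range(i + 1, n):
            mult = -A[r][i] / A[i][i]      # the multiplier
            for c in range(n):
                A[r][c] += mult * A[i][c]
            b[r] += mult * b[i]
    # Back substitution
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

print(gauss_elim([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0]))  # -> [0.8, 1.4]
```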
ASSIGNMENT 2 POWER METHOD

SOURCE CODE:

function [] = powereigen(Y0, A, tolr)
% Power method
% To find the largest and smallest eigenvalues of a given square matrix.
% Inputs:
%   Y0   - n x 1 matrix (initial guess)
%   A    - n x n matrix (given matrix)
%   tolr - tolerance

%**************** Largest eigenvalue *****************%
dd = 1;
x = Y0;
n = 10;
while dd > tolr
    y = A * x;
    dd = abs(norm(y) - n);
    n = norm(y);
    x = y ./ n;
end
lvalue = n

%*************** Smallest eigenvalue *****************%
dd = 1;
x = Y0;
n = 10;
while dd > tolr
    y = A \ x;          % inverse iteration: solve A*y = x
    dd = abs(norm(y) - n);
    n = norm(y);
    x = y ./ n;
end
if n == 0
    fprintf('Smallest eigenvalue does not exist.\n');
else
    svalue = 1 / n
end
OUTPUT 2:

>> A = [5 -2 0; 1 2 -3; 1 -2 4]
>> x = [1; 1; 1]
>> tolr = 1e-03
>> powereigen(x, A, tolr)
lvalue =
svalue =
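The dominant-eigenvalue loop can be sketched in pure Python as well (an illustrative translation; it scales by the infinity norm rather than the 2-norm of the MATLAB version, and assumes the iterate A*x never becomes the zero vector):

```python
def power_method(A, x, tol=1e-9, max_iter=500):
    """Largest-magnitude eigenvalue of A by power iteration."""
    n = len(x)
    prev = 0.0
    for _ in range(max_iter):
        # y = A * x
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        lam = max(abs(v) for v in y)     # infinity-norm used as the scale
        x = [v / lam for v in y]         # normalized next iterate
        if abs(lam - prev) < tol:        # eigenvalue estimate has settled
            break
        prev = lam
    return lam, x

lam, _ = power_method([[2.0, 0.0], [0.0, 5.0]], [1.0, 1.0])
print(round(lam, 6))  # -> 5.0
```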
ASSIGNMENT 3 CURVE FITTING

SOURCE CODE:

% For the given set of points (X, Y), fit the points to a linear /
% quadratic / cubic polynomial on user input and plot the points
% together with the polynomial.
clc; clear all;

% Inputs:
X = [0.02; 0.03; 0.04; 0.05; 0.06; 0.07; 0.08; 0.09; 0.10; 0.11; 0.13; 0.15; 0.17];
Y = [0.97; 0.70; 0.58; 0.495; 0.42; 0.37; 0.33; 0.30; 0.28; 0.26; 0.24; 0.24; 0.225];
a = zeros(13, 1);
f = [0, 0.01, 0.12, 0.14, 0.16];   % points at which to evaluate the fit

% Power sums and moment sums for the normal equations
s  = sum(X);
s2 = sum(X.^2);
s3 = sum(X.^3);
s4 = sum(X.^4);
s5 = sum(X.^5);
s6 = sum(X.^6);
t  = sum(Y);
t1 = sum(X.*Y);
t2 = sum(X.^2 .* Y);
t3 = sum(X.^3 .* Y);

disp('Welcome');
disp('1. Linear');
disp('2. Quadratic');
disp('3. Cubic');
disp('4. Exit');
ch = input('Choose from above - ');
disp(' ');
while ch ~= 4
    if ch == 1
        A = [13, s; s, s2];
        B = [t; t1];
        sol = linsolve(A, B);
        plot(X, Y, '--rs')
        hold on
        for i = 1:13
            a(i) = sol(1) + sol(2) * X(i);
        end
        plot(X, a)
        hold off
        disp('Linear - ');
        for i = 1:5
            temp = sol(1) + sol(2) * f(i);
            fprintf('y(%g) = %g\n', f(i), temp);
        end
    elseif ch == 2
        A = [13, s, s2; s, s2, s3; s2, s3, s4];
        B = [t; t1; t2];
        sol = linsolve(A, B);
        plot(X, Y, '--rs')
        hold on
        for i = 1:13
            a(i) = sol(1) + sol(2)*X(i) + sol(3)*X(i)^2;
        end
        plot(X, a)
        hold off
        disp('Quadratic - ');
        for i = 1:5
            temp = sol(1) + sol(2)*f(i) + sol(3)*f(i)^2;
            fprintf('y(%g) = %g\n', f(i), temp);
        end
    elseif ch == 3
        A = [13, s, s2, s3; s, s2, s3, s4; s2, s3, s4, s5; s3, s4, s5, s6];
        B = [t; t1; t2; t3];
        sol = linsolve(A, B);
        plot(X, Y, '--rs')
        hold on
        for i = 1:13
            a(i) = sol(1) + sol(2)*X(i) + sol(3)*X(i)^2 + sol(4)*X(i)^3;
        end
        plot(X, a)
        hold off
        disp('Cubic - ');
        for i = 1:5
            temp = sol(1) + sol(2)*f(i) + sol(3)*f(i)^2 + sol(4)*f(i)^3;
            fprintf('y(%g) = %g\n', f(i), temp);
        end
    else
        disp('Incorrect input.');
    end
    disp(' ');
    disp('1. Linear');
    disp('2. Quadratic');
    disp('3. Cubic');
    disp('4. Exit');
    ch = input('Choose from above - ');
    disp(' ');
end
disp('Thank you');

OUTPUT 3:

Linear -
y(0) =
y(0.01) =
y(0.12) =
y(0.14) =
y(0.16) =

Quadratic -
y(0) =
y(0.01) =
y(0.12) =
y(0.14) =
y(0.16) =

Cubic -
y(0) =
y(0.01) =
y(0.12) =
y(0.14) =
y(0.16) =
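The normal-equation setup above (power sums s, s2, ... on one side, moment sums t, t1, ... on the other) can be checked for the linear case in pure Python; this is an illustrative sketch solving the same 2x2 system [n, s; s, s2]*c = [t; t1] by hand:

```python
def linear_fit(xs, ys):
    """Least-squares line y = c0 + c1*x via the 2x2 normal equations,
    solved directly with Cramer's rule."""
    n = len(xs)
    s = sum(xs)
    s2 = sum(x * x for x in xs)          # power sums
    t = sum(ys)
    t1 = sum(x * y for x, y in zip(xs, ys))  # moment sums
    det = n * s2 - s * s
    c0 = (t * s2 - s * t1) / det
    c1 = (n * t1 - s * t) / det
    return c0, c1

# Points lying exactly on y = 1 + 2x recover those coefficients
c0, c1 = linear_fit([0.0, 1.0, 2.0], [1.0, 3.0, 5.0])
print(round(c0, 6), round(c1, 6))  # -> 1.0 2.0
```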
ASSIGNMENT 4 GRAM SCHMIDT

SOURCE CODE:

% The Gram-Schmidt process is a method for orthogonalising
% a set of vectors in an inner product space.
% Input:
%   A    - input matrix
%   m, n - size of matrix A
% Output:
%   R    - matrix obtained by the Gram-Schmidt method
clear all, close all, clc;

% Input:
A = input('Enter matrix you want to apply Gram Schmidt: ');
[m, n] = size(A);
R = zeros(m, n);
Q = A;

% Solution
for j = 1:n
    R(:, j) = Q(:, j);
    if j > 1
        for i = 1:j-1
            % Subtract the projection of column j onto each earlier column
            R(:, j) = R(:, j) - sum(Q(:, j) .* R(:, i)) * R(:, i) / norm(R(:, i))^2;
        end
    end
end

% Output
disp('Matrix obtained by Gram Schmidt method is:');
disp(' ');
disp(R);

OUTPUT 4:

Enter matrix you want to apply Gram Schmidt: [1 2 3 ; ; 0 -2 3]
Matrix obtained by Gram Schmidt method is:
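The projection step above, R(:,j) = R(:,j) - (Q(:,j).R(:,i)) R(:,i) / ||R(:,i)||^2, translates directly to pure Python (an illustrative sketch of classical Gram-Schmidt; like the MATLAB script it orthogonalizes without normalizing):

```python
def gram_schmidt(cols):
    """Orthogonalize a list of vectors by classical Gram-Schmidt:
    subtract from each vector its projection onto every earlier one."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    R = []
    for q in cols:
        r = q[:]
        for ri in R:
            coef = dot(q, ri) / dot(ri, ri)       # projection coefficient
            r = [a - coef * b for a, b in zip(r, ri)]
        R.append(r)
    return R

R = gram_schmidt([[1.0, 0.0], [1.0, 1.0]])
print(R)  # -> [[1.0, 0.0], [0.0, 1.0]]
```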
ASSIGNMENT 5.1 GAUSS ELIMINATION

SOURCE CODE: (same routine as Assignment 1)

function [] = gauss_elim(A, b)
% Solve A*x = b by Gauss elimination with back substitution.
[m, n] = size(A);
[n, p] = size(b);
f = 0;
if m == n
    if rank([A b]) == rank(A)
        f = 1;
        fprintf('The given system of linear equations is consistent.\n');
    else
        fprintf('The given system of linear equations is inconsistent,\n');
        fprintf('thus no solution.\n');
    end
    if f == 1
        X = zeros(n, p);
        % Forward elimination
        for i = 1:n-1
            mult = -A(i+1:n, i) / A(i, i);               % Here mult is the multiplier
            A(i+1:n, :) = A(i+1:n, :) + mult * A(i, :);
            b(i+1:n, :) = b(i+1:n, :) + mult * b(i, :);
        end
        % Back substitution to find the unknowns
        X(n, :) = b(n, :) / A(n, n);
        for i = n-1:-1:1
            X(i, :) = (b(i, :) - A(i, i+1:n) * X(i+1:n, :)) / A(i, i);
        end
        fprintf('The solution of the given system of equations is:\n');
        disp(X);
        itr = n * (n + 1) / 2;
        fprintf('The number of iterations in which Gauss elimination can be performed is: %d\n', itr);
    end
else
    fprintf('Gauss elimination cannot be done.\n');
end
OUTPUT 5.1:

>> A = [ ; ; ; ]
>> b = [5; 18; -4; 11]
>> gauss_elim(A, b)
The given system of linear equations is consistent.
The solution of the given system of equations is:
The number of iterations in which Gauss elimination can be performed is: 10
ASSIGNMENT 5.2 LU DECOMPOSITION

SOURCE CODE:

function [] = LU(A, b)
% Solve A*x = b by LU decomposition with forward and backward substitution.
[m, n] = size(A);
[n, p] = size(b);
X = zeros(n, p);
Y = zeros(n, p);
L = eye(n);
f = 0;
if m == n
    if rank([A b]) == rank(A)
        f = 1;
        fprintf('The given system of linear equations is consistent.\n');
    else
        fprintf('The given system of linear equations is inconsistent,\n');
        fprintf('thus no solution.\n');
    end
    if f == 1
        for i = 1:n-1
            mult = -A(i+1:n, i) / A(i, i);
            L(i+1:n, i) = -mult;
            A(i+1:n, :) = A(i+1:n, :) + mult * A(i, :);
        end
        U = A;
        fprintf('The unit lower triangular matrix "L" is:\n');
        disp(L);
        fprintf('The upper triangular matrix "U" is:\n');
        disp(U);
        % L*U*X = b; with U*X = Y this gives L*Y = b.
        % Find Y from L*Y = b by forward substitution
        Y(1, :) = b(1, :);
        for i = 2:n
            Y(i, :) = b(i, :) - L(i, 1:i-1) * Y(1:i-1, :);
        end
        % Find the solution X of the given system from U*X = Y
        % by backward substitution
        X(n, :) = Y(n, :) / U(n, n);
        for i = n-1:-1:1
            X(i, :) = (Y(i, :) - U(i, i+1:n) * X(i+1:n, :)) / U(i, i);
        end
        fprintf('Hence the solution of the given system of equations is:\n');
        disp(X);
        itr = n * (n + 1) / 2 + n;
        fprintf('The number of iterations in which LU decomposition can be performed is: %d\n', itr);
    end
end

OUTPUT 5.2:

>> A = [ ; ; ; ]
>> b = [5; 18; -4; 11]
>> LU(A, b)
The given system of linear equations is consistent.
The unit lower triangular matrix "L" is:
The upper triangular matrix "U" is:
Hence the solution of the given system of equations is:
The number of iterations in which LU decomposition can be performed is: 14
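The LU pipeline (store the multipliers in L, reduce A to U, then forward- and back-substitute) can be sketched in pure Python as well; this illustrative Doolittle factorization without pivoting follows the same three steps:

```python
def lu_solve(A, b):
    """Solve A x = b via Doolittle LU decomposition (no pivoting),
    then forward substitution L y = b and back substitution U x = y."""
    n = len(A)
    U = [row[:] for row in A]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    # Eliminate below the diagonal, storing each multiplier in L
    for i in range(n - 1):
        for r in range(i + 1, n):
            m = U[r][i] / U[i][i]
            L[r][i] = m
            for c in range(n):
                U[r][c] -= m * U[i][c]
    # Forward substitution: L y = b
    y = [0.0] * n
    for i in range(n):
        y[i] = b[i] - sum(L[i][j] * y[j] for j in range(i))
    # Back substitution: U x = y
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

print(lu_solve([[4.0, 3.0], [6.0, 3.0]], [10.0, 12.0]))  # -> [1.0, 2.0]
```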
ASSIGNMENT 5.3 SOR METHOD

SOURCE CODE:

function [] = sor(A, b, N)
% Successive over-relaxation for A*x = b, at most N iterations.
n = size(A, 1);
% Splitting matrix A into the three matrices D, L and U
D = diag(diag(A));
L = tril(-A, -1);
U = triu(-A, 1);
Tj = inv(D) * (L + U);              % Jacobi iteration matrix
rho_Tj = max(abs(eig(Tj)));         % spectral radius of Tj
w = 2 / (1 + sqrt(1 - rho_Tj^2));   % optimal over-relaxation parameter
disp('w =');
disp(w);
disp('The rate of convergence is:');
disp(-log10(w - 1));
Tw = inv(D - w*L) * ((1 - w)*D + w*U);  % SOR iteration matrix
cw = w * inv(D - w*L) * b;              % constant vector needed for iterations
tol = 1e-05;
k = 1;
x = zeros(n, 1);                        % starting vector
while k <= N
    x(:, k+1) = Tw * x(:, k) + cw;
    if norm(x(:, k+1) - x(:, k)) < tol
        disp('The procedure was successful.');
        disp('Condition ||x^(k+1) - x^(k)|| < tol was met after k iterations, k =');
        disp(k);
        disp('x = ');
        disp(x(:, k+1));
        break
    end
    k = k + 1;
end
if k > N && norm(x(:, end) - x(:, end-1)) > tol
    disp('Maximum number of iterations reached without satisfying condition:');
    disp('||x^(k+1) - x^(k)|| < tol');
    disp(tol);
    disp('Please examine the sequence of iterates.');
    disp('In case you observe convergence, increase the maximum number of iterations.');
    disp('In case of divergence, the matrix may not be diagonally dominant.');
end
fprintf('\n\nThus, the solution of the given system of equations by the SOR method is:\n');
disp(x(:, end));

OUTPUT 5.3:

>> A = [3 2 0; 2 3 -1; 0 -1 2]
>> b = [4.5; 5; -0.5]
>> N = 3
>> sor(A, b, N)
w =
The rate of convergence is:
Maximum number of iterations reached without satisfying condition:
||x^(k+1) - x^(k)|| < tol
1.0000e-05
Please examine the sequence of iterates.
In case you observe convergence, increase the maximum number of iterations.
In case of divergence, the matrix may not be diagonally dominant.
Thus, the solution of the given system of equations by the SOR method is:
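The SOR update itself can be written componentwise instead of in matrix form: each x_i is relaxed toward its Gauss-Seidel value by a factor w. A pure-Python sketch with a fixed, hand-picked w (unlike the MATLAB script, which derives the optimal w from the Jacobi spectral radius):

```python
def sor(A, b, w=1.25, tol=1e-10, max_iter=100):
    """Successive over-relaxation for A x = b with fixed relaxation
    parameter w, starting from the zero vector."""
    n = len(A)
    x = [0.0] * n
    for _ in range(max_iter):
        x_old = x[:]
        for i in range(n):
            # Gauss-Seidel sum, using already-updated components of x
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (1 - w) * x[i] + w * (b[i] - s) / A[i][i]
        if max(abs(x[i] - x_old[i]) for i in range(n)) < tol:
            break
    return x

# 4x + y = 1, x + 3y = 2 has the exact solution (1/11, 7/11)
x = sor([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
print([round(v, 6) for v in x])  # -> [0.090909, 0.636364]
```

For a symmetric positive definite matrix, any 0 < w < 2 converges; w = 1 reduces the update to plain Gauss-Seidel.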
More informationMath 1314 Week #14 Notes
Math 3 Week # Notes Section 5.: A system of equations consists of two or more equations. A solution to a system of equations is a point that satisfies all the equations in the system. In this chapter,
More informationFinite Mathematics Chapter 2. where a, b, c, d, h, and k are real numbers and neither a and b nor c and d are both zero.
Finite Mathematics Chapter 2 Section 2.1 Systems of Linear Equations: An Introduction Systems of Equations Recall that a system of two linear equations in two variables may be written in the general form
More informationMath Introduction to Numerical Analysis - Class Notes. Fernando Guevara Vasquez. Version Date: January 17, 2012.
Math 5620 - Introduction to Numerical Analysis - Class Notes Fernando Guevara Vasquez Version 1990. Date: January 17, 2012. 3 Contents 1. Disclaimer 4 Chapter 1. Iterative methods for solving linear systems
More informationFINAL (CHAPTERS 7-9) MATH 141 SPRING 2018 KUNIYUKI 250 POINTS TOTAL
Math 141 Name: FINAL (CHAPTERS 7-9) MATH 141 SPRING 2018 KUNIYUKI 250 POINTS TOTAL Show all work, simplify as appropriate, and use good form and procedure (as in class). Box in your final answers! No notes
More informationLinear System of Equations
Linear System of Equations Linear systems are perhaps the most widely applied numerical procedures when real-world situation are to be simulated. Example: computing the forces in a TRUSS. F F 5. 77F F.
More informationCompanion. Jeffrey E. Jones
MATLAB7 Companion 1O11OO1O1O1OOOO1O1OO1111O1O1OO 1O1O1OO1OO1O11OOO1O111O1O1O1O1 O11O1O1O11O1O1O1O1OO1O11O1O1O1 O1O1O1111O11O1O1OO1O1O1O1OOOOO O1111O1O1O1O1O1O1OO1OO1OO1OOO1 O1O11111O1O1O1O1O Jeffrey E.
More informationJACOBI S ITERATION METHOD
ITERATION METHODS These are methods which compute a sequence of progressively accurate iterates to approximate the solution of Ax = b. We need such methods for solving many large linear systems. Sometimes
More informationNumerical Analysis Solution of Algebraic Equation (non-linear equation) 1- Trial and Error. 2- Fixed point
Numerical Analysis Solution of Algebraic Equation (non-linear equation) 1- Trial and Error In this method we assume initial value of x, and substitute in the equation. Then modify x and continue till we
More informationThe following steps will help you to record your work and save and submit it successfully.
MATH 22AL Lab # 4 1 Objectives In this LAB you will explore the following topics using MATLAB. Properties of invertible matrices. Inverse of a Matrix Explore LU Factorization 2 Recording and submitting
More informationMath 1080: Numerical Linear Algebra Chapter 4, Iterative Methods
Math 1080: Numerical Linear Algebra Chapter 4, Iterative Methods M. M. Sussman sussmanm@math.pitt.edu Office Hours: MW 1:45PM-2:45PM, Thack 622 March 2015 1 / 70 Topics Introduction to Iterative Methods
More informationNumerical Methods for Chemical Engineers
Numerical Methods for Chemical Engineers Chapter 3: System of Linear Algebraic Equation Morteza Esfandyari Email: Esfandyari.morteza@yahoo.com Mesfandyari.mihanblog.com Page 4-1 System of Linear Algebraic
More information