SyDe312 (Winter 2005) Unit 1 - Solutions (continued)


SyDe312 (Winter 2005) Unit 1 - Solutions (continued)
March 2005

Chapter 6 - Linear Systems

Problem 6.6.b
Iterative solution by the Jacobi and Gauss-Seidel iteration methods. Given a right-hand side b = [b1, b2, b3]^T and an initial guess x^(0), the system

9x1 + x2 + x3 = b1
2x1 + 10x2 + 3x3 = b2
3x1 + 4x2 + 11x3 = b3

is rearranged by solving each equation for its diagonal unknown:

x1 = (1/9)(b1 - x2 - x3)
x2 = (1/10)(b2 - 2x1 - 3x3)
x3 = (1/11)(b3 - 3x1 - 4x2)

Jacobi method:

x1^(k+1) = (1/9)(b1 - x2^(k) - x3^(k))
x2^(k+1) = (1/10)(b2 - 2x1^(k) - 3x3^(k))
x3^(k+1) = (1/11)(b3 - 3x1^(k) - 4x2^(k))

Gauss-Seidel method:

x1^(k+1) = (1/9)(b1 - x2^(k) - x3^(k))
x2^(k+1) = (1/10)(b2 - 2x1^(k+1) - 3x3^(k))
x3^(k+1) = (1/11)(b3 - 3x1^(k+1) - 4x2^(k+1))
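As an aid for reproducing iterations like the ones tabulated below, here is a minimal MATLAB sketch of both sweeps. It is an illustrative version written for these notes, not the code used to produce the original tables; the function name sweep_solve and the inputs A, b, x0, tol, maxit are placeholders.

function x = sweep_solve(A, b, x0, tol, maxit, method)
% Jacobi / Gauss-Seidel sweeps for Ax = b (assumes nonzero diagonal).
n = length(b); x = x0;
for k = 1:maxit
    xold = x;
    for i = 1:n
        j = [1:i-1, i+1:n];            % every index except i
        if strcmp(method, 'jacobi')
            s = A(i,j)*xold(j);        % Jacobi: only step-k values
        else
            s = A(i,j)*x(j);           % Gauss-Seidel: newest values
        end
        x(i) = (b(i) - s)/A(i,i);
    end
    if norm(x - xold, inf) <= tol, return; end
end
end

With method set to Gauss-Seidel, components updated earlier in the same sweep are reused immediately, which is exactly the difference between the two sets of formulas above.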

For example, beginning with the initial guess x^(0) = [0, 0, 0]^T, the Jacobi method gives

x1^(1) = (1/9)(b1 - 0 - 0) = b1/9
x2^(1) = (1/10)(b2 - 0 - 0) = b2/10
x3^(1) = (1/11)(b3 - 0 - 0) = b3/11

and the Gauss-Seidel method gives

x1^(1) = (1/9)(b1 - 0 - 0) = b1/9
x2^(1) = (1/10)(b2 - 2x1^(1) - 0)
x3^(1) = (1/11)(b3 - 3x1^(1) - 4x2^(1))

Iterating this procedure until the error ‖x - x^(k)‖ drops below the stopping tolerance, we get the results summarized below.

Table 1: Problem 6.6.b, Jacobi iteration (columns: k, x1^k, x2^k, x3^k, error, ratio; the error decreases by a roughly constant ratio each step, down to the order of 10^-5).

Table 2: Problem 6.6.b, Gauss-Seidel iteration (same columns; the stopping tolerance is reached in fewer iterations than with Jacobi).
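The error and ratio columns of such tables can be formed directly from the true solution. A small sketch, where X (the stored iterates, one column per step) and xs (the true solution as a column vector) are names introduced here:

% Error and Ratio columns from stored iterates X (n-by-nsteps) and true xs.
err = max(abs(X - xs*ones(1, size(X,2))), [], 1);  % infinity-norm errors
ratio = err(2:end) ./ err(1:end-1);                % successive error ratios

The ratio column settling near a constant is what the tables use to estimate the convergence rate.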

Problem 6.6.3
In case a, the matrix (with off-diagonal entries 4) is not diagonally dominant and ‖M‖ > 1; the iteration methods do not converge.

Table 3: Problem 6.6.3a, Jacobi iteration (columns: k, x1^k, x2^k; the iterates grow without bound).

Table 4: Problem 6.6.3a, Gauss-Seidel iteration (same columns; the iterates grow faster still).

So the Gauss-Seidel method diverges more rapidly.

In case b, the matrix (again with off-diagonal entries 4) is diagonally dominant, and both methods converge.

Table 5: Problem 6.6.3b, Jacobi iteration (columns: k, x1^k, x2^k, error; the error decreases to the order of 10^-6).

Table 6: Problem 6.6.3b, Gauss-Seidel iteration (same columns).

So the Gauss-Seidel method converges faster.

Problem 6.6-4a
For the given A, b, and initial guess x^(0). [The following solutions use the notation of the textbook for the iteration matrices etc. You can easily translate to relate these to the matrices defined in the lecture notes.]

Recall that in Jacobi,

N = diag(a11, ..., ann),   P = N - A,

and in Gauss-Seidel:

N is the lower-triangular part of A,

N = [a11 0 ... 0; a21 a22 ... 0; ... ; an1 an2 ... ann],   P = N - A,

and M = N^(-1) P.

Table 7: Jacobi iteration (columns: k, x1^k, x2^k, x3^k, x4^k, error, ratio; the ratio column levels off near 0.5).

Here the 4x4 iteration matrix satisfies ‖M‖ = 0.5. This is consistent with the ratio column in the above table.

Problem 6.6-4b
M = N^(-1) P is again a 4x4 matrix, with entries formed from the fractions 1/4, 1/6, 1/3, and 1/8, and again ‖M‖ = 0.5.

Table 8: Problem 6.6-4b, Gauss-Seidel iteration (columns: k, x1^k, x2^k, x3^k, x4^k, error, ratio).

The actual convergence rate is better than 0.5. Keep in mind that ‖M‖ gives an upper bound on the convergence or divergence rate.
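Such bounds can be checked numerically by assembling M for either splitting and taking its infinity norm. A sketch using generic MATLAB built-ins, where A stands for whichever coefficient matrix is under study:

% Infinity-norm bounds for the Jacobi and Gauss-Seidel iteration matrices.
D  = diag(diag(A));               % diagonal part of A
Lo = tril(A, -1);                 % strictly lower part
Up = triu(A, 1);                  % strictly upper part
M_jac = -inv(D)*(Lo + Up);        % Jacobi: N = D, P = N - A
M_gs  = -inv(D + Lo)*Up;          % Gauss-Seidel: N = D + L, P = N - A
[norm(M_jac, inf), norm(M_gs, inf)]   % each bounds its observed error ratio

An observed ratio column smaller than these norms is expected, since ‖M‖ only bounds the rate from above.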

Problem
Since det(A) ≠ 0 for the given coefficient matrix, the system Ax = b has a unique solution x = [x1, x2, x3]^T for any right-hand side vector b.

(1) Jacobi method: N is the diagonal part of A, N^(-1) contains the reciprocals of the diagonal entries, and P = N - A. Then M = N^(-1) P, and since ‖M‖ < 1, the method converges.

(2) Gauss-Seidel method: N is the lower-triangular part of A, P = N - A, and M = N^(-1) P. Again ‖M‖ < 1, so the method converges.

Problem
Consider A_n, the n x n tridiagonal matrix with 2 on the diagonal and -1 on the sub- and superdiagonals. The iteration method N x^(k+1) = b + P x^(k) will converge, for all vectors b and all initial guesses x^(0), if and only if all eigenvalues λ of M = N^(-1) P satisfy |λ| < 1.

(1) Jacobi method
For n = 5:

n = 5;
A = [2,-1,0,0,0; -1,2,-1,0,0; 0,-1,2,-1,0; 0,0,-1,2,-1; 0,0,0,-1,2];
for i = 1:n
    for j = 1:n
        N_Jacobi(i,j) = (i==j)*A(i,j);   % keep only the diagonal of A
    end
end
P = N_Jacobi - A;
M = inv(N_Jacobi)*P;
lambda = eig(M)

Here N = 2I, P = N - A has 1 on the sub- and superdiagonals and zeros elsewhere, and M = N^(-1) P = (1/2)P. The eigenvalues of M, using eig(M), are:

λ = ±0.866, ±0.5, and 0.

Likewise for n = 10, the eigenvalues of M are:

λ = ±0.9595, ±0.8413, ±0.6549, ±0.4154, ±0.1423.

For n = 20, the eigenvalues of M are:

λ = ±0.9888, ±0.9556, ±0.9010, ±0.8262, ±0.7331, ±0.6235, ±0.5000, ±0.3653, ±0.2225, ±0.0747.

We see that for n = 5, 10, 20, all |λi| < 1, so the Jacobi method will converge.
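These Jacobi eigenvalues match the classical closed form for this tridiagonal family, λk = cos(kπ/(n+1)), k = 1, ..., n (a standard fact quoted here for checking purposes; it is not part of the original solution). A sketch of the comparison:

% Compare eig(M) with cos(k*pi/(n+1)) for A_n = tridiag(-1, 2, -1).
n = 10;
A = 2*eye(n) - diag(ones(n-1,1), 1) - diag(ones(n-1,1), -1);
D = diag(diag(A));
M = -inv(D)*(A - D);                               % Jacobi iteration matrix
gap = max(abs(sort(eig(M)) - sort(cos((1:n)'*pi/(n+1)))))
% gap should be on the order of machine precision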

(2) Gauss-Seidel method
Again for n = 5, we have:

for i = 1:n
    for j = 1:n
        N_GS(i,j) = (i>=j)*A(i,j);   % keep the lower triangle of A
    end
end
P = N_GS - A;
M = inv(N_GS)*P;
lambda = eig(M)

The eigenvalues of M, using eig(M), are:

λ = 0.75, 0.25, and 0 (three times).

Likewise for n = 10, the eigenvalues of M are:

λ = 0.9206, 0.7077, 0.4288, 0.1726, 0.0203, and 0 (five times).

For n = 20, the eigenvalues of M are:

λ = 0.9778, 0.9131, 0.8117, 0.6827, 0.5374, 0.3887, 0.2500, 0.1335, 0.0495, with the remaining eigenvalues at or near zero (eig returns several of them as small real or complex-conjugate values, a rounding artifact of the repeated zero eigenvalue).

So for n = 5, 10, 20, all |λi| < 1, and the Gauss-Seidel method will converge as well.

Problem
Note: this is slightly different from the method described in the lecture notes, but equivalent. Be careful with the sign of the correction vector.

The residual correction method is applied to the perturbed system A_ε x = b, where A_ε = A + εB for the given matrices A and B. Let N = A and P = N - A_ε = -εB, so that

M = N^(-1) P = -ε A^(-1) B,    ‖M‖ = ε ‖A^(-1) B‖.

In order to ensure the convergence of the residual correction method, this quantity α = ε ‖A^(-1) B‖ must satisfy α < 1.
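Written as an update, each step solves with the unperturbed matrix N = A. A minimal sketch for these notes, in which A, B, b, the perturbation size ep, the tolerance tol, and maxit are placeholder inputs (the original problem data did not survive):

% Residual correction iteration for (A + ep*B)x = b:
% N = A, P = -ep*B, so each step solves  A*xnew = b - ep*B*x.
x = zeros(size(b));                  % initial guess (an assumption)
for k = 1:maxit
    xnew = A \ (b - ep*B*x);         % one correction step
    if norm(xnew - x, inf) <= tol, break; end
    x = xnew;
end

Note the sign: P = -εB enters with a minus, which is the point of the caution above.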

Table 9: residual correction iteration for the first (smallest) value of ε (columns: k, x1^k, x2^k, x3^k, error, ratio).

In this case, we are not given the vector b, but the true answer x. Starting from the given initial guess x^(0), the procedure is repeated until ‖x - x^(k)‖ ≤ 10^-5.

Table 10: residual correction iteration for the second value of ε (same columns; more iterations are needed than in Table 9).

The number of iterations increases as ε increases, consistent with expectations.

Problem 7. - b
For the matrix

[-2  36]
[36 -23]

f(λ) = λ^2 + 25λ - 1250 = (λ - 25)(λ + 50),   so λ1 = 25, λ2 = -50.

Using (A - λI)v = 0, we get:
An eigenvector corresponding to λ1 = 25: v^(1) = [2, 1.5]^T (a multiple of [4, 3]^T)

Table 11: residual correction iteration with ε = 0.5 (columns: k, x1^k, x2^k, x3^k, error, ratio).

Table 12: residual correction iteration with ε = 0.8 (same columns; convergence is noticeably slower for this larger ε).

An eigenvector corresponding to λ2 = -50: v^(2) = [1.5, -2]^T (a multiple of [3, -4]^T)

Problem 7. -
For the given symmetric matrix A, the eigenvalues λ1, λ2, λ3 and corresponding orthonormal eigenvectors v^(1), v^(2), v^(3) are computed, and U can be taken as

U = [v^(1), v^(2), v^(3)].

Each column in U is one of the eigenvectors of A. The eigenvectors are normalized to get the columns of U, which is supposed to be an orthogonal matrix. We get

U^T A U = D = diag(λ1, λ2, λ3)   and   U^T U = U U^T = I.

A = [-7, 3, -6; 3, -, 3; -6, 3, -7];
[U, lam] = eig(A)

The result returns the same eigenvectors, possibly in a different order (for example U = [v^(2), v^(3), v^(1)]) and possibly with opposite signs. Run the command

U'*A*U

in order to see the diagonal matrix D of eigenvalues,

14 and also the commands U *U and U*U, in order to see U T U = UU T = I. Problem [ ] For, eigenvalues and eigenvectors are: f(λ) = λ λ + = (λ ) : λ =, v () = [, ] T λ =, v () = [, ] T For [ ], e-values and e-vectors are: f(λ) = λ λ + = (λ ) : λ =, v () = [, ] T In case b, this eigenvector and its multiples are the only eigenvectors of the matrix. It does NOT contradict Theorem 7..4, because the matrix is not symmetric. 4
