Homework 5 MA/CS 375, Fall 2005 Due December 1


1. (2 pts) In this problem we explore the volume of a random parallelepiped in 4-dimensional space whose sides are unit vectors, and its connection to the condition number. Using a well-known result of analytic geometry, the volume of a parallelepiped is found as the (absolute value of the) determinant of the matrix formed by the vectors describing all of its sides originating at a given (any) vertex. Your program should:

(a) Generate 4 random column vectors of size 4.

(b) Normalize these vectors so their length is unity.

(c) Compute the volume of the parallelepiped they define. You must use the LU factorization for computing the determinant of the matrix A whose columns are the 4 random vectors (that is, DO NOT use the command det(A) here!).

(d) Compute the condition number of A, κ(A).

(e) Repeat the process many times and produce a scatter plot of det(A) vs. κ(A). What are the maximum and minimum values of det(A)? What is the condition number for these values? How can you explain the extreme values of the condition number and the corresponding values of the determinant?

Solutions:

(a) >> A=rand(4);

(b) >> A=A*diag(1./sqrt(sum(A.^2)));

(c) >> [L,U]=lu(A); vol=abs(prod(diag(U)));

Since L is unit lower triangular (up to a row permutation), det(A) = ±prod(diag(U)), so the absolute value of the product of the diagonal of U gives the volume.

(d) >> kappa=cond(A)

(e) The experiment is repeated N times in a loop (N = 1000 is used here as a representative count):

>> N=1000;                 % number of repetitions (assumed value)
>> d=zeros(N,1); kappa=d;
>> for j=1:N
     A=rand(4); A=A*diag(1./sqrt(sum(A.^2)));
     [L,U]=lu(A);
     d(j)=prod(diag(U)); kappa(j)=cond(A);
   end
>> loglog(abs(d),kappa,'.')
>> [dmin,mindex]=min(abs(d)); kappamin=kappa(mindex);
>> [dmax,maxdex]=max(abs(d)); kappamax=kappa(maxdex);

The minimum determinant (in absolute value) was about 8.269e-05 and the corresponding condition number was about 3.7498e+03. The maximum determinant was about 0.4247 and the corresponding condition number was about 3.628.

[Figure: log-log scatter plot of κ(A) versus |det(A)|.]

Using polyfit to find the regression line, we get that log κ(A) ≈ −log(|det(A)|) + constant, i.e. κ(A) ∝ 1/|det(A)|.
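The slope of the regression line mentioned above can be computed directly from the logged data; a minimal sketch, assuming the vectors d and kappa produced by the loop above are still in the workspace:

% Fit a straight line to log10(kappa) versus log10(|det|); a slope near -1
% supports the relation kappa(A) ~ 1/|det(A)| seen in the scatter plot.
p = polyfit(log10(abs(d)), log10(kappa), 1);
fprintf('fitted slope = %.3f, intercept = %.3f\n', p(1), p(2));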

Therefore, when the determinant becomes small, the condition number becomes large.

2. (2 pts) Given an n-vector x of distinct elements, x_i ≠ x_j for i ≠ j, the Cauchy matrix C(x) is the n × n matrix whose ij-th entry is

    c_ij = 1/(x_i + x_j).

Write a program which generates Cauchy matrices of size n = [4 8 12 16] with random input vectors. Compute the condition numbers of these matrices. In each case set z = ones(n,1) and compute b = Cz. Use the built-in Matlab functions to solve Cz = b. Denote the computed solution by ẑ. Compute the relative errors and the relative residuals:

    ||z − ẑ|| / ||z||,    ||b − Cẑ|| / ||b||.

Comment on the results. Can you make sense of them given the condition numbers you found?

Solution:

function C=cauchymatrix(N)
x=rand(N,1);
C=1./(repmat(x,1,N)+repmat(x',N,1));

The condition numbers of the randomly generated Cauchy matrices are

>> C4=cauchymatrix(4); C8=cauchymatrix(8); C12=cauchymatrix(12); C16=cauchymatrix(16);
>> K=[cond(C4) cond(C8) cond(C12) cond(C16)]
K = 3.2794e+03   …   …   …

The exact solutions and right-hand sides are constructed using

>> z4=ones(4,1);   b4=C4*z4;    z4h=C4\b4;
>> z8=ones(8,1);   b8=C8*z8;    z8h=C8\b8;
>> z12=ones(12,1); b12=C12*z12; z12h=C12\b12;
>> z16=ones(16,1); b16=C16*z16; z16h=C16\b16;

Matlab returns singular-matrix warnings for the 12 × 12 and 16 × 16 matrices. The relative errors and relative residuals are

>> re4=norm(z4-z4h)/norm(z4);      rr4=norm(b4-C4*z4h)/norm(b4);
>> re8=norm(z8-z8h)/norm(z8);      rr8=norm(b8-C8*z8h)/norm(b8);
>> re12=norm(z12-z12h)/norm(z12);  rr12=norm(b12-C12*z12h)/norm(b12);
>> re16=norm(z16-z16h)/norm(z16);  rr16=norm(b16-C16*z16h)/norm(b16);
>> RE=[re4 re8 re12 re16]
RE = 9.76e-13   …   …   …
>> RR=[rr4 rr8 rr12 rr16]
RR = 2.33e-16   2.26e-16   4.97e-16   …

The relative residual is O(ε_m) in every case, while the relative error is bounded above by κ(C) ε_m:

>> eps*K./RE

Therefore, for a given machine epsilon, one can determine the worst possible relative error in solving the linear system by using the matrix condition number as a bound.
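The comparison between the observed error and the bound κ(C) ε_m can also be made in a single loop; a minimal sketch, assuming the cauchymatrix function defined above:

% For each size, compare the observed relative error with the bound
% cond(C)*eps suggested by the analysis above.
for n = [4 8 12 16]
    C = cauchymatrix(n);              % random Cauchy matrix
    z = ones(n,1);  b = C*z;          % exact solution and right-hand side
    zh = C\b;                         % computed solution
    relerr = norm(z-zh)/norm(z);
    relres = norm(b-C*zh)/norm(b);
    fprintf('n=%2d  cond=%9.2e  relerr=%9.2e  relres=%9.2e  bound=%9.2e\n', ...
            n, cond(C), relerr, relres, cond(C)*eps);
end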

3. (3 pts) In this problem we solve the heat equation using a second-order accurate method for timestepping. Consider the heat equation

    ∂u/∂t = ∂²u/∂x²,   −1 ≤ x ≤ 1,  0 ≤ t ≤ T,

with boundary conditions

    u(x = −1, t) = u(x = 1, t) = 0,

and initial condition

    u(x, t = 0) = sin(πrx),   −1 ≤ x ≤ 1.

The Crank-Nicolson scheme is a popular scheme for solving the heat equation because it is accurate of order 2 both in the timestep, dt := k, and in the spatial discretization interval, dx := h. It is defined by the finite difference equations

    (u^{n+1}_m − u^n_m)/k = (u^{n+1}_{m+1} − 2u^{n+1}_m + u^{n+1}_{m−1})/(2h²) + (u^n_{m+1} − 2u^n_m + u^n_{m−1})/(2h²),
        n = 0, …, N−1,   m = 1, …, M−1,

where we have introduced the discretization

    u^n_m := u(x_m, t_n),   x_m = −1 + h m, m = 0, …, M;   t_n = k n, n = 0, …, N,

with h = 2/M and k = T/N. The reason for the increase in accuracy is that the above expressions give the derivatives of the function u(x, t) with respect to x and t at the point x_m and the instant t = (t_n + t_{n+1})/2, so we can think of the time derivative formula as a centered (rather than a forward) difference. You must demonstrate the accuracy of the scheme by carrying out computations for different timesteps k, and comparing your results with the exact solution given by the expression

    u_exact(x, t) = e^{−π²r²t} sin(πrx).

Specifically:

(a) Write the system of equations in the form

    (I − δD) u^{n+1} = (I + δD) u^n,   n = 0, …, N−1,

where δ = k/(2h²), D is the tridiagonal second-difference matrix (−2 on the diagonal, 1 on the off-diagonals), and u^n = (u^n_1, …, u^n_{M−1})^T (with u^n_0 = u^n_M = 0).

(b) Start the computation off by setting

    u^0_m = sin(πr x_m) = sin(πr(−1 + h m)),   m = 0, …, M.

(c) Perform the LU factorization of the left-hand matrix, LU = D_− := (I − δD). You must define D_− (and D_+ := I + δD) using the command spdiags to take advantage of its sparsity when solving this system.

(d) Iterate until the final time t = T is reached.

(e) Run with T = 1 and r = 1. Use M = 20, 40, 80, 160 and N = 10, 20, 40, 80.

(f) Compute the relative error

    e(M, N) = Σ_{m=0}^{M} |u^N_m − u_exact(x_m, T)|  /  Σ_{m=0}^{M} |u_exact(x_m, T)|

for each pair (M, N) and show its value in a table.

(g) Determine which pairs work best together; that is, determine, for each value of M (and grid spacing h), which value of N (and timestep k) gives a time-discretization error that is of the same order as the space-discretization error. That is, find the value of N beyond which there is no appreciable improvement in accuracy. For these optimal (M, N) pairs, show that the error decreases like either k² or like h².

(h) For M = 160 and the corresponding optimal value for N, solve the heat equation and plot the solutions for r = 1, 2, 3; for each value of r show on the same plot solutions for 5 equally spaced time instants (your plots must extend over the entire computation interval, including the endpoints). How do the solutions behave as r increases?

Solutions:

In matrix form, the equations to solve at every time step are

    [ 1+2δ   −δ                 ]             [ 1−2δ    δ                 ]
    [  −δ   1+2δ   −δ           ]             [   δ   1−2δ    δ           ]
    [        ⋱      ⋱      ⋱   ]  u^{n+1} =  [         ⋱      ⋱      ⋱  ]  u^n.
    [               −δ   1+2δ   ]             [                δ   1−2δ   ]

The code for the entire simulation is

function err=heateq(M,N,T,r)
% Solve the heat equation using 2nd-order finite differences for the
% spatial discretization and Crank-Nicolson for the time stepping
M=M+1;                 % # of grid points = # of intervals + 1
x=linspace(-1,1,M)';   % Set up grid points
h=x(2)-x(1);           % Grid spacing
k=T/N;                 % Time step size
del=k/(2*h^2);         % Crank-Nicolson parameter

in=2:M-1;              % Indices of interior points
m=M-1;
u=sin(pi*r*x);         % Get initial condition
e=ones(m-1,1);         % Column vector of all ones (one per interior point)
% Left-hand side matrix
A=spdiags([-del*e, (1+2*del)*e, -del*e], -1:1, m-1, m-1);
[L,U]=trilu(A);        % Do LU factorization
coeff=1-2*del;         % Lump diagonal coefficient for RHS
for n=1:N              % Time stepping loop
    f=coeff*u+del*([0;u(1:M-1)]+[u(2:M);0]);   % RHS
    u(in)=U\(L\f(in));
end
exact=exp(-(pi*r)^2*T)*sin(pi*r*x);
err=norm(u-exact,1)/norm(exact,1);

function [L,U] = trilu(A)
% LU factorization of a tridiagonal matrix, returned as sparse factors
n=length(A);
am=diag(A,-1);         % Subdiagonal
ap=diag(A,1);          % Superdiagonal
a=diag(A);             % Main diagonal
for k = 1:(n-1)
    am(k)=am(k)/a(k);              % Multiplier
    a(k+1)=a(k+1)-am(k)*ap(k);     % Update the next pivot
end
L=speye(n)+spdiags([am;0],-1,n,n);
U=spdiags([a [0;ap]],0:1,n,n);

The table of results for the specified M, N pairs is

    [Table: relative errors e(M, N) for each of the listed (M, N) pairs.]
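The table can be generated by calling heateq over all of the listed (M, N) pairs; a minimal sketch, using the parameter lists assumed in part (e) above:

% Build the e(M,N) table by looping the solver over all pairs (T = 1, r = 1).
Mlist = [20 40 80 160]; Nlist = [10 20 40 80];   % assumed values
E = zeros(length(Mlist), length(Nlist));
for i = 1:length(Mlist)
    for j = 1:length(Nlist)
        E(i,j) = heateq(Mlist(i), Nlist(j), 1, 1);
    end
end
disp(E)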

If we look at the error over a wider range of interval sizes and time steps, the trend is more obvious.

    [Figure: log10 of the error e(M, N) plotted against log10(M) and log10(N).]

The quadratic convergence in the spatial discretization can be observed by fixing one of either the time step or the grid spacing at a sufficiently small value and varying the other. In the plot on the left, the number of subintervals was fixed at a large value and the time step was varied; in the plot on the right, the number of time steps was fixed at a large value and the mesh size was varied.

    [Figure: two log-log error plots showing second-order convergence in k (left) and in h (right).]
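The observed convergence orders can be made quantitative by fitting the slope of the log-log error curves; a minimal sketch, assuming heateq above and a representative value (1000) for the parameter that is held fixed:

% Temporal order: fix a fine grid and vary the time step (k = T/N).
Nlist = [10 20 40 80 160];
errs  = arrayfun(@(N) heateq(1000, N, 1, 1), Nlist);   % 1000 subintervals assumed
p = polyfit(log(1./Nlist), log(errs), 1);
fprintf('observed temporal order ~ %.2f (expect 2)\n', p(1));

% Spatial order: fix a small time step and vary the grid (h = 2/M).
Mlist = [20 40 80 160];
errs  = arrayfun(@(M) heateq(M, 1000, 1, 1), Mlist);   % 1000 time steps assumed
p = polyfit(log(2./Mlist), log(errs), 1);
fprintf('observed spatial order ~ %.2f (expect 2)\n', p(1));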

For a fixed number of intervals equal to 160, the optimal number of time steps is approximately 250.

    [Figure: computed solutions for r = 1, 2, 3 (one column per value of r) at five equally spaced time instants (one row per instant).]

Looking at the computed solution at various times and for the different choices of the initial-condition parameter r, we can see that greater values of r result in more oscillations over the interval and a greater decay rate. Both of these lead to requiring additional spatial and temporal resolution to keep the error within a specified tolerance.
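The snapshot plots for part (h) require the solution at intermediate times, which heateq as written does not return. A self-contained sketch of one way to produce them is given below; it repeats the Crank-Nicolson stepping from heateq, stores the solution at five equally spaced times, and assumes the values M = 160, N = 250, T = 1 used above.

% Crank-Nicolson snapshots for r = 1, 2, 3 (illustrative sketch only).
M = 160; N = 250; T = 1;                  % assumed parameters
x = linspace(-1,1,M+1)'; h = x(2)-x(1); k = T/N; del = k/(2*h^2);
snap = round((1:5)*N/5);                  % 5 equally spaced snapshot steps (plus t = 0)
e = ones(M-1,1);
A = spdiags([-del*e, (1+2*del)*e, -del*e], -1:1, M-1, M-1);   % I - del*D
B = spdiags([ del*e, (1-2*del)*e,  del*e], -1:1, M-1, M-1);   % I + del*D
[L,U] = lu(A);                            % sparse LU of the left-hand matrix
for r = 1:3
    u = sin(pi*r*x); Usnap = u;           % initial condition, stored at t = 0
    for n = 1:N
        u(2:M) = U\(L\(B*u(2:M)));        % advance the interior points one step
        if any(n == snap), Usnap = [Usnap u]; end
    end
    subplot(1,3,r); plot(x,Usnap); grid on
    title(sprintf('r = %d', r)); xlabel('x'); ylabel('u(x,t)')
end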

4. (3 pts) The equilibrium temperature distribution in a slab of reacting material can, under some assumptions, be modeled by the nonlinear boundary value problem

    d²u/dx² + λ² e^{2u} = 0,   x ∈ (−1, 1),

with boundary conditions u(−1) = u(1) = 0. The parameter λ is sometimes called the Damköhler number. For this problem take λ = 1/cosh(1).

i. Verify that

    u(x) = −ln(λ cosh x)

solves the problem.

ii. Introducing a uniform grid, x_j = −1 + jh, j = 0, …, n, h = 2/n, approximate u(x_j) by U_j and approximate d²u/dx² by the second-order central difference formula. This leads to the system of n − 1 nonlinear equations

    F_j(U) ≡ (U_{j−1} − 2U_j + U_{j+1})/h² + λ² e^{2U_j} = 0,   j = 1, …, n − 1,

where we set U_0 = U_n = 0. The Jacobian derivative of a collection of n − 1 functions F_j of n − 1 variables U is the (n − 1) × (n − 1) matrix J(U) whose jk-th entry is the derivative of F_j with respect to U_k,

    J_{jk} = ∂F_j / ∂U_k.

(We denote the (n − 1)-vectors whose entries are F_j and U_k by F and U.) Show that the Jacobian derivative of the function given above is a tridiagonal matrix whose entries are:

    J_{j,j+1} = J_{j+1,j} = 1/h²,   J_{jj} = −2/h² + 2λ² e^{2U_j}.

iii. Newton's method for solving a system of n − 1 equations in n − 1 unknowns is the direct generalization of Newton's method for a single equation which we studied in Chapter 2. Precisely, given an initial approximation U^{(0)}, we generate subsequent approximations by solving

    J(U^{(k)}) d^{(k)} = −F(U^{(k)}),

and setting

    U^{(k+1)} = U^{(k)} + d^{(k)}.

Write a script implementing Newton's method for the difference approximation described above. Terminate the iteration when ||d^{(k)}|| ≤ 1e-8. Use U^{(0)} = 0. Be sure to treat J as a sparse matrix when solving. Record the number of iterations required and print out ||d^{(k)}|| for each k. Does the convergence appear to be quadratic? Carry out the computations for n = 10, 20, and 40. Compute the maximum error, that is, the maximum absolute difference between U_j and u(x_j), in each case. Do you observe second-order convergence with decreasing h? Plot the solutions in each case.

Solution (i): With u(x) = −ln(λ cosh x),

    du/dx = −tanh(x),   d²u/dx² = tanh²(x) − 1,

and

    exp(2u) = exp(−2 ln[λ cosh(x)]) = exp(ln[1/(λ² cosh²(x))]) = 1/(λ² cosh²(x)) = (1/λ²) sech²(x),

so that

    λ² exp(2u) = sech²(x) = 1 − tanh²(x) = −d²u/dx²,

and hence d²u/dx² + λ² e^{2u} = 0. The boundary conditions u(±1) = −ln(λ cosh 1) = 0 hold because λ = 1/cosh(1).

Solution (ii): Given

    F_j(U) ≡ (U_{j−1} − 2U_j + U_{j+1})/h² + λ² e^{2U_j} = 0,   j = 1, …, n − 1,

we obtain the Jacobian by differentiating with respect to each of the elements of the vector U. Since ∂U_j/∂U_k = δ_{jk},

    J_{jk} ≡ ∂F_j/∂U_k = ∂/∂U_k [ (U_{j−1} − 2U_j + U_{j+1})/h² + λ² e^{2U_j} ]
          = (δ_{j−1,k} − 2δ_{jk} + δ_{j+1,k})/h² + 2λ² e^{2U_j} δ_{jk}.

Clearly, nonzero entries can only occur in the range j − 1 ≤ k ≤ j + 1. Evaluating the Kronecker deltas gives the desired result:

    J_{j,j+1} = J_{j+1,j} = 1/h²,   J_{jj} = −2/h² + 2λ² e^{2U_j}.

Solution (iii): The script for this problem is

function [iter,nd,err]=nlbvp(n,tol)
np=n+1; nm=n-1; in=2:n;
x=linspace(-1,1,np)';              % Grid points (including the boundaries)
h=x(2)-x(1); h2=h*h;
lambda=sech(1); lambda2=lambda*lambda;
e=ones(nm,1);
D2bands=[e -2*e e]/h2;             % Bands of the second-difference matrix
D2=spdiags(D2bands, -1:1, nm, nm);
exact=-log(lambda*cosh(x));        % Exact solution
uin=0*e;                           % Interior points (initial value)
d=e; nd=norm(d); iter=0;

while nd>tol
    nonlin=lambda2*exp(2*uin);          % Nonlinear term
    F=trimult(D2bands,uin)+nonlin;      % Residual F(U)
    J=D2+2*spdiags(nonlin,0,nm,nm);     % Jacobian
    d=J\(-F);                           % Newton correction
    nd=norm(d);
    uin=uin+d;                          % Update solution
    iter=iter+1;
end
u=[0;uin;0];
err=norm(exact-u,inf);

function Ax=trimult(A,x)
% Multiply a tridiagonal matrix, stored by bands [sub diag super], by x
N=length(x);
k=2:N;
Ax=A(:,2).*x+[0;A(k,1).*x(k-1)]+[A(k-1,3).*x(k);0];

For n = 8 we obtain the norm of the change d^(k) at each iteration:

    [Table: k versus ||d^(k)||; the values decrease rapidly, consistent with quadratic convergence of the Newton iteration.]

The convergence with respect to grid refinement also appears to be quadratic:

    [Table: n, number of iterations, final ||d||, and maximum error for each grid size.]
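A short driver reproduces both the iteration counts and the grid-refinement errors; a minimal sketch, assuming the nlbvp function above and the grid sizes n = 10, 20, and 40 from the problem statement:

% Run Newton's method on successively finer grids; the maximum error should
% drop by roughly a factor of 4 each time n is doubled (second order in h).
for n = [10 20 40]                        % assumed grid sizes
    [iter,nd,err] = nlbvp(n,1e-8);
    fprintf('n=%3d  iterations=%d  ||d||=%9.2e  max error=%9.2e\n', ...
            n, iter, nd, err);
end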

    [Figure: exact versus computed solutions u(x) on −1 ≤ x ≤ 1, one panel for each grid size n.]
