Numerical Analysis using Maple and Matlab


Numerical Analysis using Maple and Matlab
Dr. Seongjai Kim
Professor of Mathematics
Department of Mathematics and Statistics
Mississippi State University, Mississippi State, MS

Contents

MA-4313/6313: Numerical Analysis I
  Ch.1: Mathematical Preliminaries
  Ch.2: Solutions of Equations in One Variable
  Ch.3: Interpolation and Polynomial Approximation
  Ch.4: Numerical Differentiation and Integration
  Ch.5: Initial-Value Problems for Ordinary Differential Equations
  Ch.6: Direct Methods for Solving Linear Systems

MA-4323/6323: Numerical Analysis II
  Ch.7: Iterative Algebraic Solvers
  Ch.8: Approximation Theory
  Ch.9: Approximating Eigenvalues
  Ch.10: Numerical Solution of Nonlinear Systems of Equations
  Ch.11: Boundary-Value Problems of One Variable
  Ch.12: Numerical Solutions to Partial Differential Equations

1. Mathematical Preliminaries

In This Chapter:
  Topics: Review of Calculus (Taylor's theorem); Computer Arithmetic; Convergence; Review of Linear Algebra; Software
  Applications/Properties: Continuity & differentiability, Intermediate Value Theorem, Mean Value Theorem, order/rate of convergence; systems of linear equations, elementary row operations, tridiagonal systems, diagonally dominant matrices; vectors and matrices, norms, determinants, eigenvalues and eigenvectors, matrix inversion, LU factorization; Maple and Matlab

1.1. Review of Calculus

Continuity

Definition: A function $f$ is continuous at $x_0$ if $\lim_{x\to x_0} f(x) = f(x_0)$; in other words, if for every $\varepsilon > 0$ there exists a $\delta > 0$ such that $|f(x) - f(x_0)| < \varepsilon$ for all $x$ such that $|x - x_0| < \delta$.

Examples and Discontinuities: (examples omitted in this transcription)

Definition: Let $\{x_n\}_{n=1}^{\infty}$ be an infinite sequence of real numbers. This sequence has the limit $x$ (converges to $x$) if for every $\varepsilon > 0$ there exists a positive integer $N$ such that $|x_n - x| < \varepsilon$ whenever $n > N$. The notation $\lim_{n\to\infty} x_n = x$, or $x_n \to x$ as $n \to \infty$, means that the sequence converges to $x$.

Theorem: If $f$ is a function defined on a set $X$ of real numbers and $x_0 \in X$, the following are equivalent:
  (a) $f$ is continuous at $x_0$;
  (b) if $\{x_n\}$ is any sequence in $X$ converging to $x_0$, then $\lim_{n\to\infty} f(x_n) = f(x_0)$.

Differentiability

Definition: Let $f$ be a function defined on an open interval containing $x_0$. The function $f$ is differentiable at $x_0$ if
  $f'(x_0) = \lim_{x\to x_0} \frac{f(x) - f(x_0)}{x - x_0}$
exists. The number $f'(x_0)$ is called the derivative of $f$ at $x_0$.

Important theorems for continuous/differentiable functions

Theorem: If the function $f$ is differentiable at $x_0$, then $f$ is continuous at $x_0$.

Note: The converse is not true; the standard counterexample is $f(x) = |x|$ at $x_0 = 0$.

Intermediate Value Theorem (IVT): Suppose $f \in C[a,b]$ and $K$ is a number between $f(a)$ and $f(b)$. Then, there exists a number $c \in (a,b)$ for which $f(c) = K$.

Example: Show that the given equation has a solution in the given interval.
Solution: Define $f$ accordingly. Then $f$ is continuous on the interval, and the values at the two endpoints have opposite signs. Thus the IVT implies that there is a number $c$ such that $f(c) = 0$.

Rolle's Theorem: Suppose $f \in C[a,b]$ and $f$ is differentiable on $(a,b)$. If $f(a) = f(b)$, then there exists a number $c \in (a,b)$ such that $f'(c) = 0$.

Example: (omitted in this transcription)

Mean Value Theorem (MVT): Suppose $f \in C[a,b]$ and $f$ is differentiable on $(a,b)$. Then, there exists a number $c \in (a,b)$ such that
  $f'(c) = \frac{f(b) - f(a)}{b - a}$,
which can be equivalently written as
  $f(b) = f(a) + f'(c)\,(b - a)$.

Example: Let $f$ be defined on the given interval. Find the point $c$ which assigns the average slope. Solution, using Maple: (Maple computations omitted in this transcription.)

[Figure: $f(x)$, the secant line $L(x)$, and the point realizing the average slope.]

Extreme Value Theorem: If $f \in C[a,b]$, then $c_1, c_2 \in [a,b]$ exist with $f(c_1) \le f(x) \le f(c_2)$ for all $x \in [a,b]$. In addition, if $f$ is differentiable on $(a,b)$, then the numbers $c_1$ and $c_2$ occur either at the endpoints of $[a,b]$ or where $f'$ is zero.

Example: Find the absolute minimum and absolute maximum values of the given function on the given interval. Solution: (Maple computations omitted in this transcription.)

Now, find the derivative of $f$ and locate its zeros in the interval; evaluating $f$ at these critical points and at the endpoints gives the absolute extrema. [Maple output and the figure of $f(x)$ and $f'(x)$ are omitted in this transcription.]

The following theorem can be derived by applying Rolle's Theorem successively to $f, f', \ldots$, and finally to $f^{(n-1)}$.

Generalized Rolle's Theorem: Suppose $f$ is $n$ times differentiable on $(a,b)$. If $f$ vanishes at the $n+1$ distinct points $a \le x_0 < x_1 < \cdots < x_n \le b$, then there exists a number $c \in (a,b)$ such that $f^{(n)}(c) = 0$.

Integration

Definition: The Riemann integral of a function $f$ on the interval $[a,b]$ is the following limit, provided it exists:
  $\int_a^b f(x)\,dx = \lim_{\max \Delta x_i \to 0} \sum_{i=1}^{n} f(x_i^*)\,\Delta x_i$,
where $a = x_0 < x_1 < \cdots < x_n = b$, with $\Delta x_i = x_i - x_{i-1}$, and $x_i^*$ arbitrarily chosen in the subinterval $[x_{i-1}, x_i]$.

Continuous functions are Riemann integrable, which allows us to choose, for computational convenience, the points $x_i$ to be equally spaced in $[a,b]$ and choose $x_i^* = x_i$, where $x_i = a + i\,h$ and $h = (b-a)/n$. In this case,
  $\int_a^b f(x)\,dx = \lim_{n\to\infty} \frac{b-a}{n} \sum_{i=1}^{n} f(x_i)$.

Fundamental Theorem of Calculus: Let $f$ be continuous on $[a,b]$. Then,
  Part I: $\frac{d}{dx} \int_a^x f(t)\,dt = f(x)$.
  Part II: $\int_a^b f(x)\,dx = F(b) - F(a)$, where $F$ is any antiderivative of $f$, i.e., a function such that $F' = f$.

Weighted Mean Value Theorem for Integrals: Suppose $f \in C[a,b]$, the Riemann integral of $g$ exists on $[a,b]$, and $g$ does not change sign on $[a,b]$. Then, there exists a number $c \in (a,b)$ such that
  $\int_a^b f(x)\,g(x)\,dx = f(c) \int_a^b g(x)\,dx$.
When $g(x) \equiv 1$, it becomes the usual Mean Value Theorem for Integrals, which gives the average value of $f$ over the interval $[a,b]$:
  $f(c) = \frac{1}{b-a} \int_a^b f(x)\,dx$.

Taylor's Theorem

Taylor's Theorem with Lagrange Remainder: Suppose $f \in C^n[a,b]$, $f^{(n+1)}$ exists on $(a,b)$, and $x_0 \in [a,b]$. Then, for every $x \in [a,b]$,
  $f(x) = p_n(x) + R_n(x)$,
where
  $p_n(x) = \sum_{k=0}^{n} \frac{f^{(k)}(x_0)}{k!}\,(x - x_0)^k$
and, for some $\xi$ between $x$ and $x_0$,
  $R_n(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!}\,(x - x_0)^{n+1}$.
Note that $p_n$ is a polynomial of degree $\le n$.

Example: Let $f$ and $x_0$ be given. Determine the second and third Taylor polynomials for $f$ about $x_0$. Solution: (Maple computations omitted in this transcription.)

(Computations omitted.) On the other hand, you can find the Taylor polynomials easily with Maple. [Figure: $f(x)$ and the Taylor polynomial $p_3(x)$.]

Frequently used Taylor series (about $x_0 = 0$; the original list is garbled in this transcription, but the standard entries are):
  $e^x = \sum_{k=0}^{\infty} \frac{x^k}{k!} = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots$
  $\sin x = \sum_{k=0}^{\infty} \frac{(-1)^k x^{2k+1}}{(2k+1)!} = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots$
  $\cos x = \sum_{k=0}^{\infty} \frac{(-1)^k x^{2k}}{(2k)!} = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \cdots$

Note: When $n = 0$, $x = b$, and $x_0 = a$, Taylor's Theorem reads
  $f(b) = f(a) + f'(\xi)\,(b - a)$,
which is the Mean Value Theorem.

Taylor's Theorem with Integral Remainder: Suppose $f \in C^{n+1}[a,b]$ and $x_0 \in [a,b]$. Then, for every $x \in [a,b]$,
  $f(x) = \sum_{k=0}^{n} \frac{f^{(k)}(x_0)}{k!}\,(x - x_0)^k + R_n(x)$,
where
  $R_n(x) = \frac{1}{n!} \int_{x_0}^{x} (x - t)^n\, f^{(n+1)}(t)\,dt$.

Alternative Form of Taylor's Theorem: Suppose $f \in C^n[a,b]$ and $f^{(n+1)}$ exists on $(a,b)$. Then, for every $x$ and $x + h$ in $[a,b]$,
  $f(x + h) = \sum_{k=0}^{n} \frac{f^{(k)}(x)}{k!}\,h^k + R_n(h)$,
where, for some $\xi$ between $x$ and $x + h$,
  $R_n(h) = \frac{f^{(n+1)}(\xi)}{(n+1)!}\,h^{n+1}$.
In detail,
  $f(x+h) = f(x) + f'(x)\,h + \frac{f''(x)}{2!}\,h^2 + \cdots + \frac{f^{(n)}(x)}{n!}\,h^n + R_n(h)$.

Example: Determine Taylor's formula for the given function and use it to approximate the requested value. Solution: (Maple computations omitted in this transcription.)

Taylor's Theorem for Two Variables: Let $f(x, y)$ be sufficiently smooth. If $(x, y)$ and $(x + h, y + k)$ are points in the domain, then
  $f(x+h, y+k) = \sum_{i=0}^{n} \frac{1}{i!} \left( h \frac{\partial}{\partial x} + k \frac{\partial}{\partial y} \right)^{i} f(x, y) + R_n$,
where the remainder $R_n$ is evaluated at $(x + \theta h,\, y + \theta k)$, in which $\theta$ lies between 0 and 1.

For $n = 1$, Taylor's theorem for two variables reads
  $f(x+h, y+k) \approx f(x, y) + h\,f_x(x, y) + k\,f_y(x, y)$,   (1)
where $f_x$ and $f_y$ denote the first partial derivatives. Equation (1), as a linear approximation or tangent plane approximation, will be used for various applications.

Example: Find the tangent plane approximation of the given function at the given point.
Solution: (Maple computations of the partial derivatives omitted in this transcription.) Thus the tangent plane approximation at the point follows from equation (1).


1.2. Computer Arithmetic and Convergence

Errors in Machine Numbers and Computational Results:
  Numbers are saved with an approximation, by either rounding or chopping.
    integer: stored in 4 bytes (32 bits)
    float: stored in 4 bytes (32 bits)
    double: stored in 8 bytes (64 bits)
  Computations can be carried out only for finite sizes of data points.
Example: (Maple computations omitted in this transcription.)

Definition: Suppose that $p^*$ is an approximation to $p$. The absolute error is $|p - p^*|$, and the relative error is $|p - p^*| / |p|$, provided that $p \ne 0$.

Definition: The number $p^*$ is said to approximate $p$ to $t$ significant digits (or figures) if $t$ is the largest nonnegative integer for which
  $\frac{|p - p^*|}{|p|} \le 5 \times 10^{-t}$.
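The two error definitions above are easy to compute directly. The notes work in Maple; the sketch below uses Python for brevity, and the helper names (`abs_rel_error`, `significant_digits`) are ours, not from the notes.

```python
import math

def abs_rel_error(p, p_star):
    """Absolute error |p - p*| and relative error |p - p*| / |p|."""
    abs_err = abs(p - p_star)
    return abs_err, abs_err / abs(p)

def significant_digits(p, p_star):
    """Largest nonnegative t with |p - p*| / |p| <= 5 * 10^(-t)."""
    rel = abs(p - p_star) / abs(p)
    t = 0
    while rel <= 5 * 10.0 ** (-(t + 1)):
        t += 1
    return t

# classic illustration: 22/7 as an approximation to pi
a, r = abs_rel_error(math.pi, 22 / 7)
t = significant_digits(math.pi, 22 / 7)   # 22/7 approximates pi to 4 significant digits
```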

Computational Algorithms

Definition: An algorithm is a procedure that describes, in an unambiguous manner, a finite sequence of steps to be carried out in a specific order. Algorithms consist of various steps for inputs, outputs, and functional operations, which can be described effectively by a so-called pseudocode.

Definition: An algorithm is called stable if small changes in the initial data produce correspondingly small changes in the final results. Otherwise, it is called unstable. Some algorithms are stable only for certain choices of initial data/parameters; these are called conditionally stable.

Growth rates of the error: Suppose that $E_0 > 0$ denotes an error introduced at some stage in the computation and $E_n$ represents the magnitude of the error after $n$ subsequent operations.
  If $E_n \approx C\,n\,E_0$, where $C$ is a constant independent of $n$, then the growth of error is said to be linear, for which the algorithm is stable.
  If $E_n \approx C^n E_0$, for some $C > 1$, then the growth of error is exponential, which turns out to be unstable.

Rates (Orders) of Convergence

Let $\{x_n\}$ be a sequence of real numbers tending to a limit $x^*$.

Definition: The rate of convergence is at least linear if there are a constant $c < 1$ and an integer $N$ such that
  $|x_{n+1} - x^*| \le c\,|x_n - x^*|$, for all $n \ge N$.
We say that the rate of convergence is at least superlinear if there exist a sequence $\{\varepsilon_n\}$ tending to 0 and an integer $N$ such that
  $|x_{n+1} - x^*| \le \varepsilon_n\,|x_n - x^*|$, for all $n \ge N$.
The rate of convergence is at least quadratic if there exist a constant $C$ (not necessarily less than 1) and an integer $N$ such that
  $|x_{n+1} - x^*| \le C\,|x_n - x^*|^2$, for all $n \ge N$.
In general, we say that the rate of convergence is of order at least $\alpha$ if there exist a constant $C$ (not necessarily less than 1 for $\alpha > 1$) and an integer $N$ such that
  $|x_{n+1} - x^*| \le C\,|x_n - x^*|^{\alpha}$, for all $n \ge N$.

Example: Consider the sequence defined by the given recursion. (a) Find the limit of the sequence and (b) show that the convergence is quadratic.
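The example's recursion is lost in this transcription, so as a stand-in illustration of quadratic convergence, the sketch below (Python, names ours) runs the familiar iteration $x_{k+1} = (x_k + 2/x_k)/2$, which converges to $\sqrt{2}$; the error is roughly squared at each step, so the number of correct digits roughly doubles.

```python
import math

def heron(x0, n):
    """Iterate x_{k+1} = (x_k + 2/x_k)/2, converging to sqrt(2);
    returns the final iterate and the error after each step."""
    x, errs = x0, []
    for _ in range(n):
        x = 0.5 * (x + 2.0 / x)
        errs.append(abs(x - math.sqrt(2)))
    return x, errs

x, errs = heron(2.0, 5)
# errors: about 8.6e-2, 2.5e-3, 2.1e-6, ... (each roughly the square of the previous)
```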

Big $O$ and Little $o$ Notation

Definition: A sequence $\{x_n\}$ is said to be in $O(\alpha_n)$ (big Oh) of a sequence $\{\alpha_n\}$ if a positive number $C$ exists for which
  $|x_n| \le C\,|\alpha_n|$, for large $n$.
In this case, we say $x_n$ is in $O(\alpha_n)$, and denote $x_n \in O(\alpha_n)$ or $x_n = O(\alpha_n)$.

Definition: A sequence $\{x_n\}$ is said to be in $o(\alpha_n)$ (little oh) of $\{\alpha_n\}$ if there exists a sequence $\{\varepsilon_n\}$ tending to 0 such that, for large $n$,
  $|x_n| \le \varepsilon_n\,|\alpha_n|$ (or equivalently, $\lim_{n\to\infty} |x_n| / |\alpha_n| = 0$).
In this case, we say $x_n$ is in $o(\alpha_n)$, and denote $x_n \in o(\alpha_n)$ or $x_n = o(\alpha_n)$.

Example: Show that the given relations hold. (The specific sequences are lost in this transcription.)

Definition: Suppose $h \to 0$. A quantity $F(h)$ is said to be in $O(h^{\alpha})$ (big Oh of $h^{\alpha}$) if a positive number $C$ exists for which
  $|F(h)| \le C\,h^{\alpha}$, for sufficiently small $h$.
In this case, we say $F$ is in $O(h^{\alpha})$, and denote $F \in O(h^{\alpha})$. Little oh of $h^{\alpha}$ can be defined the same way as for sequences.

Example: (Maple computation omitted in this transcription; the expansion shows the quantity is in the claimed order for sufficiently small $h$.)

Example: Choose the correct assertions (in each, $n \to \infty$). (The list of assertions a.-e. is garbled in this transcription.)

Example: Determine the best integer value of $k$ such that the given quantity is $O(h^k)$ as $h \to 0$.

Self study: Let the sequence be given. What are the limit and the rate of convergence of the sequence as $n \to \infty$?

Self study: Show that these assertions are not true. (The assertions a.-c. are lost in this transcription.)

Example: Let the sequences $\{x_n\}$ and $\{\alpha_n\}$ be given, with $x_n \to x^*$. Show the claimed order relation as $n \to \infty$.
Hint: We must show that $|x_n - x^*| \le C\,|\alpha_n|$ for large $n$. For this, first bound the difference; since the remaining factor is bounded, the claimed relation follows.


1.3. Review of Linear Algebra

Vectors

A real $n$-dimensional vector $\mathbf{x}$ is an ordered set of $n$ real numbers and is usually written in the coordinate form $\mathbf{x} = (x_1, x_2, \ldots, x_n)$.

Definitions: Let $\mathbf{x}$ and $\mathbf{y}$ be $n$-dimensional vectors.
  Linear combination: $a\mathbf{x} + b\mathbf{y} = (a x_1 + b y_1, \ldots, a x_n + b y_n)$.
  Norm (length): $\|\mathbf{x}\| = \big(\sum_{i=1}^{n} x_i^2\big)^{1/2}$, which is often referred to as the Euclidean norm, or Euclidean 2-norm.
  Distance: $\|\mathbf{x} - \mathbf{y}\|$.
  Dot product: $\mathbf{x} \cdot \mathbf{y} = \sum_{i=1}^{n} x_i y_i$. Thus $\mathbf{x} \cdot \mathbf{x} = \|\mathbf{x}\|^2$. Let $\theta$ be the angle between the vectors $\mathbf{x}$ and $\mathbf{y}$. Then $\mathbf{x} \cdot \mathbf{y} = \|\mathbf{x}\|\,\|\mathbf{y}\| \cos\theta$.

Matrices and Two-dimensional Arrays

A matrix is a rectangular array of numbers that is arranged systematically in rows and columns. A matrix having $m$ rows and $n$ columns is called an $m \times n$ matrix, and is denoted as $A = [a_{ij}]$. When the capital letter $A$ represents a matrix, the lowercase subscripted letter $a_{ij}$ denotes the $(i,j)$-th entry of the matrix. When matrices $A$ and $B$ have the same dimensions, their linear combination can be defined as $\alpha A + \beta B = [\alpha\,a_{ij} + \beta\,b_{ij}]$.

Properties of Vectors and Matrices

Definition: If $A$ and $B$ are two matrices with the property that $A$ has as many columns as $B$ has rows, the matrix product $AB$ is defined to be the matrix $C = [c_{ij}]$, where $c_{ij}$ is given as the dot product of the $i$-th row of $A$ and the $j$-th column of $B$:
  $c_{ij} = \sum_{k} a_{ik}\, b_{kj}$.

Example: Find the matrix product of the given $A$ and $B$. For the above example, $BA$ may not be defined. (Why?)

If both $A$ and $B$ are square matrices of the same dimension, then $AB$ and $BA$ are defined, but they are in general not the same: $AB \ne BA$. When it happens that $AB = BA$, we say that $A$ and $B$ commute.

Example: Compute and compare $AB$ and $BA$. Solution: Using Maple (output omitted in this transcription), the two products are not the same, so the matrices do not commute.

Definition: determinant (det) of $A \in \mathbb{R}^{n \times n}$
  If $n = 1$, we define $\det(A) = a_{11}$. For $n \ge 2$, let $M_{ij}$ (the minor of $a_{ij}$) be the determinant of the $(n-1) \times (n-1)$ submatrix of $A$ obtained by deleting the $i$-th row and the $j$-th column of $A$. Define the cofactor of $a_{ij}$ as $A_{ij} = (-1)^{i+j} M_{ij}$. Then the determinant of $A$ is given by
  $\det(A) = \sum_{j=1}^{n} a_{ij} A_{ij}$  (the $i$-th row cofactor expansion)
or
  $\det(A) = \sum_{i=1}^{n} a_{ij} A_{ij}$  (the $j$-th column cofactor expansion).

Example: Using the first row cofactor expansion, (worked example omitted in this transcription).

Example: Let the $3 \times 3$ matrix be given. Find its determinant. Solution: Again using the first row cofactor expansion, the first minor evaluates to 39.
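The cofactor expansion above translates directly into a recursion. A Python sketch (the notes use Maple; the function name `det` and the sample matrix are ours), expanding along the first row exactly as in the definition:

```python
def det(A):
    """Determinant via first-row cofactor expansion (fine for small matrices;
    cost grows like n!, so this is for illustration, not production)."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # minor: delete row 1 and column j+1
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(minor)
    return total
```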

The remaining minors evaluate as shown (one of them is 29), and combining the cofactor terms gives $\det(A) = 77$. You may get it directly using a Maple command, which also returns 77.

Example: Find the determinants of the following matrices, if they exist. a. b. c. d.

Eigenvalues and Eigenvectors

Definition: An eigenvector of an $n \times n$ matrix $A$ is a nonzero vector $\mathbf{x}$ such that $A\mathbf{x} = \lambda\mathbf{x}$ for some scalar $\lambda$, the eigenvalue of $A$ corresponding to $\mathbf{x}$. Hence, an eigenvector is a nontrivial solution of $(A - \lambda I)\mathbf{x} = \mathbf{0}$, which implies that $A - \lambda I$ is singular. The eigenvalues of $A$ are the solutions of $\det(A - \lambda I) = 0$.

Example: Find the eigenvalues and eigenvectors of the given matrix. (Maple computations omitted in this transcription.)

Invertible (Nonsingular) Matrices

Let $A \in \mathbb{R}^{n \times n}$.

Definition: The matrix $A$ is invertible if there is an $n \times n$ matrix $B$ such that $AB = BA = I$. The matrix $B$ is called an inverse of $A$, and is denoted as $A^{-1}$.

Definition: The transpose of $A = [a_{ij}]$ is $A^T = [a_{ji}]$. The matrix $A$ is symmetric if $A^T = A$.

Theorem: A square matrix can possess at most one right inverse.

Theorem: Let $A$ and $B$ be invertible square matrices. Then $(AB)^{-1} = B^{-1} A^{-1}$.

Theorem: If $A$ and $B$ are square matrices such that $AB = I$, then $BA = I$.
Proof: Let $C = BA - I + B$. Then $AC = ABA - A + AB = A - A + I = I$, so $C$ is a right inverse of $A$, as is $B$. By the uniqueness of the right inverse, we can conclude $C = B$, which implies $BA = I$.

Invertible (Nonsingular) Matrix Theorem: For $A \in \mathbb{R}^{n \times n}$, the following properties are equivalent:
1. The inverse of $A$ exists, i.e., $A$ is invertible.
2. The determinant of $A$ is nonzero.
3. The rows of $A$ form a basis for $\mathbb{R}^n$.
4. The columns of $A$ form a basis for $\mathbb{R}^n$.
5. As a map from $\mathbb{R}^n$ to $\mathbb{R}^n$, $A$ is injective (one-to-one).
6. As a map from $\mathbb{R}^n$ to $\mathbb{R}^n$, $A$ is surjective (onto).
7. The equation $A\mathbf{x} = \mathbf{0}$ implies $\mathbf{x} = \mathbf{0}$.
8. For each $\mathbf{b} \in \mathbb{R}^n$, there is exactly one $\mathbf{x} \in \mathbb{R}^n$ such that $A\mathbf{x} = \mathbf{b}$.
9. $A$ is a product of elementary matrices.
10. 0 is not an eigenvalue of $A$.

System of Linear Equations

Linear systems of $m$ equations in $n$ unknowns can be expressed as the algebraic system
  $A\mathbf{u} = \mathbf{b}$,
where $\mathbf{u}$ denotes the vector of unknowns, $\mathbf{b}$ represents the source terms (the right side), and $A$ is an $m \times n$ matrix of real values. The above algebraic system can be solved by elementary row operations applied to the augmented system $[A \mid \mathbf{b}]$.

Elementary Row Operations:
1. (Replacement) Replace one row by the sum of itself and a multiple of another row.
2. (Interchange) Interchange two rows.
3. (Scaling) Multiply all entries in a row by a nonzero constant.

Example: Solve the given system of equations. Solution: (Maple computations omitted in this transcription.)

The following row operations (Gauss Elimination) solve the problem:
  Forward elimination: (output omitted)
  Back substitution: (output omitted)
Or, you may utilize the function "ReducedRowEchelonForm".

Thus the solution is obtained.

Note: Every elementary row operation can be achieved by multiplying the augmented matrix on the left by an elementary matrix. For example, two replacement operations can be represented respectively by elementary matrices $E_1$ and $E_2$; then the upper-triangular system in the last example can be obtained as $E_2 E_1 [A \mid \mathbf{b}]$. The last result is called an echelon form, and its diagonal entries are called pivots.

Remarks:
a. The elementary matrices for replacement row operations commute. (Output omitted.)
b. Their product is made by collecting the multiplier entries below the main diagonal.
c. Their inverse can be obtained by negating the entries below the main diagonal. (Output omitted.)

Self study: Find the elementary matrices for the given row operations: a. b. c. d.

Example: Find the parabola that passes through (1,2), (2,4), and (3,8). Use Gauss Elimination to solve the resulting algebraic system for the unknown coefficients.
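The parabola example can be checked by hand or in code: writing $y = c_0 + c_1 x + c_2 x^2$ and substituting the three points gives a 3x3 linear system. A Python sketch of Gauss elimination with back substitution (the notes use Maple; the function name `gauss_solve` and the coefficient labels are ours):

```python
def gauss_solve(A, b):
    """Gauss elimination with back substitution (no pivoting needed here)."""
    n = len(A)
    A = [row[:] + [bv] for row, bv in zip(A, b)]   # augmented matrix [A | b]
    for k in range(n - 1):                          # forward elimination
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]                   # multiplier (pivot A[k][k])
            for j in range(k, n + 1):
                A[i][j] -= m * A[k][j]
    x = [0.0] * n                                   # back substitution
    for i in range(n - 1, -1, -1):
        x[i] = (A[i][n] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

# y = c0 + c1*x + c2*x^2 through (1,2), (2,4), (3,8)
c = gauss_solve([[1.0, 1.0, 1.0], [1.0, 2.0, 4.0], [1.0, 3.0, 9.0]], [2.0, 4.0, 8.0])
# -> [2.0, -1.0, 1.0], i.e. y = 2 - x + x^2
```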

System of Tridiagonal Matrices

A tridiagonal matrix can be saved in a compact array holding only its nonzero diagonals, and row operations can be applied correspondingly.

Example: Use Gauss Elimination to solve the tridiagonal system of 5 equations, $A\mathbf{u} = \mathbf{b}$, of which the coefficient matrix is given in an array for its nonzero entries.

Solution: (The underlined numbers are pivots; the worked elimination tables are lost in this transcription. The last column of the final array reads the unknown $\mathbf{u}$.)
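Gauss elimination specialized to a tridiagonal system touches only the three stored diagonals; this is commonly called the Thomas algorithm. A Python sketch (names ours; `a`, `d`, `c` are the sub-, main, and super-diagonals, which is one way to lay out the compact array the notes describe):

```python
def thomas(a, d, c, b):
    """Solve a tridiagonal system.
    a: subdiagonal (length n-1), d: diagonal (length n),
    c: superdiagonal (length n-1), b: right-hand side (length n)."""
    n = len(d)
    d, b = d[:], b[:]               # work on copies
    for i in range(1, n):           # forward elimination
        m = a[i - 1] / d[i - 1]
        d[i] -= m * c[i - 1]
        b[i] -= m * b[i - 1]
    x = [0.0] * n                   # back substitution
    x[-1] = b[-1] / d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (b[i] - c[i] * x[i + 1]) / d[i]
    return x

# example: diagonal 2, off-diagonals -1, rhs [1, 0, 1] -> solution [1, 1, 1]
x = thomas([-1.0, -1.0], [2.0, 2.0, 2.0], [-1.0, -1.0], [1.0, 0.0, 1.0])
```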

LU Factorization (Triangular Factorization)

Definition: A nonsingular matrix $A$ has an LU factorization if it can be expressed as the product of a lower-triangular matrix $L$ and an upper-triangular matrix $U$: $A = LU$. The condition that $A$ is nonsingular implies that all diagonal entries of $U$ are nonzero.

With the LU factorization, $A\mathbf{x} = \mathbf{b}$ is written as $LU\mathbf{x} = \mathbf{b}$. Thus, the system can be solved by a two-step procedure:
  $L\mathbf{y} = \mathbf{b}$ (forward substitution),
  $U\mathbf{x} = \mathbf{y}$ (back substitution).

When the matrix $A$ is LU-factorizable, the factorization can be achieved by a repetitive application of replacement row operations, as in the procedure for an echelon form. Here the difference is to place inverses of the elementary matrices on the left. Let $E_1, E_2, \ldots, E_p$ be elementary matrices (for replacement row operations) such that
  $E_p \cdots E_2 E_1 A = U$
is an echelon form of $A$. Then
  $A = E_1^{-1} E_2^{-1} \cdots E_p^{-1} U = LU$,
where $L = E_1^{-1} E_2^{-1} \cdots E_p^{-1}$ is a lower-triangular matrix and $U$ is upper-triangular.
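The step-by-step procedure above, collecting the multipliers of the replacement row operations into $L$, is the Doolittle factorization. A Python sketch (the notes carry this out in Maple; the function name `lu` is ours), with no pivoting, as in the text:

```python
def lu(A):
    """Doolittle LU factorization without pivoting: A = L U,
    with L unit lower-triangular and U upper-triangular."""
    n = len(A)
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    U = [row[:] for row in A]
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]      # multiplier of the replacement operation
            L[i][k] = m                # inverses of elementary matrices collect in L
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    return L, U

L, U = lu([[4.0, 3.0], [6.0, 3.0]])
# -> L = [[1, 0], [1.5, 1]], U = [[4, 3], [0, -1.5]]
```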

Thus, the LU factorization can be carried out step-by-step.

Example: Find the LU factorization of the given matrix. Solution: (The intermediate elementary matrices and products are lost in this transcription.)

which completes the LU factorization. Using Maple's LUDecomposition: (output omitted) # the returned permutation matrix is the identity, implying that no pivoting is required. Here, $A = LU$. Alternatively, you may use another Maple command (lost in this transcription).

47 Example: Use replacement row operations to find the LU factorization. a. b. 47

Diagonally Dominant Matrices

Definition: A matrix $A = [a_{ij}]$ is diagonally dominant if
  $|a_{ii}| \ge \sum_{j \ne i} |a_{ij}|$, for all $i$.
The matrix is strictly diagonally dominant if the above inequalities are strict for all $i$.

Theorem on Preserving Strict Diagonal Dominance: Let $A$ be strictly diagonally dominant. Then, Gauss Elimination without pivoting preserves the strict diagonal dominance of the matrix.

Corollary: Thus, the work of finding pivots is not required for the LU factorization of a strictly diagonally dominant matrix, provided that the rows of the matrix are linearly independent (i.e., the matrix is invertible).

Corollary: Every strictly diagonally dominant matrix is nonsingular and has an LU factorization.

Example: Verify the preservation of diagonal dominance of the given matrix. Solution:
> for k from 1 by 1 to (n-1) do

    m := B[k+1, k] / B[k, k];
    B := RowOperation(B, [k+1, k], -m);
  end do;

50 (12.1) > All intermediate matrices are diagonally dominant! 50

Norms and Error Analysis

Definition: On a vector space $V$, a norm is a function $\|\cdot\|$ from $V$ to the set of nonnegative real numbers that obeys the following three postulates:
  1. $\|\mathbf{x}\| > 0$, if $\mathbf{x} \ne \mathbf{0}$;
  2. $\|\alpha\mathbf{x}\| = |\alpha|\,\|\mathbf{x}\|$, for any scalar $\alpha$;
  3. $\|\mathbf{x} + \mathbf{y}\| \le \|\mathbf{x}\| + \|\mathbf{y}\|$ (triangle inequality).

Examples of norms: Let $\mathbf{x} = (x_1, \ldots, x_n)$.
  Euclidean 2-norm: $\|\mathbf{x}\|_2 = \big(\sum_i x_i^2\big)^{1/2}$
  The 1-norm: $\|\mathbf{x}\|_1 = \sum_i |x_i|$
  The $\infty$-norm: $\|\mathbf{x}\|_\infty = \max_i |x_i|$

Definition: If a vector norm $\|\cdot\|$ has been specified, the matrix norm subordinate to (associated with) it is defined by
  $\|A\| = \sup\{\|A\mathbf{x}\| : \|\mathbf{x}\| = 1\}$.
It is equivalent to
  $\|A\| = \sup_{\mathbf{x} \ne \mathbf{0}} \frac{\|A\mathbf{x}\|}{\|\mathbf{x}\|}$.

Matrix norms:
  $\|A\|_\infty = \max_i \sum_j |a_{ij}|$ (maximum absolute row sum)
  $\|A\|_1 = \max_j \sum_i |a_{ij}|$ (maximum absolute column sum)
  $\|A\|_2 = \sqrt{\rho(A^T A)}$, where $\rho(B)$ denotes the spectral radius of $B$.

Definition: A condition number of a matrix $A$ is the number
  $\kappa(A) = \|A\|\,\|A^{-1}\|$.

Example: Let $A$ be given, for which $A^{-1}$ is known. Find $\|A\|_\infty$, $\|A\|_1$, and $\|A^{-1}\|_\infty$. Find the $\infty$-condition number.
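The $\infty$-norm and the condition number it induces are the easiest to compute by hand, and equally easy in code. A Python sketch (names ours; the 2x2 example matrix is ours as well, with its inverse supplied explicitly since the notes' example matrix is lost):

```python
def inf_norm_vec(x):
    """Vector infinity-norm: max absolute entry."""
    return max(abs(v) for v in x)

def inf_norm_mat(A):
    """Subordinate infinity-norm: maximum absolute row sum."""
    return max(sum(abs(v) for v in row) for row in A)

def cond_inf(A, Ainv):
    """Condition number kappa_inf(A) = ||A||_inf * ||A^-1||_inf (inverse supplied)."""
    return inf_norm_mat(A) * inf_norm_mat(Ainv)

A = [[1.0, 2.0], [3.0, 4.0]]
Ainv = [[-2.0, 1.0], [1.5, -0.5]]       # exact inverse of A
kappa = cond_inf(A, Ainv)               # 7 * 3 = 21
```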

Theorem on Neumann Series: If $A$ is an $n \times n$ matrix such that $\|A\| < 1$ for some subordinate matrix norm, then $I - A$ is invertible and
  $(I - A)^{-1} = \sum_{k=0}^{\infty} A^k$, with $\|(I - A)^{-1}\| \le \frac{1}{1 - \|A\|}$.


Homework 1. Review of Calculus, Convergence, and Linear Algebra

#1. Prove that the following equations have at least one solution in the given intervals. a. b. c. d.

#2. Let $f$ and $x_0$ be given.
a. Find the third Taylor polynomial $p_3$ about $x_0$, and use it to approximate the requested value.
b. Use Taylor's Theorem to find an upper bound for the error. Compare it with the actual error.
c. Find the fifth Taylor polynomial $p_5$ about $x_0$, and use it to approximate the requested value.
d. Use Taylor's Theorem to find an upper bound for the error. Compare it with the actual error.

#3. For each of the following, is it true that the stated relation holds as $n \to \infty$? a. b. c. d.

#4. Let a sequence $\{x_n\}$ be defined recursively by $x_{n+1} = f(x_n)$, where $f$ is continuously differentiable. Suppose that $x_n \to x^*$ as $n \to \infty$ and $f'(x^*) = 0$. Show that
  $x_{n+2} - x_{n+1} = o(x_{n+1} - x_n)$.
Hint: Begin with $x_{n+2} - x_{n+1} = f(x_{n+1}) - f(x_n)$, and use the Mean Value Theorem and the fact that $f$ is continuously differentiable, to show that the quotient $(x_{n+2} - x_{n+1})/(x_{n+1} - x_n)$ converges to zero.

#5. A square matrix $A$ is said to be skew-symmetric if $A^T = -A$. Prove that if $A$ is skew-symmetric, then $\mathbf{x}^T A \mathbf{x} = 0$ for all $\mathbf{x}$. (Hint: The quantity $\mathbf{x}^T A \mathbf{x}$ is a scalar, so $\mathbf{x}^T A \mathbf{x} = (\mathbf{x}^T A \mathbf{x})^T$.)

#6. Suppose that $A$ and $B$ are square matrices and that $AB$ is invertible. Show that each of $A$ and $B$ is invertible.

#7. Find the determinant and eigenvalues of the following matrices, if they exist. Compare the determinant with the product of the eigenvalues. a. b. c.

d.

#8. Find the LU factorization of the matrices: a. b.

#9. Show that the given norm satisfies the following three conditions:
  $\|\mathbf{x}\| > 0$, if $\mathbf{x} \ne \mathbf{0}$;
  $\|\alpha\mathbf{x}\| = |\alpha|\,\|\mathbf{x}\|$;
  $\|\mathbf{x} + \mathbf{y}\| \le \|\mathbf{x}\| + \|\mathbf{y}\|$ (triangle inequality).
Hint: For the last condition, you may begin with the triangle inequality for absolute values.

#10. Show that the given inequality holds for all $\mathbf{x}$.


2. Solutions of Equations in One Variable

Objective: For equations of the form $f(x) = 0$, find solutions, that is, real numbers $p$ such that $f(p) = 0$.

Note: Various nonlinear systems of equations and discretizations of nonlinear PDEs can be expressed by $F(\mathbf{x}) = \mathbf{0}$, which is equivalently written as $\mathbf{x} = G(\mathbf{x})$.

In This Chapter:
  Topics: Bisection method; Fixed-point iteration; Newton's method, Secant method, Method of false position; Zeros of polynomials, Horner's method, Bairstow's method
  Applications/Properties: Variants of Newton's method; Applications of Newton's method; Effective evaluation of polynomials; Quadratic factors

2.1. The Bisection (Binary-Search, Interval-Halving) Method

Assumptions: $f$ is continuous in $[a,b]$, with $f(a)\,f(b) < 0$. By the IVT, there must be a solution; assume there is a single solution in $(a,b)$.

Bisection: a pseudocode (the opening lines, which compute the initial midpoint $p_i = (a_i + b_i)/2$, are lost in this transcription)

  if( f(pi)==0 ) then
     p=pi, and stop;
  else
     if( f(ai)*f(pi)<0 ) then
        a(i+1)=ai; b(i+1)=pi;
     else
        a(i+1)=pi; b(i+1)=bi;
     end if
     p(i+1)=( a(i+1)+b(i+1) )/2;
  end if

Example: Find the solution of the equation $x^3 + 4x^2 - 10 = 0$ in the given interval.

Solution: Using Maple. [Figure: 4 iterations of the bisection method applied to $f(x) = x^3 + 4x^2 - 10$ with initial points $a = 1.25$ and $b = 1.5$.]

62 62 (3.2)

Bisection: Maple code (the listing and the output for iterations k = 1 and k = 2 are lost in this transcription; each output line prints k, a, b, p, and f(p))

(Output for k = 3 through k = 7 is lost in this transcription; the final midpoint $p_7$ is reported together with the bound $dp = (b_0 - a_0)/2^k$ and the residual $f(p)$.)

Error Analysis:

Theorem: Suppose that $f \in C[a,b]$ and $f(a)\,f(b) < 0$. Then, the Bisection method generates a sequence $\{p_n\}$ approximating a zero $p$ of $f$ with
  $|p_n - p| \le \frac{b - a}{2^n}$, for $n \ge 1$.

Proof. For every $n \ge 1$, $b_n - a_n = (b - a)/2^{n-1}$ and $p \in (a_n, b_n)$. It follows from $p_n = (a_n + b_n)/2$ that
  $|p_n - p| \le \frac{b_n - a_n}{2} = \frac{b - a}{2^n}$.

Example: Determine the number of iterations necessary to solve $f(x) = 0$ with accuracy $\varepsilon$ using the given $a$ and $b$.
Solution: We have to find the iteration count $n$ such that the error bound is not larger than $\varepsilon$. That is, $(b - a)/2^n \le \varepsilon$, which implies that $n \ge \log_2\!\big((b - a)/\varepsilon\big)$, and therefore the smallest such integer $n$ suffices.
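The iteration-count computation above is a one-liner. A Python sketch (the notes use Maple; the helper name is ours), illustrated on the interval $[1, 2]$ with tolerance $10^{-3}$:

```python
import math

def bisect_iterations(a, b, tol):
    """Smallest n with (b - a) / 2**n <= tol, from the bisection error bound."""
    return math.ceil(math.log2((b - a) / tol))

n = bisect_iterations(1.0, 2.0, 1e-3)   # 2^-n <= 10^-3 requires n = 10
```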

Note: $p_n$ is the midpoint of $[a_n, b_n]$ and $p_{n+1}$ is the midpoint of either $[a_n, p_n]$ or $[p_n, b_n]$. So,
  $|p_{n+1} - p_n| = \frac{b_n - a_n}{4}$.
In other words, since $p$ lies in $[a_{n+1}, b_{n+1}]$ and $p_{n+1}$ is its midpoint,
  $|p - p_{n+1}| \le \frac{b_{n+1} - a_{n+1}}{2} = \frac{b_n - a_n}{4} = |p_{n+1} - p_n|$,
which implies that the approximation carried out with the absolute difference of successive midpoints as the stopping criterion guarantees that the actual error is not greater than the given tolerance.

Example: Suppose that the bisection method begins with the given interval $[a, b]$, with $0 < a$. How many steps should be taken to compute a root with a relative error not larger than $\varepsilon$? Solution: The requirement $|p_n - p|/|p| \le \varepsilon$ is guaranteed when $(b - a)/2^n \le \varepsilon\,a$, i.e., when $n \ge \log_2\!\big((b - a)/(\varepsilon\,a)\big)$.

Bisection: MATLAB code. M-file: bisect.m

function [c,err,fc]=bisect(f,a,b,tol)
%Input  - f is the function, input as a string 'f' or a function handle
%       - a and b are the left and right endpoints
%       - tol is the tolerance
%Output - c is the zero
%       - fc = f(c)
%       - err is the error estimate for c
fa=feval(f,a); fb=feval(f,b);
if fa*fb > 0, return, end
max1=1+round((log(b-a)-log(tol))/log(2));
for k=1:max1
    c=(a+b)/2;
    fc=feval(f,c);
    if fc==0
        a=c; b=c;
    elseif fb*fc>0
        b=c; fb=fc;
    else
        a=c; fa=fc;
    end
    if b-a < tol, break, end
end
c=(a+b)/2;
err=abs(b-a);
fc=feval(f,c);

You can call the above algorithm with a varying function by
>> f = @(x) x.^3+4*x.^2-10;
>> [c,err,fc]=bisect(f,1,2,0.005)
(The returned values c, err, and fc are lost in this transcription.)

Example: Consider the bisection method applied to find the zero of the function $f(x) = x^3 + 4x^2 - 10$ on the given interval. What are $a_1, b_1$? What are $p_1, p_2, p_3$?
Answer: (The iteration output for k = 1, 2, 3 is lost in this transcription.)

Example: In the bisection method, does the limit $\lim_{n\to\infty} p_n$ exist?


2.2. Fixed-Point Iteration

Definition: A number $p$ is a fixed point for a given function $g$ if $g(p) = p$.

Note: Given a root-finding problem $f(p) = 0$, let $g(x) = x - f(x)$. Then, since $g(p) = p - f(p) = p$, the above defines a fixed-point problem.

Example: Find any fixed points of the given function. Answer: (Maple computations omitted in this transcription.)

[Figure: the fixed point, at the intersection of $y = g(x)$ and $y = x$.]

Theorem: If $g \in C[a,b]$ and $g(x) \in [a,b]$ for all $x \in [a,b]$, then $g$ has at least one fixed point in $[a,b]$. If, in addition, $g$ is differentiable in $(a,b)$ and there exists a positive constant $K < 1$ such that $|g'(x)| \le K$ for all $x \in (a,b)$, then there is a unique fixed point in $[a,b]$.

Example: Show that the given function has a unique fixed point on the given interval.

Proof of the Theorem: If $g(a) = a$ or $g(b) = b$, then $g$ has a fixed point at an endpoint. If not, then $g(a) > a$ and $g(b) < b$. Define $h(x) = g(x) - x$. Then $h(a) > 0$ and $h(b) < 0$. Thus, by the IVT, there is $p \in (a,b)$ such that $h(p) = 0$, which implies that $g(p) = p$.

In addition, suppose that $|g'(x)| \le K < 1$ for all $x \in (a,b)$, and let $p$ and $q$ be two fixed points with $p \ne q$. Then, by the MVT,
  $|p - q| = |g(p) - g(q)| = |g'(\xi)|\,|p - q| \le K\,|p - q| < |p - q|$, for some $\xi$ between $p$ and $q$,
which is a contradiction.

Fixed-Point Iteration

Definition: A fixed-point iteration is an iterative procedure of the form: for a given $p_0$,
  $p_{n+1} = g(p_n)$, for $n \ge 0$.
If the sequence $\{p_n\}$ converges to $p$, then, since $g$ is continuous, we have
  $p = \lim_{n\to\infty} p_{n+1} = \lim_{n\to\infty} g(p_n) = g(p)$.
This implies that the limit of the sequence is a fixed point of $g$, i.e., the iteration converges to a fixed point.

Example: The equation $x^3 + 4x^2 - 10 = 0$ has a unique root in $[1, 2]$. There are many ways to change the equation to the fixed-point form $x = g(x)$. (The six choices a.-f. are lost in this transcription; the starred ones converge.) The associated fixed-point iteration may not converge for some choices of $g$. The real root of the equation is $p = 1.36523\ldots$

Evaluation of the iterates by FPI for the various choices of $g$: (Maple output omitted in this transcription.)

Fixed-Point Theorem: Let $g \in C[a,b]$ be such that $g(x) \in [a,b]$ for all $x \in [a,b]$. Suppose that $g$ is differentiable in $(a,b)$ and there exists a positive constant $K < 1$ such that
  $|g'(x)| \le K$, for all $x \in (a,b)$.
Then, for any number $p_0 \in [a,b]$, the sequence defined by
  $p_n = g(p_{n-1})$, $n \ge 1$,
converges to the unique fixed point $p \in [a,b]$.

Proof: It follows from the previous theorem that there exists a unique fixed point $p$, i.e., $g(p) = p \in [a,b]$. Since $g(x) \in [a,b]$ for all $x \in [a,b]$, we have $p_n \in [a,b]$ for all $n$ and, by the MVT,
  $|p_n - p| = |g(p_{n-1}) - g(p)| = |g'(\xi_n)|\,|p_{n-1} - p| \le K\,|p_{n-1} - p|$, for some $\xi_n$ between $p_{n-1}$ and $p$.
Therefore,
  $|p_n - p| \le K\,|p_{n-1} - p| \le \cdots \le K^n\,|p_0 - p| \to 0$, as $n \to \infty$.

Notes:
  $|p_n - p| \le K^n \max\{p_0 - a,\; b - p_0\}$, and
  $|p_n - p| \le \frac{K^n}{1 - K}\,|p_1 - p_0|$, for all $n \ge 1$.
(Here we have used the MVT for the last inequality.) Thus, the convergence is at least linear. The same conclusions hold for $g$ defined on any closed subset of $\mathbb{R}$.

By a contractive mapping, we mean a function $g$ that satisfies
  $|g(x) - g(y)| \le \lambda\,|x - y|$, for some $\lambda < 1$, for all $x, y$.
Note: If a contractive mapping is differentiable, then the above implies that $|g'(x)| \le \lambda$ for all $x$.

In practice, $p$ is not known. Consider the following:
  $|p_{n+1} - p_n| = |g(p_n) - g(p_{n-1})| \le K\,|p_n - p_{n-1}|$.
Thus, we have
  $|p - p_n| \le \frac{K}{1 - K}\,|p_n - p_{n-1}|$,
which is useful for stopping the iteration: the error is controlled by the difference of successive iterates.
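The stopping criterion above, based on successive iterates, is exactly what an implementation uses. A Python sketch (names ours; the test function $g(x) = \cos x$ is a standard contraction near its fixed point, not the notes' example):

```python
import math

def fixed_point(g, p0, tol=1e-10, maxit=200):
    """Fixed-point iteration p_{n+1} = g(p_n);
    stops when successive iterates differ by less than tol."""
    p = p0
    for _ in range(maxit):
        p_new = g(p)
        if abs(p_new - p) < tol:
            return p_new
        p = p_new
    raise RuntimeError("fixed-point iteration did not converge")

p = fixed_point(math.cos, 1.0)   # the fixed point of cos x, approximately 0.739085
```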

Example: For each of the following equations, (1) determine an interval on which the fixed-point iteration will converge, and (2) estimate the number of iterations necessary to obtain approximations accurate to within the given tolerance. a. b. c. d.

Solution: Plots. [Figures: $y = g_1(x)$ and $y = g_2(x)$ plotted against $y = x$; Maple output omitted in this transcription.]

[Figures: $y = g_3(x)$ and $y = g_4(x)$ plotted against $y = x$; Maple output omitted in this transcription.]

Example: Prove that the sequence $\{x_n\}$ defined recursively as follows is convergent. (The recursion is lost in this transcription.)
Solution: Begin by setting $g$ on an appropriate closed interval; then show $g$ is a contractive mapping.


2.3. Newton's (Newton-Raphson) Method and Its Variants

Let $p$ be a zero of $f$ and $p_0$ an approximation of $p$, with
  $p = p_0 + h$.   (1)
Our momentary concern is how to find the correction $h$. If $f''$ exists and is continuous, then by Taylor's Theorem
  $0 = f(p) = f(p_0 + h) = f(p_0) + h\,f'(p_0) + \frac{h^2}{2}\,f''(\xi)$,
where $\xi$ lies between $p$ and $p_0$. If $h$ is small, it is reasonable to ignore the last term and solve for $h$:
  $h = -\frac{f(p_0)}{f'(p_0)}$.
Then, $p_1 = p_0 - f(p_0)/f'(p_0)$ may be a better approximation of $p$ than $p_0$. This has motivated Newton's method:
  $p_{n+1} = p_n - \frac{f(p_n)}{f'(p_n)}, \quad n \ge 0.$
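The iteration above can be sketched directly. A Python version (the notes work in Maple; the function name `newton` is ours), applied to the running example of these notes, $f(x) = x^3 + 4x^2 - 10$ on $[1, 2]$:

```python
def newton(f, df, p0, tol=1e-12, maxit=50):
    """Newton's method: p_{n+1} = p_n - f(p_n)/f'(p_n)."""
    p = p0
    for _ in range(maxit):
        p_new = p - f(p) / df(p)
        if abs(p_new - p) < tol:
            return p_new
        p = p_new
    return p

root = newton(lambda x: x**3 + 4*x**2 - 10,
              lambda x: 3*x**2 + 8*x,
              1.5)
# converges to the root 1.365230013... in a handful of iterations
```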

Graphical Interpretation: Consider the tangent line passing through $(p_n, f(p_n))$:
  $y = f(p_n) + f'(p_n)\,(x - p_n)$.
Let $y = 0$. Then
  $x = p_n - \frac{f(p_n)}{f'(p_n)}$,
which is $p_{n+1}$, the $x$-intercept of the tangent line.

[Figure: Newton's method applied to $f(x) = x^2$, with initial point $p_0 = 1$; the iterates and tangent lines are shown.]

Example of Nonconvergence:

[Figure: 3 iterations of Newton's method applied to $f(x) = \arctan x$, with initial point $p_0 = \pi/2$; the iterates and tangent lines are shown, and the iterates move away from the root.]

Notes: The method requires $f'(p_n) \ne 0$ for every $n$. As a matter of fact, Newton's method is most effective when $f'$ is bounded away from zero near $p$.

Convergence Analysis: Let $e_n = p_n - p$. Then
  $e_{n+1} = p_{n+1} - p = p_n - \frac{f(p_n)}{f'(p_n)} - p = e_n - \frac{f(p_n)}{f'(p_n)} = \frac{e_n\,f'(p_n) - f(p_n)}{f'(p_n)}$.
By Taylor's Theorem, we have
  $0 = f(p) = f(p_n - e_n) = f(p_n) - e_n\,f'(p_n) + \frac{e_n^2}{2}\,f''(\xi_n)$,
so that $e_n\,f'(p_n) - f(p_n) = \frac{e_n^2}{2}\,f''(\xi_n)$. Thus,
  $e_{n+1} = \frac{f''(\xi_n)}{2\,f'(p_n)}\,e_n^2$.

Theorem: Convergence of Newton's Method. Let $f \in C^2[a,b]$ and $p \in (a,b)$ be such that $f(p) = 0$ and $f'(p) \ne 0$. Then, there is a neighborhood of $p$ such that if Newton's method is started in that neighborhood, it generates a convergent sequence $\{p_n\}$ satisfying
  $|p_{n+1} - p| \le C\,|p_n - p|^2$
for a positive constant $C$.

Example: (Maple computation omitted in this transcription.) Since the leading error term vanishes in this case, one observes an occasional super-convergence.

Theorem on Newton's Method for a Convex Function: Let $f$ be increasing, convex, and possessing a zero. Then, the zero is unique, and the Newton iteration will converge to it from any starting point.

Example: Use Newton's method to find the square root of a positive number $Q$. Solution: Let $x = \sqrt{Q}$. Then $x$ is a root of $x^2 - Q = 0$. Set $f(x) = x^2 - Q$, so $f'(x) = 2x$. Newton's method reads
  $p_{n+1} = p_n - \frac{p_n^2 - Q}{2 p_n} = \frac{1}{2}\left(p_n + \frac{Q}{p_n}\right)$.


Implicit Functions

For a function implicitly defined as $F(x, y) = 0$, if $x$ is prescribed, then the equation can be solved for $y$ using Newton's method:
  $y_{n+1} = y_n - \frac{F(x, y_n)}{\partial F/\partial y\,(x, y_n)}$.

Example: Produce a table of $(x, y)$ values, where $y$ is defined implicitly as a function of $x$ by the given $F(x, y) = 0$. Use the given starting point and proceed in steps of 0.1 to the given endpoint. Solution: (Maple computations omitted in this transcription.)

(The computed table of $x$, $y$, and the residual $F(x, y)$ is lost in this transcription; the residuals are at roundoff level.)

Systems of Nonlinear Equations: Newton's method for systems of nonlinear equations follows the same strategy that was used for a single equation. That is, we linearize, solve for corrections, and update the solution, repeating the steps as often as necessary. For an illustration, we begin with a pair of equations involving two variables:
  $f_1(x_1, x_2) = 0, \quad f_2(x_1, x_2) = 0.$
Supposing that $(x_1, x_2)$ is an approximate solution of the system, let us compute corrections $(h_1, h_2)$ so that $(x_1 + h_1,\; x_2 + h_2)$ will be a better approximate solution:
  $0 \approx f_i(x_1 + h_1, x_2 + h_2) \approx f_i(x_1, x_2) + h_1\,\frac{\partial f_i}{\partial x_1} + h_2\,\frac{\partial f_i}{\partial x_2}, \quad i = 1, 2.$
The coefficient matrix combining $(h_1, h_2)$ linearly on the right of the above is the Jacobian of $(f_1, f_2)$ at $(x_1, x_2)$:
  $J(x_1, x_2) = \begin{bmatrix} \partial f_1/\partial x_1 & \partial f_1/\partial x_2 \\ \partial f_2/\partial x_1 & \partial f_2/\partial x_2 \end{bmatrix}.$
Hence, Newton's method for two nonlinear equations in two variables is
  $\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}^{(k+1)} = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}^{(k)} + \begin{bmatrix} h_1 \\ h_2 \end{bmatrix}^{(k)}$, where $J\big(x^{(k)}\big) \begin{bmatrix} h_1 \\ h_2 \end{bmatrix}^{(k)} = -\begin{bmatrix} f_1\big(x^{(k)}\big) \\ f_2\big(x^{(k)}\big) \end{bmatrix}.$

In general, the system of $n$ nonlinear equations can be expressed as
  $F(\mathbf{x}) = \mathbf{0}$,
where $F = (f_1, \ldots, f_n)^T$ and $\mathbf{x} = (x_1, \ldots, x_n)^T$. Then
  $\mathbf{0} = F(\mathbf{x} + \mathbf{h}) \approx F(\mathbf{x}) + J(\mathbf{x})\,\mathbf{h}$,
where $J(\mathbf{x}) = \big[\partial f_i/\partial x_j\big]$ is the Jacobian of $F$ at $\mathbf{x}$. The correction vector is obtained as $\mathbf{h} = -J(\mathbf{x})^{-1} F(\mathbf{x})$. Hence, Newton's method for $n$ nonlinear equations in $n$ variables is given by
  $\mathbf{x}^{(k+1)} = \mathbf{x}^{(k)} + \mathbf{h}^{(k)}$,
where $\mathbf{h}^{(k)}$ is the solution of the linear system
  $J\big(\mathbf{x}^{(k)}\big)\,\mathbf{h}^{(k)} = -F\big(\mathbf{x}^{(k)}\big)$.
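For the two-variable case the correction system is only 2x2, so it can be solved in closed form. A Python sketch (names ours; the example system $x^2 + y^2 = 4$, $xy = 1$ is ours, since the notes' system is lost), using Cramer's rule for the 2x2 solve:

```python
def newton2(f, g, jac, x0, y0, tol=1e-12, maxit=50):
    """Newton's method for two equations f(x,y)=0, g(x,y)=0.
    jac(x, y) returns the Jacobian entries (fx, fy, gx, gy);
    the correction system J [hx, hy]^T = -[f, g]^T is solved by Cramer's rule."""
    x, y = x0, y0
    for _ in range(maxit):
        fx, fy, gx, gy = jac(x, y)
        d = fx * gy - fy * gx            # det of the Jacobian
        fv, gv = f(x, y), g(x, y)
        hx = (-fv * gy + gv * fy) / d
        hy = (-gv * fx + fv * gx) / d
        x, y = x + hx, y + hy
        if abs(hx) + abs(hy) < tol:
            return x, y
    return x, y

x, y = newton2(lambda x, y: x**2 + y**2 - 4.0,
               lambda x, y: x * y - 1.0,
               lambda x, y: (2 * x, 2 * y, y, x),
               2.0, 0.5)
```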

Example: Starting from the given initial point, carry out 6 iterations of Newton's method to find a root of the given nonlinear system. Solution: (Maple computations omitted in this transcription.)

(The iteration history is lost in this transcription.)

The Secant Method

Newton's method, defined as
  $p_{n+1} = p_n - \frac{f(p_n)}{f'(p_n)}$,
is a powerful technique; however, it has a major drawback: the need to know the value of the derivative of $f$ at each iteration. Frequently, $f'$ is far more difficult to calculate than $f$. To overcome the disadvantage, a number of methods have been proposed. One of the most popular variants is the secant method, which replaces $f'(p_n)$ by a difference quotient:
  $f'(p_n) \approx \frac{f(p_n) - f(p_{n-1})}{p_n - p_{n-1}}$.
Thus, the resulting algorithm reads
  $p_{n+1} = p_n - f(p_n)\,\frac{p_n - p_{n-1}}{f(p_n) - f(p_{n-1})}$.

Notes:
  Two initial values $p_0, p_1$ must be given.
  It requires only one new evaluation of $f$ per step.
  The graphical interpretation of the secant method is similar to that of Newton's method.
  Convergence: the order is $(1 + \sqrt{5})/2 \approx 1.62$, the golden ratio.

Graphical interpretation:
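The secant update translates directly to code; note that only one new function evaluation is needed per step, since the previous value is carried along. A Python sketch (name ours), tried on $f(x) = x^3 - 1$ with the initial points $1.5$ and $0.5$ from the notes' figure:

```python
def secant(f, p0, p1, tol=1e-12, maxit=50):
    """Secant method: Newton's method with f'(p_n) replaced by the
    difference quotient through the last two iterates."""
    f0, f1 = f(p0), f(p1)
    for _ in range(maxit):
        p2 = p1 - f1 * (p1 - p0) / (f1 - f0)
        if abs(p2 - p1) < tol:
            return p2
        p0, f0 = p1, f1
        p1, f1 = p2, f(p2)     # one new evaluation of f per step
    return p1

root = secant(lambda x: x**3 - 1.0, 1.5, 0.5)   # converges to the root 1
```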

[Figure: 3 iterations of the secant method applied to $f(x) = x^3 - 1$ with initial points $a = 1.5$ and $b = 0.5$.]

Here, $p_{n+1}$ is the $x$-intercept of the secant line joining $(p_{n-1}, f(p_{n-1}))$ and $(p_n, f(p_n))$.

Example: Apply one iteration of the secant method to find $p_2$ if the given $p_0$, $p_1$, and $f$ are used. Solution: (computation omitted in this transcription).

The Method of False Position: It generates approximations in a similar manner as the secant method; however, it includes a test to ensure that the root is always bracketed between successive iterations.

  Select $p_0$ and $p_1$ such that $f(p_0)\,f(p_1) < 0$.
  Compute $p_2$ = the $x$-intercept of the line joining $(p_0, f(p_0))$ and $(p_1, f(p_1))$.
  If ( $f(p_1)\,f(p_2) < 0$ ), then ($p_1$ and $p_2$ bracket the root)
    Choose $p_3$ = the $x$-intercept of the line joining $(p_1, f(p_1))$ and $(p_2, f(p_2))$;
  else
    Choose $p_3$ = the $x$-intercept of the line joining $(p_0, f(p_0))$ and $(p_2, f(p_2))$;
  end if

Graphical interpretation:

[Figure: 3 iterations of the method of false position applied to $f(x) = x^3 - 1$ with initial points $a = 1.5$ and $b = 0.5$. Here, the root is bracketed in all iterations.]

Comparison: Convergence Speed. Find a root for the given $f$, starting with the given initial points. (Maple computations omitted in this transcription.)

(The comparison table of the iterates $p_n$ for Newton's method, the secant method, and the method of false position is lost in this transcription.)

2.4. Zeros of Polynomials

A polynomial of degree $n$ has the form
  $P(x) = a_n x^n + a_{n-1} x^{n-1} + \cdots + a_1 x + a_0$,
where the $a_i$'s are called the coefficients of $P$ and $a_n \ne 0$.

Theorems on Polynomials

Fundamental Theorem of Algebra: Every nonconstant polynomial has at least one root (possibly in the complex field).

Complex Roots of Polynomials: A polynomial of degree $n$ has exactly $n$ roots in the complex plane, it being agreed that each root shall be counted a number of times equal to its multiplicity. That is, there are unique (complex) constants $x_1, \ldots, x_k$ and unique integers $m_1, \ldots, m_k$ with $\sum_{i=1}^{k} m_i = n$ such that
  $P(x) = a_n (x - x_1)^{m_1} (x - x_2)^{m_2} \cdots (x - x_k)^{m_k}.$

Localization of Roots: All roots of the polynomial $P$ lie in the open disk centered at the origin and of radius
  $\rho = 1 + \frac{1}{|a_n|} \max_{0 \le i < n} |a_i|.$

Uniqueness of Polynomials: Let $P$ and $Q$ be polynomials of degree $n$. If $x_1, \ldots, x_r$, with $r > n$, are distinct numbers with $P(x_i) = Q(x_i)$ for $i = 1, \ldots, r$, then $P(x) = Q(x)$ for all values of $x$. In particular, two polynomials of degree $\le n$ are the same if they agree at $n + 1$ values.

Horner's Method

Known as nested multiplication and also as synthetic division, Horner's method can evaluate polynomials very efficiently; it requires $n$ multiplications and $n$ additions to evaluate an arbitrary $n$th-degree polynomial.

Let us try to evaluate $P(x)$ at $x = x_0$. Then, utilizing the Remainder Theorem, we first can rewrite the polynomial as
  $P(x) = (x - x_0)\,Q(x) + b_0, \quad b_0 = P(x_0),$   (1)
where $Q$ is a polynomial of degree $n - 1$, say
  $Q(x) = b_n x^{n-1} + b_{n-1} x^{n-2} + \cdots + b_2 x + b_1.$
Substituting the above into (1) and setting equal the coefficients of like powers of $x$ on the two sides of the resulting equation, we have
  $b_n = a_n; \qquad b_k = a_k + x_0\,b_{k+1}, \quad k = n-1, n-2, \ldots, 0,$
which can be computed in a single pass. If the calculation of Horner's algorithm is to be carried out with pencil and paper, the following arrangement is often used (known as synthetic division):

        a_n    a_{n-1}      a_{n-2}    ...    a_0
  x_0 |       x_0 b_n    x_0 b_{n-1}   ...  x_0 b_1
        b_n    b_{n-1}      b_{n-2}    ...  P(x_0) = b_0

Example: Use Horner's algorithm to evaluate $P(3)$, where
  $P(x) = x^4 - 4x^3 + 7x^2 - 5x - 2.$
Solution: We arrange the calculation as mentioned above:

      1   -4    7   -5   -2
  3 |       3   -3   12   21
      1   -1    4    7   19 = P(3)

Thus $P(3) = 19$, and
  $P(x) = (x - 3)(x^3 - x^2 + 4x + 7) + 19.$

Recall Newton's method applied for finding an approximate zero of $P$:
  $p_{n+1} = p_n - \frac{P(p_n)}{P'(p_n)}$.
When the method is being used to find an approximate zero of a polynomial, both $P$ and $P'$ must be evaluated at the same point in each iteration. The derivative can be evaluated by using Horner's method with the same efficiency. Indeed, differentiating (1) reads
  $P'(x) = Q(x) + (x - x_0)\,Q'(x)$.
Thus $P'(x_0) = Q(x_0)$.

Example: Evaluate $P'(3)$ for the $P$ considered in the previous example. Solution: As in the previous example, we arrange the calculation and carry out the synthetic division one more time:

      1   -4    7   -5   -2
  3 |       3   -3   12   21
      1   -1    4    7   19 = P(3)
  3 |       3    6   30
      1    2   10   37 = Q(3) = P'(3)

Thus $P'(3) = 37$.

Example: Implement Horner's algorithm to evaluate $P(x_0)$ and $P'(x_0)$, where $P$ is as above. Solution: (Maple implementation omitted in this transcription; it is equivalent to the synthetic-division arrangement above.)

107 The procedure returns P(3) = 19 and P'(3) = 37. The Maple command coeff can be used to extract the coefficients of a polynomial. Horner's method will be presented in a more systematic fashion when we deal with Polynomial Interpolation.
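The notes implement this in Maple; the same one-pass algorithm can be sketched in Python (the function name `horner` is ours). The derivative is obtained by running the synthetic division a second time on the coefficients of Q as it is produced:

```python
def horner(a, x0):
    """Evaluate P(x0) and P'(x0) by two fused synthetic divisions.

    a -- coefficients [a_n, a_{n-1}, ..., a_1, a_0] in descending order.
    Returns (P(x0), P'(x0)).
    """
    b = a[0]   # accumulator for the b_k: ends as b_0 = P(x0)
    c = a[0]   # second division on Q's coefficients: ends as Q(x0) = P'(x0)
    for ak in a[1:-1]:
        b = ak + x0 * b
        c = b + x0 * c
    b = a[-1] + x0 * b     # last step updates only the value, not the derivative
    return b, c

# P(x) = x^4 - 4x^3 + 7x^2 - 5x - 2 at x0 = 3, as in the example above
print(horner([1, -4, 7, -5, -2], 3))  # -> (19, 37)
```

Each loop pass costs one multiply-add per accumulator, so P and P' together cost about 2n multiplications.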

108 Complex Zeros: Finding Quadratic Factors

Quadratic Factors of Real-coefficient Polynomials:

Theorem on Real Quadratic Factor: If P is a polynomial whose coefficients are all real, and if z is a nonreal root of P, then the conjugate z is also a root and

(x - z)(x - conj(z)) = x^2 - 2 Re(z) x + |z|^2

is a real quadratic factor of P.

Polynomial Factorization: If P is a nonconstant polynomial with real coefficients, then it can be factorized as a product of linear and quadratic polynomials whose coefficients are all real.

Theorem on Quotient and Remainder: If the polynomial P(x) = sum_{k=0}^{n} a_k x^k is divided by the quadratic polynomial x^2 - u x - v, then the quotient Q(x) = sum_{k=2}^{n} b_k x^{k-2} and remainder r(x) = b_1 (x - u) + b_0 can be computed recursively by setting b_{n+1} = b_{n+2} = 0 and then using

b_k = a_k + u b_{k+1} + v b_{k+2},   k = n, n-1, ..., 0.

109 Bairstow's Method

Bairstow's method seeks a real quadratic factor of P(x) of the form x^2 - u x - v. For simplicity, all the coefficients a_k's are real so that both u and v will be real. In order for the quadratic polynomial to be a factor of P, the remainder r(x) = b_1 (x - u) + b_0 must be zero. That is, the process seeks (u, v) such that

b_0(u, v) = b_1(u, v) = 0.

Note that b_0 and b_1 must be functions of (u, v), which is clear from the last theorem. An outline of the process is as follows: Starting values are assigned to (u, v). We seek corrections (du, dv) so that

b_0(u + du, v + dv) = b_1(u + du, v + dv) = 0.

Linearization of these equations reads

b_1(u, v) + (db_1/du) du + (db_1/dv) dv = 0,
b_0(u, v) + (db_0/du) du + (db_0/dv) dv = 0.

Thus, the corrections can be found by solving the linear system

[ db_1/du   db_1/dv ] [ du ]     [ -b_1 ]
[ db_0/du   db_0/dv ] [ dv ]  =  [ -b_0 ].

110 Now, the question is how to compute the Jacobian matrix. Following the approach that first appeared in the appendix of the 1920 book "Applied Aerodynamics" by Leonard Bairstow, we consider the partial derivatives c_k = db_k/du and d_k = db_k/dv. Differentiating the recurrence relation (the shaded, boldface equation in the last theorem) results in the following pair of additional recurrences:

c_k = b_{k+1} + u c_{k+1} + v c_{k+2},   c_{n+1} = c_{n+2} = 0,
d_k = b_{k+2} + u d_{k+1} + v d_{k+2},   d_{n+1} = d_{n+2} = 0.

Note that these recurrence relations obviously generate the same two sequences up to an index shift (d_k = c_{k+1}); we need only the first. The Jacobian explicitly reads

J = [ c_1   c_2 ]
    [ c_0   c_1 ],

and therefore (du, dv) solves J (du, dv)^T = -(b_1, b_0)^T. We summarize the above procedure in the following code:

111 Bairstow's algorithm: 111
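The notes' Bairstow code is a Maple/Matlab procedure; the following Python sketch (the function name `bairstow` is ours) implements the b- and c-recurrences and the Newton correction described on the previous pages, for the factor x^2 - u x - v of P(x) = sum a[k] x^k with coefficients given in ascending order:

```python
def bairstow(a, u, v, tol=1e-12, maxit=100):
    """Seek a real quadratic factor x^2 - u*x - v of P(x) = sum_k a[k] x^k.

    a -- real coefficients in ascending order [a_0, a_1, ..., a_n].
    Returns (u, v) after driving the remainder (b_1, b_0) toward zero.
    """
    n = len(a) - 1
    for _ in range(maxit):
        b = [0.0] * (n + 3)          # b[n+1] = b[n+2] = 0 padding
        c = [0.0] * (n + 3)          # c_k = d b_k / d u;  d b_k / d v = c_{k+1}
        for k in range(n, -1, -1):
            b[k] = a[k] + u * b[k + 1] + v * b[k + 2]
            c[k] = b[k + 1] + u * c[k + 1] + v * c[k + 2]
        # Solve [[c1, c2], [c0, c1]] (du, dv)^T = -(b1, b0)^T by Cramer's rule
        det = c[1] * c[1] - c[2] * c[0]
        du = (-b[1] * c[1] + b[0] * c[2]) / det
        dv = (-b[0] * c[1] + b[1] * c[0]) / det
        u, v = u + du, v + dv
        if abs(du) + abs(dv) < tol:
            break
    return u, v

# (x^2 - 3x + 2)(x^2 + 1) = x^4 - 3x^3 + 3x^2 - 3x + 2; start near (3, -2)
print(bairstow([2.0, -3.0, 3.0, -3.0, 1.0], 2.8, -1.8))
```

Once (u, v) has converged, the two zeros of the quadratic factor follow from the quadratic formula applied to x^2 - u x - v, and the deflated quotient has coefficients b_2, ..., b_n.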

112 e e e e e e-17 Q(x) = (1)x^2 + ( )x^1 + ( ) Remainder: e-18 (x - ( )) + ( e-16) Quadratic Factor: x^2 - ( )x - ( ) Zeros: ( ) i 112

113 Deflation

Given a polynomial P of degree n (say, n >= 3), suppose Newton's method finds an approximate zero x_1 of P. Then P can be written as

P(x) ~ (x - x_1) Q_1(x).

We can find a second approximate zero (or a quadratic factor) of P by applying Newton's method to the reduced polynomial Q_1; the computation continues up to the point that P is factorized by linear and quadratic factors. The procedure is called deflation.

The accuracy difficulty with deflation is due to the fact that when we obtain the approximate zeros of P, Newton's method is used on the reduced polynomials. An approximate zero of a reduced polynomial will generally not approximate a root of P as well as it approximates a root of the reduced polynomial, and the inaccuracy increases as the deflation step increases. One way to overcome this difficulty is to (a) use the method of reduced equations to find approximate zeros and then (b) improve these zeros by applying Newton's method to the original polynomial P.


115 Homework 2. Solutions of Equations in One Variable #1. Let the bisection method be applied to a continuous function, resulting in intervals. Let and. Which of these statements can be false? a. b. c. d. e. as #2. Modify the provided Matlab code for the bisection method to incorporate Consider the following equations defined on the given intervals: I. II. For each of the above equations, a. Use Maple to find the analytic solution on the interval. b. Find the approximate root by using your Matlab with. c. Report, for, in a table format. 115

116 #3. Let us try to find by using fixed-point iterations. Use the fact that the result must be the positive solution of to solve the following: a. Introduce three different fixed-point forms of which at least one is convergent. b. Rank the associated iterations based on their apparent speed of convergence for . c. Perform three iterations, if possible, on each of the iterations with , and measure . #4. Kepler's equation in astronomy reads , with . a. Show that for each , there exists an satisfying the equation. b. Interpret this as a fixed-point problem. c. Find 's for using the fixed-point iteration. Set . (Hint: For (a), you may have to use the IVT for defined on , while for (b) you should rearrange the equation in the form of . For (c), you may use any program which utilizes the fixed-point iteration.) #5. Consider a variation of Newton's Method in which only one derivative is needed; that is, Find and such that (Hint: You may have to use ) #6. Starting with , carry out two iterations of Newton's method on the system:

117 #7. Consider the polynomial
a. Use Horner's algorithm to find its value at the given point.
b. Use Newton's method to find a real-valued root, starting with the given initial point and applying Horner's algorithm for the evaluation of P and P'.
c. Apply Bairstow's method, with the given initial point, to find a pair of complex-valued zeros.
d. Find a disk centered at the origin that contains all the roots.


119 3. Curve Fitting: Interpolation and Approximation In This Chapter: Topics Polynomial interpolation Newton form Lagrange form Chebyshev polynomial Divided differences Neville's method Applications/Properties The first step toward approximation theory Basis functions for various applications including FEM Optimized interpolation Evaluation of interpolating polynomials Hermite interpolation Requires and Spline interpolation B-splines Parametric curves Rational interpolation FEM for 4th-order PDEs Less oscillatory interpolation Curves in the plane or space Interpolation of rough data with minimum oscillation Research project 119

120 3.1. Polynomial Interpolation

Each continuous function can be approximated (arbitrarily closely) by a polynomial, and polynomials of degree <= n interpolating values at n+1 distinct points are all the same polynomial, as shown in the following theorems.

Weierstrass Approximation Theorem: Suppose f is continuous on [a, b]. Then, for each eps > 0, there exists a polynomial P(x) such that

|f(x) - P(x)| < eps,  for all x in [a, b].

Example:

121 [Figure: f(x) with the interpolating polynomials p0, p2, p4, ...]

Theorem on Polynomial Interpolation: If x_0, x_1, ..., x_n are distinct real numbers, then for arbitrary values y_0, y_1, ..., y_n, there is a unique polynomial p_n of degree at most n such that p_n(x_i) = y_i, 0 <= i <= n.

Proof: (Uniqueness). Suppose there were two such polynomials, p_n and q_n. Then the polynomial p_n - q_n would have the property (p_n - q_n)(x_i) = 0 for 0 <= i <= n. Since the degree of p_n - q_n is at most n, the polynomial can have at most n zeros unless it is a zero polynomial. Since x_0, x_1, ..., x_n are distinct, p_n - q_n has n+1 zeros and therefore it must be 0. Hence, p_n = q_n.

(Existence). For the existence part, we proceed inductively through construction. For n = 0, the existence is obvious since we may choose the constant function p_0(x) = y_0. Now suppose that we have obtained a polynomial p_{k-1} of degree

122 with p_{k-1}(x_i) = y_i for 0 <= i <= k-1. We try to construct p_k in the form

p_k(x) = p_{k-1}(x) + c (x - x_0)(x - x_1) ... (x - x_{k-1})

for some constant c. Note that this is unquestionably a polynomial of degree <= k. Furthermore, p_k interpolates the data that p_{k-1} interpolates:

p_k(x_i) = p_{k-1}(x_i) = y_i,   0 <= i <= k-1.

Now we determine the constant c to satisfy the condition p_k(x_k) = y_k, which leads to

p_{k-1}(x_k) + c (x_k - x_0)(x_k - x_1) ... (x_k - x_{k-1}) = y_k.

This equation can certainly be solved for c:

c = (y_k - p_{k-1}(x_k)) / ((x_k - x_0)(x_k - x_1) ... (x_k - x_{k-1})),

because the denominator is not zero. (Why?)

123 Newton Form of the Interpolating Polynomials

As in the proof of the previous theorem, each p_k is obtained by adding a single term to p_{k-1}. Thus, at the end of the process, p_n will be a sum of n+1 terms and p_0, p_1, ..., p_{n-1} will be easily visible in the expression of p_n. Each p_k has the form

p_k(x) = c_0 + c_1 (x - x_0) + c_2 (x - x_0)(x - x_1) + ... + c_k (x - x_0) ... (x - x_{k-1}).

The compact form of this is

p_k(x) = sum_{i=0}^{k} c_i prod_{j=0}^{i-1} (x - x_j).    (1)

(Here the convention has been adopted that prod_{j=0}^{i-1} (x - x_j) = 1 when i = 0.) The first few cases of (1) are

p_0(x) = c_0,
p_1(x) = c_0 + c_1 (x - x_0),
p_2(x) = c_0 + c_1 (x - x_0) + c_2 (x - x_0)(x - x_1).

These polynomials are called the interpolation polynomials in Newton's form.

Illustration of Newton's interpolating polynomials:

124 (5.2) (5.3) (5.4) (5.5) 124

125 [Figure: f(x) with the Newton interpolants p0, p1, p2, p3]

Evaluation of p_n(x), assuming that the coefficients c_0, ..., c_n are known: We may use an efficient method called nested multiplication or Horner's algorithm. This can be explained most easily for an arbitrary expression of the form

u = sum_{i=0}^{n} c_i prod_{j=0}^{i-1} (x - x_j).

The idea is to write it in the form

u = c_0 + (x - x_0) ( c_1 + (x - x_1) ( c_2 + ... + (x - x_{n-1}) c_n ... ) ).    (6.1)

126 Thus the algorithm for the evaluation of p_n(x) can be written as
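As a sketch of the evaluation just described (in Python rather than the notes' Maple/Matlab; names are ours), nested multiplication evaluates the Newton form with n multiplications:

```python
def newton_eval(c, xnodes, x):
    """Evaluate p(x) = c[0] + c[1](x-x0) + ... + c[n](x-x0)...(x-x_{n-1})
    by nested multiplication, working from the innermost parenthesis out."""
    p = c[-1]
    for k in range(len(c) - 2, -1, -1):
        p = c[k] + (x - xnodes[k]) * p
    return p

# p(x) = 1 + 3(x - 1) + 1(x - 1)(x - 2), which reproduces x^2 at x = 1, 2, 3
print(newton_eval([1, 3, 1], [1, 2], 5))  # -> 25
```

Note that only the first n nodes x_0, ..., x_{n-1} are needed for the evaluation; x_n never multiplies anything.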

127 Now we can write an algorithm for computing the coefficient (1): in Equation The computation of, using Horner's algorithm: A more efficient procedure exists that achieves the same result. The alternative method uses divided differences to compute the coefficients. The method will be presented later. 127

128 Example: For (2) Four values of this function are given as Construct the Newton form of the polynomial from the data. Newton form of the polynomial: = # Since, the coefficients are # # which is the same as the one in (2). = (8.1) 128

129 Polynomial interpolation x data points interpolating polynomial - newton given function (8.2) 129

130 Example: Find the Newton's form of the interpolating polynomial of the data Answer: 130

131 Lagrange Form of the Interpolating Polynomials

Let n+1 data points (x_i, y_i) be given, where the abscissas x_i are distinct. The interpolating polynomial will be sought in the form

p_n(x) = y_0 L_0(x) + y_1 L_1(x) + ... + y_n L_n(x),

where L_0, L_1, ..., L_n are polynomials that depend on the nodes x_0, ..., x_n, but not on the ordinates y_0, ..., y_n.

How to determine the basis {L_i}: Let all the ordinates be 0 except for a 1 occupying the i-th position, that is, y_i = 1 and the other ordinates are all zero. Then p_n(x_j) = L_i(x_j). On the other hand, the polynomial interpolating the data must satisfy p_n(x_j) = delta_{ij}, where delta_{ij} is the Kronecker delta which is 1 if i = j and 0 if i != j. Thus all the basis polynomials must satisfy

L_i(x_j) = delta_{ij}   for all i, j.

Polynomials satisfying such a property are known as the cardinal functions.

Now, let us try to construct L_i(x). It is to be an nth-degree polynomial that takes the value 0 at each x_j, j != i, and the value 1 at x_i. Clearly, it must be of the form

L_i(x) = c prod_{j != i} (x - x_j).

The constant c is obtained by putting x = x_i:

132 and. Hence, we have Each cardinal function is obtained by similar reasoning; the general formula is then Example: Find an interpolation formula for the two-point table Solution: 132

133 Example: Determine the Lagrange interpolating polynomial that passes through and. Answer:. Maple plot: Polynomial interpolation x data points interpolating polynomial - lagrange 133

134 Example: Use to find the second Lagrange interpolating polynomial for. Use to approximate. Solution: Maple: (11.1) 134

135 [Figure: f(x) and the second Lagrange interpolating polynomial p2(x)]

Polynomial Interpolation Error Theorem: Let f be in C^{n+1}[a, b], and let p_n be the polynomial of degree <= n that interpolates f at n+1 distinct points x_0, x_1, ..., x_n in the interval [a, b]. Then, for each x in [a, b], there exists a number xi(x) between the nodes and x, hence in the interval (a, b), such that

f(x) - p_n(x) = ( f^{(n+1)}(xi(x)) / (n+1)! ) prod_{i=0}^{n} (x - x_i).

136 Example: For the previous example, determine the error bound in. Solution (13.1) = = 3 8 (13.2) Thus, = Example: If the function is approximated by a polynomial of degree 5 that interpolates at six equally distributed points in including end points, how large is the error on this interval? Solution The nodes are -1, -0.6, -0.2, 0.2, 0.6, and 1. It is easy to see that. Thus, = (14.1) 136

137 Interpolation Error for Equally Spaced Nodes:

Polynomial Interpolation Error Theorem for Equally Spaced Nodes: Let f be in C^{n+1}[a, b], and let p_n be the polynomial of degree <= n that interpolates f at the equally spaced nodes x_i = a + i h, i = 0, 1, ..., n, where h = (b - a)/n. Then, for each x in [a, b],

|f(x) - p_n(x)| <= (1 / (4 (n+1))) M h^{n+1},   where   M = max_{a<=x<=b} |f^{(n+1)}(x)|.

Here, as a proof, we consider bounding prod_{i=0}^{n} |x - x_i|. Start by picking an x. We can assume that x is not one of the nodes, because otherwise the product in question is zero. Let x in (x_j, x_{j+1}), for some j. Then we have

|x - x_j| |x - x_{j+1}| <= h^2 / 4.

Now note that |x - x_i| <= (j + 1 - i) h for i < j, and |x - x_i| <= (i - j) h for i > j + 1. Thus

prod_{i=0}^{n} |x - x_i| <= (h^2 / 4) (j+1)! h^j (n-j)! h^{n-j-1}.

Since (j+1)! (n-j)! <= n!, we can reach the following bound

138 prod_{i=0}^{n} |x - x_i| <= (n! / 4) h^{n+1}.    (3)

The result of the theorem follows from the above bound.

Example: How many equally spaced nodes are required to interpolate f to within a prescribed accuracy on the interval?

139 Chebyshev Polynomials

In the Polynomial Interpolation Error Theorem, there is a term prod_{i=0}^{n} (x - x_i) that can be optimized by choosing the nodes in a special way. An analysis of this problem was first given by a great mathematician Chebyshev ( ). The optimization process leads naturally to a system of polynomials called Chebyshev polynomials.

The Chebyshev polynomials (of the first kind) are defined recursively as follows:

T_0(x) = 1,   T_1(x) = x,   T_{n+1}(x) = 2 x T_n(x) - T_{n-1}(x),   n >= 1.

The explicit forms of the next few are readily calculated:

T_2(x) = 2x^2 - 1,
T_3(x) = 4x^3 - 3x,
T_4(x) = 8x^4 - 8x^2 + 1,
T_5(x) = 16x^5 - 20x^3 + 5x.

140 [Figure: the Chebyshev polynomials T0, T1, T2, T3, T4 on [-1, 1]]

Theorem on Chebyshev Polynomials: For x in [-1, 1], the Chebyshev polynomials have this closed-form expression:

T_n(x) = cos(n arccos x),   n >= 0.

It has been verified that if the nodes x_0, ..., x_n lie in [-1, 1], then

max_{|x|<=1} prod_{i=0}^{n} |x - x_i| >= 2^{-n},

and its minimum value will be attained if

prod_{i=0}^{n} (x - x_i) = 2^{-n} T_{n+1}(x).    (4)

The nodes then must be the roots of T_{n+1}, which are, for i = 0, 1, ..., n,

x_i = cos( (2i + 1) pi / (2n + 2) ).    (5)
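The nodes in (5) are easy to generate and check numerically; a Python sketch (function names are ours), using the three-term recurrence from the previous page:

```python
import math

def chebyshev_nodes(n):
    """Roots of T_{n+1} on [-1, 1]: x_i = cos((2i+1)pi/(2n+2)), i = 0..n."""
    return [math.cos((2 * i + 1) * math.pi / (2 * n + 2)) for i in range(n + 1)]

def cheb_T(k, x):
    """Evaluate T_k(x) via T_{k+1} = 2x T_k - T_{k-1}."""
    t0, t1 = 1.0, x
    if k == 0:
        return t0
    for _ in range(k - 1):
        t0, t1 = t1, 2 * x * t1 - t0
    return t1

nodes = chebyshev_nodes(5)
print(max(abs(cheb_T(6, x)) for x in nodes))  # ~0: the six nodes are roots of T_6
```

Interpolating at these nodes minimizes the max-norm of the node polynomial, which is exactly the term optimized in (4).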

141 Theorem on Interpolation Error, Chebyshev Nodes: If the nodes are the roots of the Chebyshev polynomial T_{n+1}, as in (5), then the error bound for the nth-degree interpolating polynomial p_n reads

max_{|x|<=1} |f(x) - p_n(x)| <= (1 / (2^n (n+1)!)) max_{|t|<=1} |f^{(n+1)}(t)|.

Example: If the function is approximated by a polynomial of degree 5 that interpolates it at the roots of the Chebyshev polynomial T_6 in [-1, 1], how large is the error on this interval?

Solution: It is easy to bound max |f^{(6)}|. Thus the error is bounded by (1/(2^5 6!)) max |f^{(6)}|. It is an optimal upper bound of the error and smaller than the one in Equation (14.1).

142 Accuracy comparison between Uniform nodes and Chebyshev nodes. (20.1) 142

143 [Table: interpolation values at sample points x, uniform nodes vs. Chebyshev nodes]

More details for Chebyshev polynomials are treated in Chapter 8, which will be covered in Numerical Analysis II.


145 3.2. Divided Differences

It turns out that the coefficients c_i for the interpolating polynomials in Newton's form can be calculated relatively easily by using divided differences.

Recall: For k = 0, 1, ..., n, the kth-degree Newton interpolating polynomials are of the form

p_k(x) = c_0 + c_1 (x - x_0) + ... + c_k (x - x_0) ... (x - x_{k-1}),

for which p_k(x_i) = f(x_i), 0 <= i <= k. The first few cases are

p_0(x) = c_0,   p_1(x) = c_0 + c_1 (x - x_0),   p_2(x) = c_0 + c_1 (x - x_0) + c_2 (x - x_0)(x - x_1).

The coefficient c_0 is determined to satisfy p_0(x_0) = f(x_0). Thus, we have

c_0 = f(x_0),    (1.1)

and therefore, from p_1(x_1) = f(x_1),

c_1 = (f(x_1) - f(x_0)) / (x_1 - x_0).    (1.2)

Now, since p_2(x_2) = c_0 + c_1 (x_2 - x_0) + c_2 (x_2 - x_0)(x_2 - x_1) = f(x_2), it follows from the above and (1.1) and (1.2) that

c_2 = [ (f(x_2) - f(x_1))/(x_2 - x_1) - (f(x_1) - f(x_0))/(x_1 - x_0) ] / (x_2 - x_0).

146 We know that for distinct real numbers (nodes) x_0, x_1, ..., x_n, there is a unique polynomial p_n of degree at most n that interpolates f at the nodes: p_n(x_i) = f(x_i), 0 <= i <= n. We now introduce the divided differences.

Definition: The zeroth divided difference of the function f with respect to x_i, denoted f[x_i], is the value of f at x_i:

f[x_i] = f(x_i).

The remaining divided differences are defined recursively; the first divided difference of f with respect to x_i and x_{i+1} is defined as

f[x_i, x_{i+1}] = ( f[x_{i+1}] - f[x_i] ) / ( x_{i+1} - x_i ).

The second divided difference relative to x_i, x_{i+1}, x_{i+2} is defined as

f[x_i, x_{i+1}, x_{i+2}] = ( f[x_{i+1}, x_{i+2}] - f[x_i, x_{i+1}] ) / ( x_{i+2} - x_i ).

In general, the kth divided difference relative to x_i, x_{i+1}, ..., x_{i+k} is defined as

f[x_i, ..., x_{i+k}] = ( f[x_{i+1}, ..., x_{i+k}] - f[x_i, ..., x_{i+k-1}] ) / ( x_{i+k} - x_i ).

Note: For the kth-degree Newton interpolating polynomials, we can see c_k = f[x_0, x_1, ..., x_k]. In general,

p_n(x) = sum_{k=0}^{n} f[x_0, ..., x_k] prod_{j=0}^{k-1} (x - x_j).

147 [Divided-difference table with columns f(x_i), DD1, DD2, DD3]

Newton's Divided Difference Formula:
Input: the nodes x_0, ..., x_n and the values f(x_0), ..., f(x_n), saved as F_{i,0} = f(x_i).
Output: the coefficients F_{i,i} = f[x_0, ..., x_i].
Step 1: For i = 1, 2, ..., n:
          For j = 1, 2, ..., i:
            F_{i,j} = ( F_{i,j-1} - F_{i-1,j-1} ) / ( x_i - x_{i-j} )
Step 2: Return F_{0,0}, F_{1,1}, ..., F_{n,n}.
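A compact Python sketch of the same computation (names are ours), storing only one column of the table at a time by overwriting in place from the bottom up:

```python
def divided_differences(x, y):
    """Return the Newton coefficients c[k] = f[x_0, ..., x_k].

    Overwrites a copy of y column by column: after pass j, entry i holds
    f[x_{i-j}, ..., x_i] for i >= j.
    """
    c = list(y)
    n = len(x)
    for j in range(1, n):                  # column of the divided-difference table
        for i in range(n - 1, j - 1, -1):  # bottom-up so old values are still there
            c[i] = (c[i] - c[i - 1]) / (x[i] - x[i - j])
    return c

# f(x) = x^2 at nodes 1, 2, 3: p(x) = 1 + 3(x-1) + 1(x-1)(x-2)
print(divided_differences([1, 2, 3], [1, 4, 9]))  # -> [1, 3.0, 1.0]
```

The returned list feeds directly into the nested-multiplication evaluator of Section 3.1.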

148 Example: = In detail: Thus, 148

149 Example: Determine the Newton interpolating polynomial for the data: Example: Prove that if is a polynomial of degree, then for all. 149

150 Properties of Divided Differences:

Permutations in Divided Differences: The divided difference is a symmetric function of its arguments. That is, if (z_0, ..., z_n) is a permutation of (x_0, ..., x_n), then

f[z_0, ..., z_n] = f[x_0, ..., x_n].

Error in Newton Interpolation: Let p_n be the polynomial of degree <= n that interpolates f at n+1 distinct nodes x_0, ..., x_n. If x is a point different from the nodes, then

f(x) - p_n(x) = f[x_0, x_1, ..., x_n, x] prod_{j=0}^{n} (x - x_j).

Proof: Let p_{n+1} be the polynomial of degree at most n+1 that interpolates f at the nodes x_0, ..., x_n, x. Then, we know that p_{n+1} is obtained from p_n by adding one term. In fact,

p_{n+1}(t) = p_n(t) + f[x_0, ..., x_n, x] prod_{j=0}^{n} (t - x_j).

Since p_{n+1}(x) = f(x), the result follows.

Derivatives and Divided Differences: If f is in C^n[a, b] and if x_0, x_1, ..., x_n are distinct points in [a, b], then there exists a point xi in (a, b) such that

f[x_0, x_1, ..., x_n] = f^{(n)}(xi) / n!.

Proof: Let p_{n-1} be the polynomial of degree at most n-1 that interpolates f at x_0, ..., x_{n-1}. By the Polynomial Interpolation Error Theorem, there

151 exists a point xi in (a, b) such that

f(x_n) - p_{n-1}(x_n) = ( f^{(n)}(xi) / n! ) prod_{j=0}^{n-1} (x_n - x_j).

On the other hand, by the previous theorem, we have

f(x_n) - p_{n-1}(x_n) = f[x_0, ..., x_{n-1}, x_n] prod_{j=0}^{n-1} (x_n - x_j).

The theorem follows from the comparison of the above two equations.

Example: Prove that for , , for some . (Hint: Use the last theorem; employ the divided difference formula to find .)


153 3.3. Data Approximation and Neville's Method

We have studied how to construct interpolating polynomials. A frequent use of these polynomials involves the interpolation of tabulated data. In this case, an explicit representation of the polynomial might not be needed, only the values of the polynomial at specified points. In this situation the function underlying the data might be unknown, so the explicit form of the error cannot be used to assure the accuracy of the interpolation. Neville's Method provides an adaptive mechanism for the evaluation of accurate interpolating values.

Definition: Let f be defined at x_0, x_1, ..., x_n, and suppose that m_1, m_2, ..., m_k are distinct integers, with 0 <= m_i <= n for each i. The polynomial that agrees with f at the k points x_{m_1}, ..., x_{m_k} is denoted by P_{m_1, m_2, ..., m_k}(x).

154 Example: Suppose that and. Determine the interpolating polynomial and use this polynomial to approximate. Solution: It can be the Lagrange polynomial that agrees with at : Thus On the other hand, 154

155 Theorem: Let f be defined at x_0, x_1, ..., x_k, and let x_i and x_j be two distinct numbers in this set. Then

P_{0,1,...,k}(x) = [ (x - x_j) P_{0,...,j-1,j+1,...,k}(x) - (x - x_i) P_{0,...,i-1,i+1,...,k}(x) ] / (x_i - x_j),

which is the polynomial interpolating f at x_0, x_1, ..., x_k.

Note: The above theorem implies that the interpolating polynomial can be generated recursively. For example,

P_{0,1}(x) = [ (x - x_0) P_1(x) - (x - x_1) P_0(x) ] / (x_1 - x_0),
P_{0,1,2}(x) = [ (x - x_0) P_{1,2}(x) - (x - x_2) P_{0,1}(x) ] / (x_2 - x_0),

and so on. They are generated in the manner shown in the following table, where each row is completed before the succeeding rows are begun.

156 For simplicity in computation, we may try to avoid multiple subscripts by defining the new variable

Q_{i,j} = P_{i-j, i-j+1, ..., i}.

Then the above table can be expressed as

157 Example: Let Neville's method to approximate accuracy.. Use in a four-digit Solution: (2.1) (2.2) (2.3) Note: Thus accuracy. is already in a four-digit = = The real value is = The absolute error: 157

158 Neville's Iterated Interpolation:
Input: the nodes x_0, ..., x_n; the evaluation point x; the tolerance TOL; and the values f(x_0), ..., f(x_n) saved in the first column of Q (Q_{i,0} = f(x_i)).
Output: the table Q, with P(x) = Q_{n,n}.
Step 1: For i = 1, 2, ..., n:
          For j = 1, 2, ..., i:
            Q_{i,j} = ( (x - x_{i-j}) Q_{i,j-1} - (x - x_i) Q_{i-1,j-1} ) / ( x_i - x_{i-j} )
          If |Q_{i,i} - Q_{i-1,i-1}| < TOL, stop.
Step 2: Return Q.
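The algorithm above can be sketched in Python (names are ours) with a single array updated in place; after pass j, entry i holds Q_{i,j}:

```python
def neville(xs, ys, x):
    """Evaluate the interpolating polynomial at x via Neville's recursion.

    Q_{i,j} = ((x - x_{i-j}) Q_{i,j-1} - (x - x_i) Q_{i-1,j-1}) / (x_i - x_{i-j});
    the final q[-1] = Q_{n,n} interpolates at all the nodes.
    """
    q = list(ys)
    n = len(xs)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):   # bottom-up keeps Q_{i-1,j-1} intact
            q[i] = ((x - xs[i - j]) * q[i] - (x - xs[i]) * q[i - 1]) \
                   / (xs[i] - xs[i - j])
    return q[-1]

# f(x) = x^2 sampled at 0, 1, 2, 3; the cubic tableau reproduces x^2 exactly
print(neville([0, 1, 2, 3], [0, 1, 4, 9], 1.5))  # ~2.25
```

For the adaptive use described in the notes, one would also monitor |q[i] - q[i-1]| along the diagonal and stop when it falls below the tolerance.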

159 Example: Neville's method is used to approximate the following table., giving Determine. 159


161 3.4. Hermite Interpolation

Hermite interpolation refers to the interpolation of a function and some of its derivatives at a set of nodes. When a distinction is being made between this type of interpolation and its simpler type (in which no derivatives are interpolated), the latter is often called Lagrange interpolation.

Basic Concepts of Hermite Interpolation: For example, we require a polynomial of least degree that interpolates a function f and its derivative f' at two distinct points, say x_0 and x_1. The polynomial p sought will satisfy these four conditions:

p(x_0) = f(x_0),  p'(x_0) = f'(x_0),  p(x_1) = f(x_1),  p'(x_1) = f'(x_1).

Since there are four conditions, it seems reasonable to look for a solution in the space of all polynomials of degree at most 3. Rather than writing p in terms of 1, x, x^2, x^3, let us write it as

p(x) = a + b (x - x_0) + c (x - x_0)^2 + d (x - x_0)^2 (x - x_1),

because this will simplify the work. The four conditions on p can now be written in the form of four equations for a, b, c, d. Thus, the coefficients can be obtained easily.

162 Hermite Interpolation Theorem: If f is in C^1[a, b] and x_0, ..., x_n in [a, b] are distinct, then the unique polynomial of least degree agreeing with f and f' at x_0, ..., x_n is the Hermite polynomial of degree at most 2n+1 given by

H_{2n+1}(x) = sum_{j=0}^{n} f(x_j) H_{n,j}(x) + sum_{j=0}^{n} f'(x_j) K_{n,j}(x),

where

H_{n,j}(x) = [1 - 2 (x - x_j) L'_{n,j}(x_j)] L_{n,j}(x)^2,   K_{n,j}(x) = (x - x_j) L_{n,j}(x)^2.

Here, L_{n,j} is the jth Lagrange polynomial of degree n. Moreover, if f is in C^{2n+2}[a, b], then

f(x) = H_{2n+1}(x) + ( (x - x_0)^2 ... (x - x_n)^2 / (2n+2)! ) f^{(2n+2)}(xi).

Construction of Hermite Polynomials using Divided Differences

Recall: The polynomial p_n that interpolates f at x_0, ..., x_n is given

163 Construction of Hermite Polynomials: Define a new sequence z_0, z_1, ..., z_{2n+1} by

z_{2i} = z_{2i+1} = x_i,   i = 0, 1, ..., n.

Then the Newton form of the Hermite polynomial is given by

H_{2n+1}(x) = sum_{k=0}^{2n+1} f[z_0, ..., z_k] prod_{j=0}^{k-1} (x - z_j),

with the first divided differences at repeated nodes replaced by the derivative:

f[z_{2i}, z_{2i+1}] = f'(x_i).

Note: For each i, f[z_{2i}, z_{2i+1}] cannot be computed by the usual quotient, its denominator being zero, so it is replaced by f'(x_i).

The extended Newton divided difference table: first divided differences at repeated nodes use f'; higher divided differences are computed as usual.
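The extended table can be sketched in Python (function names are ours); the doubled nodes force the derivative value wherever the usual quotient would divide by zero:

```python
def hermite_coeffs(xs, ys, dys):
    """Newton coefficients of the Hermite polynomial via the extended
    divided-difference table with each node doubled."""
    z, c = [], []
    for xk, yk in zip(xs, ys):
        z += [xk, xk]
        c += [yk, yk]
    n = len(z)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            if z[i] == z[i - j]:          # only happens for j = 1 at a doubled node
                c[i] = dys[i // 2]        # f[z_{2k}, z_{2k+1}] = f'(x_k)
            else:
                c[i] = (c[i] - c[i - 1]) / (z[i] - z[i - j])
    return z, c

def hermite_eval(z, c, x):
    """Nested multiplication on the Newton form with repeated centers z."""
    p = c[-1]
    for k in range(len(c) - 2, -1, -1):
        p = c[k] + (x - z[k]) * p
    return p

# f(x) = x^3: values (0, 1) and derivatives (0, 3) at nodes 0, 1.
# The Hermite cubic matches f exactly, so H(0.5) = 0.125.
z, c = hermite_coeffs([0, 1], [0, 1], [0, 3])
print(hermite_eval(z, c, 0.5))  # -> 0.125
```

Since four conditions determine a cubic uniquely, any cubic f is reproduced exactly, which makes a convenient correctness check.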

164 Example: Use the extended Newton divided difference method to obtain a cubic polynomial that takes these values: Example (continuation): Find a quartic polynomial in the preceding example and, in addition, satisfies. that takes values given 164

165 3.5. Spline Interpolation

Runge's phenomenon: The section begins with the so-called Runge's phenomenon.

Recall: Weierstrass Approximation Theorem. Suppose f is continuous on [a, b]. Then, for each eps > 0, there exists a polynomial P(x) such that |f(x) - P(x)| < eps for all x in [a, b].

Interpolation at equidistant points is a natural and well-known approach to construct approximating polynomials. Runge's phenomenon demonstrates, however, that interpolation can easily result in divergent approximations. Consider the function

f(x) = 1 / (1 + 25 x^2),   x in [-1, 1].

Runge found that if this function is interpolated at equidistant points, the resulting interpolation p_n oscillates toward the ends of the interval, i.e. close to -1 and 1. It can even be proven that the interpolation error tends toward infinity when the degree of the polynomial increases:

lim_{n -> infinity} max_{-1<=x<=1} |f(x) - p_n(x)| = infinity.

166 [Figure: Runge's function f(x) with equidistant-node interpolants, e.g. P7]

Mitigations to the problem:
- Change of interpolation points: e.g., Chebyshev nodes
- Use of piecewise polynomials: e.g., spline interpolation
- Constrained minimization: e.g., Hermite-like higher-order polynomial interpolation, whose first (or second) derivative has minimal norm.

167 Spline Interpolation

Definition: A partition of the interval [a, b] is an ordered sequence {t_i} such that

a = t_0 < t_1 < ... < t_n = b.

The numbers t_i are known as knots or nodes.

Definition: A function S is a spline of degree k on [a, b] if
1. The domain of S is [a, b].
2. There exists a partition {t_i} of [a, b] such that on each subinterval [t_{i-1}, t_i], S is a polynomial of degree <= k.
3. S, S', ..., S^{(k-1)} are continuous on (a, b).

Linear Splines: A linear spline is a continuous function which is linear on each subinterval. Thus it is defined entirely by its values at the nodes. That is, given the data (t_i, y_i), the linear polynomial on each subinterval [t_i, t_{i+1}] is defined as

L_i(x) = y_i + ( (y_{i+1} - y_i) / (t_{i+1} - t_i) ) (x - t_i).

Example: Find the linear spline for

168 Solution: The linear spline can be easily computed piecewise from the formula above. Its graph is as shown below.

[Figure: the linear spline L(x)]

169 First-Degree Spline Accuracy

To find the error bound, we will consider the error on a single subinterval [t_i, t_{i+1}] of the partition, and apply a little calculus. Let p be the linear polynomial interpolating f at the endpoints of [t_i, t_{i+1}]. Then, by the Polynomial Interpolation Error Theorem,

f(x) - p(x) = (1/2) f''(xi) (x - t_i)(x - t_{i+1}),  for some xi in (t_i, t_{i+1}).

Thus

|f(x) - p(x)| <= (1/2) max |f''| (h_i^2 / 4) = (h_i^2 / 8) max |f''|,

where h_i = t_{i+1} - t_i.

170 Quadratic (Second Degree) Splines: A quadratic spline is a piecewise quadratic function Q, of which the derivative Q' is continuous on (a, b). Typically, a quadratic spline is defined by its piecewise polynomials,

Q_i(x) on [t_i, t_{i+1}],   i = 0, 1, ..., n-1.

Thus there are 3n parameters to define Q. For each of the n subintervals, interpolating the data (t_i, y_i) gives two equations:

Q_i(t_i) = y_i,   Q_i(t_{i+1}) = y_{i+1}.

This is 2n equations. The continuity condition on Q' gives a single equation for each of the n-1 internal nodes:

Q'_{i-1}(t_i) = Q'_i(t_i).

This totals 3n - 1 equations, but 3n unknowns. Thus some additional user-chosen condition is required, for example, Q'(t_0) = 0.

171 Computing quadratic splines: Let z_i = Q'(t_i) and suppose that the additional condition is given by specifying z_0. Because Q_i' is linear, we have, for x in [t_i, t_{i+1}],

Q_i'(x) = z_i + ( (z_{i+1} - z_i) / (t_{i+1} - t_i) ) (x - t_i).

By integrating it and using Q_i(t_i) = y_i,

Q_i(x) = y_i + z_i (x - t_i) + ( (z_{i+1} - z_i) / (2 (t_{i+1} - t_i)) ) (x - t_i)^2.

In order to determine z_{i+1}, we use the above at x = t_{i+1}: Q_i(t_{i+1}) = y_{i+1}, which implies

z_{i+1} = -z_i + 2 (y_{i+1} - y_i) / (t_{i+1} - t_i).

Thus we have the whole spline determined recursively from z_0.

172 Example: Find the quadratic spline for Solution: z[0]=0 z[1]=17 z[2]= z[3]= z[4]= The graph of are computed as is superposed over the graph of the linear spline Q(x) L(x) 0 1 x 172

173 Cubic Splines: From the definition, a cubic spline S is a continuous piecewise cubic polynomial whose first and second derivatives are again continuous. I guess that at this moment, your brain may have a clear blueprint for how to construct such a cubic spline. Anyway, let us first consider the constructability of cubic splines. On each subinterval [t_i, t_{i+1}], we have to determine 4 coefficients of a cubic polynomial; the number of unknowns is 4n. On the other hand, the number of equations we can get is

interpolation and continuity of S:  2n
continuity of S':  n - 1
continuity of S'':  n - 1

Thus there are 4n - 2 equations; two degrees of freedom remain, and there have been various ways of choosing them to advantage.

174 Construction of Cubic Splines: Now we derive the equation for S on the interval [t_i, t_{i+1}]. Similarly as for quadratic splines, we define z_i = S''(t_i). Then, S_i'' is a linear function satisfying S_i''(t_i) = z_i and S_i''(t_{i+1}) = z_{i+1}, and therefore is given by the straight line between z_i and z_{i+1}:

S_i''(x) = (z_i / h_i)(t_{i+1} - x) + (z_{i+1} / h_i)(x - t_i),    (1)

where h_i = t_{i+1} - t_i. If (1) is integrated twice, the result reads

S_i(x) = (z_i / (6 h_i))(t_{i+1} - x)^3 + (z_{i+1} / (6 h_i))(x - t_i)^3 + C (x - t_i) + D (t_{i+1} - x).    (2)

The interpolation conditions S_i(t_i) = y_i and S_i(t_{i+1}) = y_{i+1} can now be imposed on (2) in order to determine C and D. That is,

(z_i / 6) h_i^2 + D h_i = y_i,   (z_{i+1} / 6) h_i^2 + C h_i = y_{i+1}.

175 Thus the result is

S_i(x) = (z_i / (6 h_i))(t_{i+1} - x)^3 + (z_{i+1} / (6 h_i))(x - t_i)^3
         + (y_{i+1}/h_i - z_{i+1} h_i / 6)(x - t_i) + (y_i/h_i - z_i h_i / 6)(t_{i+1} - x).    (3)

Equation (3) is easily verified; simply let x = t_i and x = t_{i+1} to see that the interpolation conditions are fulfilled. Once the values of z_i have been determined, Equation (3) can be used to evaluate S(x) for x in [t_i, t_{i+1}].

The values of z_i for 0 < i < n can be determined from the continuity conditions of S', by differentiating (3):

S_i'(x) = -(z_i / (2 h_i))(t_{i+1} - x)^2 + (z_{i+1} / (2 h_i))(x - t_i)^2 + (y_{i+1} - y_i)/h_i - (z_{i+1} - z_i) h_i / 6.    (4)

Then substitution of x = t_i and simplification lead to

S_i'(t_i) = -(h_i / 3) z_i - (h_i / 6) z_{i+1} + b_i,   where b_i = (y_{i+1} - y_i)/h_i.

Analogously, using (3) to obtain S_{i-1}'(t_i), we have

S_{i-1}'(t_i) = (h_{i-1} / 6) z_{i-1} + (h_{i-1} / 3) z_i + b_{i-1}.

When the right sides of the last two equations are set equal to each other, the result can be written as

176 (5) for. Note: There are equations in (5), while we must determine unknowns,. There are two popular approaches for the choice of the two additional conditions. Natural Cubic Spline Clamped Cubic Spline Natural Cubic Splines: Let. The system of linear equations in (5) can be written as where 176

177 and Since the matrix is strictly diagonally dominant, the system can be solved by Gaussian elimination without pivoting. Clamped Cubic Splines: Let and be prescribed. The two extra conditions read, which are equivalently expressed as and. Utilizing Equation (4), the conditions read Equation (5) with the above two quations clearly make conditions for unknowns,. It is a good exercise to compose an algebraic system for the computation of clamped cubic splines. 177
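Because the matrix is strictly diagonally dominant, the tridiagonal system (5) with z_0 = z_n = 0 can be solved in O(n) by forward elimination and back substitution without pivoting. A Python sketch (the notes implement this in Matlab; function name is ours):

```python
def natural_cubic_spline(t, y):
    """Solve for z_i = S''(t_i) of the natural cubic spline, z_0 = z_n = 0.

    Interior equations: h_{i-1} z_{i-1} + 2(h_{i-1}+h_i) z_i + h_i z_{i+1}
                        = 6 (b_i - b_{i-1}),  b_i = (y_{i+1}-y_i)/h_i.
    Returns the list [z_0, ..., z_n].
    """
    n = len(t) - 1
    h = [t[i + 1] - t[i] for i in range(n)]
    b = [(y[i + 1] - y[i]) / h[i] for i in range(n)]
    u = [0.0] * n                      # eliminated diagonal
    v = [0.0] * n                      # eliminated right-hand side
    u[1] = 2 * (h[0] + h[1])
    v[1] = 6 * (b[1] - b[0])
    for i in range(2, n):              # forward elimination
        u[i] = 2 * (h[i - 1] + h[i]) - h[i - 1] ** 2 / u[i - 1]
        v[i] = 6 * (b[i] - b[i - 1]) - h[i - 1] * v[i - 1] / u[i - 1]
    z = [0.0] * (n + 1)                # natural conditions z_0 = z_n = 0
    for i in range(n - 1, 0, -1):      # back substitution
        z[i] = (v[i] - h[i] * z[i + 1]) / u[i]
    return z

# Linear data gives all second divided differences zero, hence z = 0
print(natural_cubic_spline([0, 1, 2, 3], [1, 3, 5, 7]))
```

With the z_i in hand, Equation (3) evaluates S(x) on each subinterval.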

178 Example: Find the natural cubic spline for Solution: The graph of is superposed over graphs of the linear spline and the quadratic spline. S(x) Q(x) L(x) 0 1 x 178

179 Example: Find a natural cubic spline that interpolates the data = S(x) Q(x) L(x) x 179

180 Optimality Theorem for Natural Cubic Splines

We now present a theorem to the effect that the natural cubic spline produces the smoothest interpolating function. The word smooth is given a technical meaning in the theorem.

Theorem: Let f'' be continuous in [a, b] and a = t_0 < t_1 < ... < t_n = b. If S is the natural cubic spline interpolating f at the nodes t_i for 0 <= i <= n, then

int_a^b [S''(x)]^2 dx <= int_a^b [f''(x)]^2 dx.

181 3.6. Parametric Curves

Consider data of the form (x_i, y_i), of which the point plot is given below. Then none of the interpolation methods we have learned so far can be used to generate an interpolating curve for this data, because the curve cannot be expressed as a function of one coordinate variable with respect to the other. In this section we will see how to represent general curves by using a parameter t to express both the x- and y-coordinate variables.

182 Example: Construct a pair of interpolating polynomials, as a function of, for the data: Solution

183 Applications in Computer Graphics: Required: Rapid generation of smooth curves that can be quickly and easily modified. Preferred: Change of one portion of a curve should have little or no effect on other portions of the curve. piecewise cubic Hermite polynomial. Note: For data, the piecewise cubic Hermite polynomial can be generated independently in each portion. (Why?) 183

184 Piecewise cubic Hermite polynomial for General Curve Fitting: Let us focus on the first portion of the piecewise cubic Hermite polynomial interpolating between and For the first portion, the given data are Only six conditions are specified, while the cubic polynomials and each have four parameters, for a total of eight. This provides flexibility in choosing the pair of cubic polynomials to specify the conditions. Notice that the natural form for determining and requires that we specify and. Since the slopes at the endpoints can be expressed using the so-called guidepoints which are to be chosen from the desired tangent line: guidepoint for Thus, guidepoint for and therefore we may specify 184

185 The cubic Hermite polynomial (x(t), y(t)) on [0, 1]: The unique cubic Hermite polynomial x(t) satisfying

x(0) = x_0,  x(1) = x_1,  x'(0) = alpha_0,  x'(1) = alpha_1

can be computed as

x(t) = (2t^3 - 3t^2 + 1) x_0 + (t^3 - 2t^2 + t) alpha_0 + (-2t^3 + 3t^2) x_1 + (t^3 - t^2) alpha_1.

Similarly, the cubic Hermite polynomial y(t) satisfying y(0) = y_0, y(1) = y_1, y'(0) = beta_0, y'(1) = beta_1 is

y(t) = (2t^3 - 3t^2 + 1) y_0 + (t^3 - 2t^2 + t) beta_0 + (-2t^3 + 3t^2) y_1 + (t^3 - t^2) beta_1.
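The two coordinate polynomials share the same four basis functions, so one Python helper suffices (names are ours); call it once for x(t) and once for y(t):

```python
def hermite_curve(p0, p1, m0, m1):
    """One coordinate of a cubic Hermite segment on [0, 1]:
    endpoints p0, p1 and end derivatives m0, m1."""
    def h(t):
        return ((2 * t**3 - 3 * t**2 + 1) * p0 + (t**3 - 2 * t**2 + t) * m0
                + (-2 * t**3 + 3 * t**2) * p1 + (t**3 - t**2) * m1)
    return h

# x-coordinate with x(0)=1, x(1)=4, x'(0)=2, x'(1)=-1
x = hermite_curve(1.0, 4.0, 2.0, -1.0)
print(x(0.0), x(1.0))  # -> 1.0 4.0
```

Because each segment depends only on its own endpoint data, changing one guidepoint affects only that portion of the piecewise curve, which is exactly the locality property wanted in computer graphics.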

186 Example: Determine the parametric curve when Solution: Let and. Then, : a guidepoint of : a guidepoint of The cubic Hermite polynomial on that satisfies is t The cubic Hermite polynomial on that satisfies (5.1) (5.2) # On the other hand (5.3) 186

187 (5.4) t 187


189 Homework 3. Curve Fitting: Interpolation and Approximation #1. For the given functions, let. Construct interpolation polynomials of degree at most one and at most two to approximate absolute error. a. b., and find the #2. Use the Polynomial Interpolation Error Theorem to find an error bound for the approximations in Problem #1. #3. The polynomial interpolates the first four points in the table: By adding one additional term to, find a polynomial that interpolates the whole table. #4. Determine the Newton interpolating polynomial for the data: #5. Neville's method is used to approximate, giving the following table. Determine. 189

190 #6. Use the extended Newton divided difference method to obtain a quintic polynomial that takes these values: #7. Compose an algebraic system, of the form, explicitly for the computation of clamped cubic splines. #8. Find a natural cubic spline for the data. #9. Construct the piecewise cubic Hermite interpolating polynomial for 190

191 #10. Let be the unit circle of radius 1:. Find a piecewise cubic parametric curve that interpolates the circle at Hint: For the first portion, you may set = = Now, you should find parametric curves for the other two portions. 191


193 4. Numerical Differentiation and Integration In This Chapter: Topics Numerical Differentiation Applications/Properties Three-point rules Five-point rules Richardson extrapolation Combination of low-order differences Numerical Integration Trapezoid rule Newton-Cotes formulas Simpson's rule Simpson's Three-Eights rule Romberg integration Gaussian Quadrature Legendre polynomials Method of undetermined coefficients or orthogonal polynomials 193

194 4.1. Numerical Differentiation

Recall the definition of the derivative:

f'(x_0) = lim_{h -> 0} ( f(x_0 + h) - f(x_0) ) / h.

Note: This formula gives an obvious way to generate an approximation of f'(x_0):

f'(x_0) ~ ( f(x_0 + h) - f(x_0) ) / h.

Let x_1 = x_0 + h and let p_1 be the first Lagrange polynomial interpolating f at x_0, x_1. Then

f(x) = p_1(x) + ( f''(xi(x)) / 2 ) (x - x_0)(x - x_1).

Differentiating gives

f'(x) = p_1'(x) + d/dx [ ( f''(xi(x)) / 2 ) (x - x_0)(x - x_1) ].

Thus

f'(x_0) = ( f(x_1) - f(x_0) ) / h - (h/2) f''(xi).

195 Definition: For, Example: Use the forward-difference formula to approximate using. at Solution: = = (2.1) = (2.2) The error becomes half? (2.3) 195

196 In general: Let x_0, x_1, ..., x_n be distinct points in some interval I and f in C^{n+1}(I). Then

f(x) = sum_{k=0}^{n} f(x_k) L_k(x) + ( f^{(n+1)}(xi(x)) / (n+1)! ) prod_{k=0}^{n} (x - x_k).

Its derivative reads, at a node x_j,

f'(x_j) = sum_{k=0}^{n} f(x_k) L_k'(x_j) + ( f^{(n+1)}(xi(x_j)) / (n+1)! ) prod_{k != j} (x_j - x_k).

Hence,

Definition: An (n+1)-point difference formula to approximate f'(x_j) is

f'(x_j) ~ sum_{k=0}^{n} f(x_k) L_k'(x_j).

197 Three-Point Formulas (n = 2): For convenience, let h > 0.

Recall: the general (n+1)-point formula on the previous page. Thus, the three-point endpoint formulas and the three-point midpoint formula read

f'(x_0) = ( -3 f(x_0) + 4 f(x_0 + h) - f(x_0 + 2h) ) / (2h) + (h^2 / 3) f'''(xi),
f'(x_0) = ( -3 f(x_0) + 4 f(x_0 - h) - f(x_0 - 2h) ) / (-2h) + (h^2 / 3) f'''(xi),
f'(x_0) = ( f(x_0 + h) - f(x_0 - h) ) / (2h) - (h^2 / 6) f'''(xi).
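The predicted orders, O(h) for the forward difference and O(h^2) for the three-point midpoint formula, are easy to observe numerically; a Python sketch (names are ours):

```python
import math

def forward_diff(f, x, h):
    """(f(x+h) - f(x))/h: first-order accurate."""
    return (f(x + h) - f(x)) / h

def midpoint_diff(f, x, h):
    """Three-point midpoint (f(x+h) - f(x-h))/(2h): second-order accurate."""
    return (f(x + h) - f(x - h)) / (2 * h)

# f(x) = e^x, f'(0) = 1: halving h roughly halves the forward error
# but quarters the midpoint error.
for h in (0.1, 0.05):
    print(h, forward_diff(math.exp, 0, h) - 1, midpoint_diff(math.exp, 0, h) - 1)
```

Observing the error ratio under halving of h (about 2 for the forward formula, about 4 for the midpoint formula) is a standard sanity check on the implementation.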

198 Summary: Numerical Differentiation:
1. Three-point formulas
2. Five-point formulas

Second-Derivative Midpoint Formula:

f''(x_0) = ( f(x_0 - h) - 2 f(x_0) + f(x_0 + h) ) / h^2 - (h^2 / 12) f^{(4)}(xi).

We may derive the formula by using Taylor expansion. The Taylor expansion can be used for the derivation of the first-derivative difference formulas as well.

Derivation: Expanding

f(x_0 +/- h) = f(x_0) +/- h f'(x_0) + (h^2/2) f''(x_0) +/- (h^3/6) f'''(x_0) + (h^4/24) f^{(4)}(xi_{+/-}),

adding the two expansions cancels the odd-order terms, and solving for f''(x_0) gives the formula.

199 Example: Use the second-derivative midpoint formula to approximate for using,0.05. Solution (5.1) (5.2) (5.3) 14 (5.4) 199


201 4.2. Richardson's Extrapolation

Richardson's extrapolation is used to generate high-accuracy difference results while using low-order formulas. Let us exemplify with the three-point midpoint formula:

f'(x) = ( f(x + h) - f(x - h) ) / (2h) - (h^2 / 6) f'''(x) - (h^4 / 120) f^{(5)}(x) - ... .

Note that in this infinite series, the error series is evaluated at the same point, x.

Derivation using the Taylor series expansion:

202 The last equation can be written as

f'(x) = phi(h) + a_2 h^2 + a_4 h^4 + a_6 h^6 + ... ,

where the a_{2k} are unknown constants and phi(h) is an approximation of f'(x) using the parameter h. Write out the above equation with h replaced by h/2:

f'(x) = phi(h/2) + a_2 h^2/4 + a_4 h^4/16 + ... .

The leading term in the error series, a_2 h^2, can be eliminated as follows:

f'(x) = ( 4 phi(h/2) - phi(h) ) / 3 - (a_4 / 4) h^4 - ... .

The above equation embodies the first step in Richardson extrapolation. It shows that a simple combination of two second-order approximations, phi(h) and phi(h/2), furnishes an estimate of f'(x) with accuracy O(h^4). We rewrite the above equation with h replaced by h/2 as well. Then, similarly, subtracting the first from 16 times the second produces the new O(h^6) formula:

203 The above idea can be applied recursively. The complete algorithm, allowing for M steps of Richardson extrapolation, is given next:

1. Select a convenient h and compute D(n, 0) = phi(h / 2^n), n = 0, 1, ..., M.
2. Compute additional quantities by the formula

D(n, m) = ( 4^m D(n, m-1) - D(n-1, m-1) ) / ( 4^m - 1 ),   1 <= m <= n <= M.

Table 1: Richardson Extrapolation

Note: One can prove D(n, m) = f'(x) + O( (h/2^n)^{2(m+1)} ). The second step in the algorithm can be rewritten for a column-wise computation:

D(n, m) = D(n, m-1) + ( D(n, m-1) - D(n-1, m-1) ) / ( 4^m - 1 ).
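A Python sketch of the full table for phi(h) = the central difference (using the column-wise formula above; function name is ours):

```python
import math

def richardson(f, x, h, steps):
    """Richardson table on phi(h) = (f(x+h) - f(x-h)) / (2h).

    Row n uses h/2^n; column m removes the h^{2m} error term via
    D(n,m) = D(n,m-1) + (D(n,m-1) - D(n-1,m-1)) / (4^m - 1).
    Returns the lower-triangular table as a list of rows.
    """
    d = [[(f(x + h) - f(x - h)) / (2 * h)]]
    for n in range(1, steps + 1):
        h /= 2
        row = [(f(x + h) - f(x - h)) / (2 * h)]
        for m in range(1, n + 1):
            row.append(row[m - 1] + (row[m - 1] - d[n - 1][m - 1]) / (4**m - 1))
        d.append(row)
    return d

# f(x) = e^x at x = 0: the diagonal converges to f'(0) = 1 very rapidly
table = richardson(math.exp, 0.0, 0.4, 3)
print(table[0][0], table[3][3])
```

With h = 0.4 the raw difference is only about 3 digits accurate, while the third diagonal entry is accurate to roughly machine-level, illustrating how cheap low-order evaluations combine into a high-order result.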

204 Example: Let using. Use the Richardson extrapolation to estimate Solution (2.2) (2.3) (2.5) Error: = = (2.6) The Ratio: = The error: = = = = = = =

205 Using the Taylor series expansion, we can arrive at an analogous error series for the second-derivative midpoint formula, so the same extrapolation applies.

Example: Produce a Richardson extrapolation table for the approximation of f''(x), for the problem considered in the previous example.

Solution: (3.1) (3.2) (3.3) (3.4) (3.5) Error: The Ratio: = = = (3.6)

206 The error: = = = = = = = 206

207 4.3. Numerical Integration

Numerical integration can be performed by (1) approximating the function by an nth-degree polynomial and (2) integrating the polynomial over the prescribed interval. What a simple task it is!

Let x_0, x_1, ..., x_n be distinct points (nodes) in [a, b]. Then the Lagrange interpolating polynomial

p(x) = sum_{i=0}^{n} f(x_i) L_i(x)

interpolates the function f. Then, as just mentioned, we simply approximate

int_a^b f(x) dx ~ int_a^b p(x) dx = sum_{i=0}^{n} f(x_i) int_a^b L_i(x) dx.

In this way, we obtain a formula that can be used on any f. It reads as follows:

int_a^b f(x) dx ~ sum_{i=0}^{n} A_i f(x_i),   where   A_i = int_a^b L_i(x) dx.    (1)

The formula of the form in (1) is called a Newton-Cotes formula if the nodes are equally spaced.

208 The Trapezoid Rule: The simplest case results if n = 1 and the nodes are x_0 = a and x_1 = b. In this case,

L_0(x) = (b - x)/(b - a),   L_1(x) = (x - a)/(b - a),

so that

A_0 = int_a^b L_0(x) dx = (b - a)/2   and   A_1 = (b - a)/2.

Since

f(x) = f(a) L_0(x) + f(b) L_1(x) + ( f''(xi(x)) / 2 ) (x - a)(x - b),

integration over [a, b] gives

int_a^b f(x) dx = ( (b - a)/2 ) [ f(a) + f(b) ] - ( f''(xi) / 12 ) (b - a)^3.

(Here we have used the Mean Value Theorem for Integrals because (x - a)(x - b) does not change the sign over [a, b].) The corresponding quadrature formula is

int_a^b f(x) dx ~ ( (b - a)/2 ) [ f(a) + f(b) ].

This is known as the trapezoid rule.

Graphical interpretation: [Maple animation: approximation of the integral of f(x) = x^3 + 2 + sin(2 pi x) by the trapezoid rule on a uniform partition; the display reports the approximate value of the integral and the number of partitions used.]

Composite Trapezoid Rule: If the interval [a, b] is partitioned like this:

    a = x_0 < x_1 < ... < x_n = b,

then the trapezoid rule can be applied to each subinterval. Here the nodes are not necessarily uniformly spaced. Thus, we obtain the composite trapezoid rule:

    Int_a^b f(x) dx  ~  Sum_{i=1}^{n} (x_i - x_{i-1})/2 [f(x_{i-1}) + f(x_i)].

With uniform spacing h = (b - a)/n and x_i = a + i h, the composite trapezoid rule takes the form

    Int_a^b f(x) dx  ~  h [ (1/2) f(a) + f(x_1) + ... + f(x_{n-1}) + (1/2) f(b) ],

for which the composite error becomes

    -(b - a) h^2 f''(xi) / 12,    for some xi in (a, b).
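The uniform composite rule and its O(h^2) error behavior can be sketched as follows; Python is used for illustration, and the test integral of sin over [0, pi] is our choice.

```python
import math

def composite_trapezoid(f, a, b, n):
    # Uniform spacing h = (b - a)/n:
    # h * [ f(a)/2 + f(x_1) + ... + f(x_{n-1}) + f(b)/2 ]
    h = (b - a) / n
    s = (f(a) + f(b)) / 2
    for i in range(1, n):
        s += f(a + i * h)
    return h * s

# Illustrative check on the integral of sin over [0, pi], whose exact value is 2.
e1 = abs(composite_trapezoid(math.sin, 0.0, math.pi, 10) - 2.0)
e2 = abs(composite_trapezoid(math.sin, 0.0, math.pi, 20) - 2.0)
print(e1 / e2)  # about 4: halving h divides the O(h^2) error by about 4
```

The observed error ratio near 4 is exactly what the composite error term -(b - a) h^2 f''(xi) / 12 predicts when h is halved.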

Example: [Maple output]

Simpson's Rule: Simpson's rule results from integrating over [a, b] the second Lagrange polynomial with equally spaced nodes:

    x_0 = a,    x_1 = (a + b)/2,    x_2 = b,    h = (b - a)/2.

The elementary Simpson's rule reads

    Int_a^b f(x) dx  ~  (h/3) [f(x_0) + 4 f(x_1) + f(x_2)],

which is reduced to

    Int_a^b f(x) dx  ~  (b - a)/6 [f(a) + 4 f((a + b)/2) + f(b)].
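The elementary Simpson's rule can likewise be sketched in a few lines; the example integrands are illustrative assumptions.

```python
import math

def simpson(f, a, b):
    # Elementary Simpson's rule: (b - a)/6 * [f(a) + 4 f((a+b)/2) + f(b)]
    return (b - a) / 6 * (f(a) + 4 * f((a + b) / 2) + f(b))

# The rule has degree of precision 3, so it is exact for cubics:
print(simpson(lambda x: x**3, 0.0, 1.0))  # 0.25
# For f = exp on [0, 1] it is already close to e - 1 = 1.71828...:
print(simpson(math.exp, 0.0, 1.0))        # about 1.71886
```

Exactness for cubics, despite the rule being built from a quadratic interpolant, is the well-known "free" extra degree of precision of Simpson's rule.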

Graphical interpretation: [figure]


Nonlinear Equations. Chapter The Bisection Method

Nonlinear Equations. Chapter The Bisection Method Chapter 6 Nonlinear Equations Given a nonlinear function f(), a value r such that f(r) = 0, is called a root or a zero of f() For eample, for f() = e 016064, Fig?? gives the set of points satisfying y

More information

STATE COUNCIL OF EDUCATIONAL RESEARCH AND TRAINING TNCF DRAFT SYLLABUS.

STATE COUNCIL OF EDUCATIONAL RESEARCH AND TRAINING TNCF DRAFT SYLLABUS. STATE COUNCIL OF EDUCATIONAL RESEARCH AND TRAINING TNCF 2017 - DRAFT SYLLABUS Subject :Mathematics Class : XI TOPIC CONTENT Unit 1 : Real Numbers - Revision : Rational, Irrational Numbers, Basic Algebra

More information

Chapter 2 Notes, Linear Algebra 5e Lay

Chapter 2 Notes, Linear Algebra 5e Lay Contents.1 Operations with Matrices..................................1.1 Addition and Subtraction.............................1. Multiplication by a scalar............................ 3.1.3 Multiplication

More information

x x2 2 + x3 3 x4 3. Use the divided-difference method to find a polynomial of least degree that fits the values shown: (b)

x x2 2 + x3 3 x4 3. Use the divided-difference method to find a polynomial of least degree that fits the values shown: (b) Numerical Methods - PROBLEMS. The Taylor series, about the origin, for log( + x) is x x2 2 + x3 3 x4 4 + Find an upper bound on the magnitude of the truncation error on the interval x.5 when log( + x)

More information

CHAPTER 2: CONVEX SETS AND CONCAVE FUNCTIONS. W. Erwin Diewert January 31, 2008.

CHAPTER 2: CONVEX SETS AND CONCAVE FUNCTIONS. W. Erwin Diewert January 31, 2008. 1 ECONOMICS 594: LECTURE NOTES CHAPTER 2: CONVEX SETS AND CONCAVE FUNCTIONS W. Erwin Diewert January 31, 2008. 1. Introduction Many economic problems have the following structure: (i) a linear function

More information

Introduction to Real Analysis Alternative Chapter 1

Introduction to Real Analysis Alternative Chapter 1 Christopher Heil Introduction to Real Analysis Alternative Chapter 1 A Primer on Norms and Banach Spaces Last Updated: March 10, 2018 c 2018 by Christopher Heil Chapter 1 A Primer on Norms and Banach Spaces

More information