Root Finding Convergence Analysis
Justin Ross & Matthew Kwitowski

November 5, 2012

There are many different ways to calculate the root of a function. Some methods are direct and can be done by simply solving for x, while other methods require multiple steps and the results contain some error. In this paper, we will focus on four methods that require multiple steps, and we will compare how effective each method is.
Contents

1 Introduction
2 Root Finding Methods
  2.1 The Bisection Method
  2.2 Fixed-Point Iteration
  2.3 Newton's Method
  2.4 Halley's Method
3 Results
4 Conclusion
5 Bibliography
6 Appendix
  6.1 Bisection Method Matlab Code
  6.2 Fixed Point Iteration Matlab Code
  6.3 Newton's Method Matlab Code
  6.4 Halley's Method Matlab Code
  6.5 Maple Code
1 Introduction

Our goal is to examine the convergence of various root-finding methods. Specifically, we will examine how x + ln(x) = 0, x > 0, converges using the Bisection Method, Fixed-Point Iteration, Newton's Method, and Halley's Method. We need to ensure eight digits of accuracy, so that

|x_c - r| < 0.5 x 10^(-8),

where r is the real root and x_c is the estimated root. We will compare the rate of convergence for each method, as well as the time and number of steps required to compute the approximate root. We will also discuss when it is appropriate to use each of the four methods.

In order to know whether we have reached our desired accuracy, we must first determine the real root of the function. Using Maple 11's built-in function fsolve, we find the approximate root of x + ln(x) = 0 to be 0.56714329. We will assume this to be the most accurate root and compare our findings to it.

2 Root Finding Methods

2.1 The Bisection Method

The Bisection Method is a slow, but certain, way to find a root of a function if the root exists. We can find the general area of the root by bracketing it on an interval [a, b] ⊂ R such that f(a)f(b) < 0. In Sauer's Numerical Analysis, Theorem 1.2 states that if f is a continuous function on [a, b] satisfying f(a)f(b) < 0, then f has a root between a and b; that is, there exists a number r ∈ R such that a < r < b and f(r) = 0.

After each step of the bisection method, we refine our guess to be the midpoint of the current interval [a, b] around the root. We check the sign of the function evaluated at this midpoint and then refine our bracket around the root further. This gives us a new interval that still contains r. We continue this process until we are satisfied with our accuracy or we have reached a maximum number of iterations. Sauer also states that after n steps of the Bisection Method the error is bounded by

|x_c - r| < (b - a)/2^(n+1).    (1)

We can use equation (1) to determine the number of iterations required for a given accuracy and initial guesses a, b. Letting ε denote the desired tolerance on |x_c - r|, we see that

|a - b|/2^(n+1) < ε   =>   2^n > |a - b|/(2ε)   =>   n > log(|a - b|/ε)/log(2) - 1.    (2)
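The bisection loop and the iteration-count estimate of equation (2) can be sketched compactly. The authors' own implementation (in the appendix) is MATLAB; the Python below is only an illustrative translation, and the function and variable names are ours.

```python
import math

def bisection(f, a, b, tol):
    """Halve the bracket [a, b] until the midpoint is within tol of the root."""
    steps = 0
    while (b - a) / 2 > tol:
        m = (a + b) / 2
        if f(a) * f(m) < 0:
            b = m          # root lies in [a, m]
        else:
            a = m          # root lies in [m, b]
        steps += 1
    return (a + b) / 2, steps

f = lambda x: x + math.log(x)
tol = 0.5e-8
root, steps = bisection(f, 0.1, 1.1, tol)

# Predicted iteration count from the bound (b - a)/2^(n+1) < eps, i.e. equation (2)
predicted = math.ceil(math.log((1.1 - 0.1) / tol) / math.log(2) - 1)
```

With the paper's bracket [0.1, 1.1] and tolerance, both the loop and the bound give 27 steps.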
From equation (2), we find that the number of iterations, n, depends on both the initial interval length and the desired accuracy.

2.2 Fixed-Point Iteration

The fixed-point method is a linearly convergent method. A number r is a fixed point of a function g if g(r) = r. Using the properties of exponents and logarithms, we can rewrite the given problem as

x + ln(x) = 0   =>   ln(x) = -x   =>   x = e^(-x),

and define our fixed-point equation as

g(x) = e^(-x).    (3)

The fixed-point iterative method is given by

x_0 = initial guess,
x_{i+1} = g(x_i) for i = 0, 1, 2, ...

If g is a continuous function and the iterates {x_i} converge to a number r, then r is a fixed point, since

g(r) = g(lim_{i→∞} x_i) = lim_{i→∞} g(x_i) = lim_{i→∞} x_{i+1} = r.

Sauer also states, in Theorem 1.6, that if S = |g'(r)| < 1, then fixed-point iteration converges linearly with rate S to the fixed point r for initial guesses sufficiently close to r. Here

S = |g'(r)| = |-e^(-r)| = e^(-r) ≈ 0.5671.    (4)

We will be able to test this against the resulting error ratio e_{i+1}/e_i, where e_i = |r - x_i|, in our convergence analysis.

Geometrically, the fixed-point iterative method forms a cobweb diagram. Figure 1 shows the cobweb diagram for g(x) = e^(-x) and the first few steps with an initial guess x_0 = 0.4. With this initial guess, the iterates spiral inward, getting closer to the fixed point after each step.
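A minimal Python sketch of the iteration (the paper's own code, in the appendix, is MATLAB; the names here are ours). It records every iterate so the observed error ratio can be compared with the theoretical rate S = e^(-r) ≈ 0.5671:

```python
import math

def fixed_point(g, x0, tol, maxiter=100):
    """Iterate x_{i+1} = g(x_i), recording every iterate, until
    successive guesses agree to within tol."""
    xs = [x0]
    for _ in range(maxiter):
        xs.append(g(xs[-1]))
        if abs(xs[-1] - xs[-2]) < tol:
            break
    return xs

g = lambda x: math.exp(-x)   # g(x) = e^(-x), equation (3)
xs = fixed_point(g, 0.4, 0.5e-8)
root = xs[-1]

# Error ratios e_{i+1}/e_i for some mid-run iterates; they should
# settle near S = e^(-r) = r ≈ 0.5671
ratios = [abs(xs[i + 1] - root) / abs(xs[i] - root) for i in range(10, 15)]
```

The errors alternate in sign (g'(r) is negative), which is exactly the inward spiral of the cobweb diagram.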
Figure 1: The fixed point is the intersection of g(x) and the diagonal line y = x.

2.3 Newton's Method

Newton's Method requires just one initial guess, and it is a specific form of the fixed-point method. The general method is

x_0 = initial guess,
x_{i+1} = x_i - f(x_i)/f'(x_i) for i = 0, 1, 2, ...

It is important to note that the first derivative is required to compute the root using Newton's Method. This can slow the overall computation if the derivative is difficult to compute; the time required to calculate the root is reduced when the derivative is known. Using Newton's Method we expect quadratic convergence when the initial guess is sufficiently close to the real root, and at the very least we expect faster convergence than the linearly convergent methods. Since f'(x) = (1 + x)/x, the formula for Newton's Method is

x_{i+1} = x_i - (x_i^2 + x_i ln(x_i))/(x_i + 1).
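The method above can be sketched in a few lines of Python (the paper's implementation is MATLAB, and its convergence table stops by comparing against the known root; this sketch stops on successive differences, so its step count can differ slightly from the 4 steps reported later).

```python
import math

def newton(f, fprime, x0, tol, maxiter=50):
    """Newton's method: x_{i+1} = x_i - f(x_i)/f'(x_i)."""
    x = x0
    for steps in range(1, maxiter + 1):
        x_new = x - f(x) / fprime(x)
        if abs(x_new - x) < tol:
            return x_new, steps
        x = x_new
    return x, maxiter

f = lambda x: x + math.log(x)
fprime = lambda x: (1 + x) / x          # f'(x) = (1 + x)/x
root, steps = newton(f, fprime, 0.4, 0.5e-8)
```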
2.4 Halley's Method

Halley's Method is a root-finding method that converges cubically. In order to use Halley's Method, the function must have a continuous second derivative. The general method is

x_0 = initial guess,
x_{i+1} = x_i - 2 f(x_i) f'(x_i) / (2[f'(x_i)]^2 - f(x_i) f''(x_i)) for i = 0, 1, 2, ...

As with Newton's Method and Fixed-Point Iteration, Halley's Method starts with an initial guess close to the root, substitutes it into the iteration to get the next guess, and repeats until the desired accuracy is reached. For the function f(x) = x + ln(x), the formula for Halley's Method is

x_{i+1} = x_i - 2(x_i + ln(x_i))(1 + 1/x_i) / (2(1 + 1/x_i)^2 + (x_i + ln(x_i))/x_i^2).
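The general update above translates directly into a short sketch (again in Python rather than the paper's MATLAB; names are ours, and the stopping rule is successive differences rather than the error against the known root):

```python
import math

def halley(f, fp, fpp, x0, tol, maxiter=50):
    """Halley's method: x - 2 f f' / (2 (f')^2 - f f'')."""
    x = x0
    for steps in range(1, maxiter + 1):
        fx, fpx, fppx = f(x), fp(x), fpp(x)
        x_new = x - 2 * fx * fpx / (2 * fpx**2 - fx * fppx)
        if abs(x_new - x) < tol:
            return x_new, steps
        x = x_new
    return x, maxiter

f   = lambda x: x + math.log(x)
fp  = lambda x: 1 + 1 / x        # f'(x)  = 1 + 1/x
fpp = lambda x: -1 / x**2        # f''(x) = -1/x^2
root, steps = halley(f, fp, fpp, 0.4, 0.5e-8)
```

Note that since f''(x) is negative here, the term -f f'' adds to the denominator, matching the "+" in the specialized formula above.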
3 Results

From the Bisection Method and equation (2), we expect the number of steps, n, required to approximate the root to satisfy

n > log(1/(0.5 x 10^(-8)))/log(2) - 1 ≈ 26.6,

so 27 steps. We chose to bracket the root on [0.1, 1.1] as our initial guess. Using Matlab, we performed a convergence analysis and found that 27 steps were required to converge to the root, as expected. The midpoint is the current step's best approximation to the root; notice that the initial approximation (the first midpoint of [a, b]) is already accurate to one decimal place.

Figure 2: Convergence table for the Bisection Method. Columns: step, left guess, midpoint, right guess, error.
Using the fixed-point equation g(x) = e^(-x) and the initial guess 0.4, we performed a convergence analysis for our function f(x) = x + ln(x). We see that this takes 31 steps to converge, with error ratio S ≈ 0.5671. This confirms what we found in equation (4) and implies a linear rate of convergence. We also note that each root approximation x_i is a point on the cobweb diagram in Figure 1 as it spirals closer to the root.

Figure 3: Convergence table for the Fixed-Point Method with g(x) = e^(-x). Columns: i, x_i, g(x_i), e_i = |x_i - r|, e_i/e_{i-1}.

Another fixed-point iteration is given by g_2(x) = (x^2 - log(x))/(x + 1). We found that, using an initial guess of 0.4, g_2(x) converged in 64 steps as opposed to the 31 steps for g(x). The convergence analysis for g_2(x) showed an error ratio S ≈ 0.76, so g_2(x) converges more slowly than g(x), which is also confirmed by the number of steps taken to converge. Other rearrangements of f(x) = x + ln(x) into a fixed-point form may exist that give faster or slower convergence. In fact, Newton's Method is itself a specific form of fixed-point iteration.
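The gap between the two rearrangements can be checked directly. The sketch below counts iterations against the known root, mirroring the stopping rule of the MATLAB code in the appendix; the Python translation and names are ours, we read the paper's second rearrangement as g2(x) = (x^2 - log(x))/(x + 1), and the rates in the comment come from evaluating |g'(r)| for each form.

```python
import math

def steps_to_converge(g, x0, tol=0.5e-8, maxiter=200):
    """Count fixed-point iterations until the error against the
    known root drops below tol."""
    r = 0.5671432904097838   # root of x + ln(x) = 0 (the omega constant)
    x, n = x0, 0
    while abs(x - r) > tol and n < maxiter:
        x = g(x)
        n += 1
    return n

g1 = lambda x: math.exp(-x)                          # equation (3)
g2 = lambda x: (x * x - math.log(x)) / (x + 1)       # second rearrangement

n1 = steps_to_converge(g1, 0.4)
n2 = steps_to_converge(g2, 0.4)
# g2 converges more slowly: |g2'(r)| ≈ 0.76 versus |g1'(r)| ≈ 0.57,
# so n2 should be roughly double n1
```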
Newton's Method converges in far fewer steps than the Bisection Method and the Fixed-Point Method. We found that it converged to our root quadratically in just 4 steps, as seen in Figure 4.

Figure 4: Convergence table for Newton's Method.

While Newton's Method takes considerably fewer steps than the Fixed-Point Method and the Bisection Method, we found that, on average, our fixed-point method for g(x) took 0.02 seconds to converge, whereas Newton's Method took 0.03 seconds. We believe this is because the method takes time to calculate the derivative f'(x). So while Newton's Method converges in fewer steps, more time may be spent determining the derivative of the function. When the derivative was supplied in advance, Newton's Method converged in less time than the Bisection Method.

Halley's Method converges cubically, requiring only three steps, as seen in Figure 5. Since both the first and second derivatives of f(x) are required, this method converges in very few steps but takes the longest average time of the four methods. However, even when the derivatives are known in advance, Halley's Method still takes slightly longer than Newton's Method with a known derivative. This is most likely due to the added complexity of Halley's iteration: each step requires more additions and multiplications to compute the next x_{i+1}.

Figure 5: Convergence table for Halley's Method.
4 Conclusion

As we have seen, each method has its advantages and disadvantages. The Bisection Method requires prior knowledge of where the root is in order to bracket it on an interval [a, b]. While it took fewer steps to compute the root than Fixed-Point Iteration, our initial approximate root was also closer to the real root. With the Bisection Method we are also guaranteed convergence, as long as the root lies within our interval and the function is continuous on that interval.

Figure 6: Summary comparison of the number of steps and time required to compute the root of f(x) = x + ln(x) for each method.

The Fixed-Point Method can be difficult to apply if a suitable function g(x) is hard to find. We saw that different rearrangements g(x) of our given problem f(x) = x + ln(x) also converged at different rates. With Fixed-Point Iteration, not only is convergence not guaranteed, it might also not be very fast.

Newton's Method was shown to be the best method for finding the root of f(x) = x + ln(x). It took more time to converge than the Bisection Method and the Fixed-Point Method when the derivative needed to be calculated, but when the derivative was known it converged to the root in less time than every other method, and with only one more step than the cubically converging Halley's Method. While Halley's Method converged at the fastest rate and with the fewest steps, we found that the complexity of its iteration added time to the calculation of the root. Even when the first and second derivatives were known, it took more time to compute than Newton's Method, as we see in Figure 6.
5 Bibliography

References

[1] T. Sauer, Numerical Analysis, 2nd ed., Pearson Education, 2012.

[2] P. Acklam, A small paper on Halley's method, 23 December 2002, pjacklam/notes/halley/halley.pdf, accessed 26 October.
6 Appendix

6.1 Bisection Method Matlab Code

% Bisection Method
function [Z] = bisection(f, a, b, tol, maxiter)

% INPUTS
%   f       - inline function
%   a       - gives a negative result when evaluated in f
%   b       - gives a positive result when evaluated in f
%   tol     - the error we can be within (for 8-digit accuracy)
%   maxiter - maximum number of iterations
% OUTPUTS
%   Z - a matrix containing each guess and its error

% Real root of the supplied function f = x + ln(x) = 0
realroot = 0.56714329;

% Z holds the left guess, midpoint, right guess, and error at each step
Z = zeros(1, 4);

Z(1,1) = a;                        % initial left guess
Z(1,2) = (a + b)/2;                % first midpoint value to check
Z(1,3) = b;                        % initial right guess
Z(1,4) = abs(Z(1,2) - realroot);   % initial error

% If the starting points are the same sign, break the program
if sign(f(Z(1,1)))*sign(f(Z(1,3))) >= 0
    error('f(a) and f(b) are both the same sign')
end

% k is the counter
k = 1;

% End the loop if the midpoint guess is within the tolerance or we have
% reached the maximum number of iterations
while (Z(k,4) > tol && k < maxiter)

    % If the midpoint has the same sign as the left point, then that
    % midpoint becomes the new left point
    if sign(f(Z(k,2))) == sign(f(Z(k,1)))
        Z(k+1,1) = Z(k,2);   % set the left point to the midpoint
        Z(k+1,3) = Z(k,3);   % keep the old right point
    else
        Z(k+1,3) = Z(k,2);   % set the right point to the midpoint
        Z(k+1,1) = Z(k,1);   % keep the old left point
    end

    % New midpoint to check on the x-axis
    Z(k+1,2) = (Z(k+1,1) + Z(k+1,3))/2;

    % Error is the distance from the current midpoint guess to the real root
    Z(k+1,4) = abs(realroot - Z(k+1,2));

    % Error ratio
    % Z(k+1,5) = Z(k+1,4)/Z(k,4);

    k = k + 1;   % increase counter by 1
end

% Final midpoint guess
xc = Z(k,2);
6.2 Fixed Point Iteration Matlab Code

% Fixed Point Iteration Solver
function Z = fixedPoint(g, a, tol, maxiter)

% INPUTS
%   g       - inline function
%   a       - the initial guess
%   tol     - the tolerance we can be within (for 8-digit accuracy)
%   maxiter - maximum number of iterations allowed
% OUTPUTS
%   Z is a matrix where:
%   Z(k,1) is the x-axis guess
%   Z(k,2) is the resulting function value
%   Z(k,3) is the error (distance between the guess and the real root)
%   Z(k,4) is the error ratio

% Real root of the supplied function f = x + ln(x) = 0
% g = (x^2 - log(x))/(x + 1)
% h = 1/e^x
realroot = 0.56714329;

% Initialize Z
Z = zeros(1, 4);

% Start the counter
k = 1;

Z(1,1) = a;                        % the initial guess
Z(1,2) = g(Z(1,1));                % the evaluation at our initial guess
Z(1,3) = abs(Z(1,1) - realroot);   % initial error
% There is no initial error ratio

while (Z(k,3) > tol && Z(k,3) ~= 0)   % needs work when error = 0

    if isnan(Z(k,1)) || isnan(Z(k,2)) || isnan(Z(k,3))
        error('Fixed Point does not converge')
    end

    % Break if the maximum number of iterations is reached
    if k > maxiter
        break
    end

    k = k + 1;                         % increment the count
    Z(k,1) = g(Z(k-1,1));              % create the next guess
    Z(k,2) = g(Z(k,1));                % evaluate the newest guess
    Z(k,3) = abs(Z(k,1) - realroot);   % determine the error
    Z(k,4) = Z(k,3)/Z(k-1,3);          % determine the error ratio

    % If the error ratio is greater than 1 it will not converge
    if Z(k,3) > Z(k-1,3)*2 && k > 2
        error('Fixed Point does not converge')   % ends program
    end
end
6.3 Newton's Method Matlab Code

% Newton's Method
function Z = newtons(f, a, tol, maxiter, var)

% INPUTS
%   f       - symbolic (NOT inline) function
%   a       - the initial guess
%   tol     - the tolerance we can be within (for 8-digit accuracy)
%   maxiter - maximum number of iterations allowed
%   var     - the variable supplied in the function
% OUTPUTS
%   Z is a matrix consisting of:
%   Z(k,1) is the x-axis guess
%   Z(k,2) is the error
%   Z(k,3) is the error ratio

% Make the variable used a symbol
syms(var, 'real');
tic
fDiff = diff(f);         % take the first derivative of f
fDiff = inline(fDiff);   % make the first derivative an inline function
fInline = inline(f);     % make the supplied function inline

% Real root of the supplied function f = x + ln(x) = 0
% Derivative of f = 1 + 1/x
realroot = 0.56714329;

% Initialize Z
Z = zeros(1, 3);

% Start the counter
k = 1;

Z(1,1) = a;                        % the initial guess
Z(1,2) = abs(Z(1,1) - realroot);   % initial error
% No initial error ratio

while (Z(k,2) > tol && k < maxiter && Z(k,2) ~= 0)

    % Create the next guess
    Z(k+1,1) = Z(k,1) - fInline(Z(k,1))/fDiff(Z(k,1));

    % Determine the error of the newly created guess
    Z(k+1,2) = abs(Z(k+1,1) - realroot);

    % Determine the error ratio (quadratic: e_{i+1}/e_i^2)
    Z(k+1,3) = Z(k+1,2)/Z(k,2)^2;

    % Increase the counter
    k = k + 1;
end
6.4 Halley's Method Matlab Code

% Halley's Method
function Z = halley(f, a, tol, maxiter, var)

% INPUTS
%   f       - symbolic (NOT inline) function
%   a       - the initial guess
%   tol     - the tolerance we can be within (for 8-digit accuracy)
%   maxiter - maximum number of iterations allowed
%   var     - the variable supplied in the function

% Make the variable used a symbol
syms(var, 'real');

firstD = diff(f);         % first derivative of f
secondD = diff(firstD);   % second derivative of f

firstD = inline(firstD);     % first derivative of the supplied function, inline
secondD = inline(secondD);   % second derivative of the supplied function, inline
f = inline(f);               % the supplied function converted to inline

% Real root of the supplied function f = x + ln(x) = 0
% Derivative of f = 1 + 1/x
% Second derivative of f = -1/x^2
realroot = 0.56714329;

% Initialize Z
Z = zeros(1, 3);

% Start the counter
k = 1;

Z(1,1) = a;                        % the initial guess
Z(1,2) = abs(Z(1,1) - realroot);   % initial error
% No initial error ratio

while (Z(k,2) > tol && k < maxiter && Z(k,2) ~= 0)

    % Create the next guess (Halley update)
    Z(k+1,1) = Z(k,1) - (2*f(Z(k,1))*firstD(Z(k,1))) / ...
        (2*(firstD(Z(k,1)))^2 - f(Z(k,1))*secondD(Z(k,1)));

    % Determine the error of the newly created guess
    Z(k+1,2) = abs(Z(k+1,1) - realroot);

    % Determine the error ratio (cubic: e_{i+1}/e_i^3)
    Z(k+1,3) = Z(k+1,2)/Z(k,2)^3;

    % Increase the counter
    k = k + 1;
end
6.5 Maple Code

func := x + ln(x)
fsolve(func) = 0.5671432904
More informationFinding the Roots of f(x) = 0
Finding the Roots of f(x) = 0 Gerald W. Recktenwald Department of Mechanical Engineering Portland State University gerry@me.pdx.edu These slides are a supplement to the book Numerical Methods with Matlab:
More informationA Few Concepts from Numerical Analysis
2 A Few Concepts from Numerical Analysis A systematic treatment of numerical methods is provided in conventional courses and textbooks on numerical analysis. But a few very common issues, that emerge in
More informationScientific Computing: An Introductory Survey
Scientific Computing: An Introductory Survey Chapter 5 Nonlinear Equations Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002. Reproduction
More informationChapter 4. Solution of Non-linear Equation. Module No. 1. Newton s Method to Solve Transcendental Equation
Numerical Analysis by Dr. Anita Pal Assistant Professor Department of Mathematics National Institute of Technology Durgapur Durgapur-713209 email: anita.buie@gmail.com 1 . Chapter 4 Solution of Non-linear
More informationx 2 x n r n J(x + t(x x ))(x x )dt. For warming-up we start with methods for solving a single equation of one variable.
Maria Cameron 1. Fixed point methods for solving nonlinear equations We address the problem of solving an equation of the form (1) r(x) = 0, where F (x) : R n R n is a vector-function. Eq. (1) can be written
More informationIntroductory Numerical Analysis
Introductory Numerical Analysis Lecture Notes December 16, 017 Contents 1 Introduction to 1 11 Floating Point Numbers 1 1 Computational Errors 13 Algorithm 3 14 Calculus Review 3 Root Finding 5 1 Bisection
More informationSimple Iteration, cont d
Jim Lambers MAT 772 Fall Semester 2010-11 Lecture 2 Notes These notes correspond to Section 1.2 in the text. Simple Iteration, cont d In general, nonlinear equations cannot be solved in a finite sequence
More informationMath Numerical Analysis Mid-Term Test Solutions
Math 400 - Numerical Analysis Mid-Term Test Solutions. Short Answers (a) A sufficient and necessary condition for the bisection method to find a root of f(x) on the interval [a,b] is f(a)f(b) < 0 or f(a)
More informationMotivation: We have already seen an example of a system of nonlinear equations when we studied Gaussian integration (p.8 of integration notes)
AMSC/CMSC 460 Computational Methods, Fall 2007 UNIT 5: Nonlinear Equations Dianne P. O Leary c 2001, 2002, 2007 Solving Nonlinear Equations and Optimization Problems Read Chapter 8. Skip Section 8.1.1.
More information15 Nonlinear Equations and Zero-Finders
15 Nonlinear Equations and Zero-Finders This lecture describes several methods for the solution of nonlinear equations. In particular, we will discuss the computation of zeros of nonlinear functions f(x).
More informationChapter 2 Solutions of Equations of One Variable
Chapter 2 Solutions of Equations of One Variable 2.1 Bisection Method In this chapter we consider one of the most basic problems of numerical approximation, the root-finding problem. This process involves
More informationSkill 6 Exponential and Logarithmic Functions
Skill 6 Exponential and Logarithmic Functions Skill 6a: Graphs of Exponential Functions Skill 6b: Solving Exponential Equations (not requiring logarithms) Skill 6c: Definition of Logarithms Skill 6d: Graphs
More informationMath 4329: Numerical Analysis Chapter 03: Newton s Method. Natasha S. Sharma, PhD
Mathematical question we are interested in numerically answering How to find the x-intercepts of a function f (x)? These x-intercepts are called the roots of the equation f (x) = 0. Notation: denote the
More informationDetermining the Roots of Non-Linear Equations Part I
Determining the Roots of Non-Linear Equations Part I Prof. Dr. Florian Rupp German University of Technology in Oman (GUtech) Introduction to Numerical Methods for ENG & CS (Mathematics IV) Spring Term
More informationCS 323: Numerical Analysis and Computing
CS 323: Numerical Analysis and Computing MIDTERM #2 Instructions: This is an open notes exam, i.e., you are allowed to consult any textbook, your class notes, homeworks, or any of the handouts from us.
More informationNumerical Solution of f(x) = 0
Numerical Solution of f(x) = 0 Gerald W. Recktenwald Department of Mechanical Engineering Portland State University gerry@pdx.edu ME 350: Finding roots of f(x) = 0 Overview Topics covered in these slides
More informationNON-LINEAR ALGEBRAIC EQUATIONS Lec. 5.1: Nonlinear Equation in Single Variable
NON-LINEAR ALGEBRAIC EQUATIONS Lec. 5.1: Nonlinear Equation in Single Variable Dr. Niket Kaisare Department of Chemical Engineering IIT Madras NPTEL Course: MATLAB Programming for Numerical Computations
More informationx x2 2 + x3 3 x4 3. Use the divided-difference method to find a polynomial of least degree that fits the values shown: (b)
Numerical Methods - PROBLEMS. The Taylor series, about the origin, for log( + x) is x x2 2 + x3 3 x4 4 + Find an upper bound on the magnitude of the truncation error on the interval x.5 when log( + x)
More informationScientific Computing. Roots of Equations
ECE257 Numerical Methods and Scientific Computing Roots of Equations Today s s class: Roots of Equations Bracketing Methods Roots of Equations Given a function f(x), the roots are those values of x that
More informationCHAPTER 10 Zeros of Functions
CHAPTER 10 Zeros of Functions An important part of the maths syllabus in secondary school is equation solving. This is important for the simple reason that equations are important a wide range of problems
More informationNumerical Methods 5633
Numerical Methods 5633 Lecture 4 Michaelmas Term 2017 Marina Krstic Marinkovic mmarina@maths.tcd.ie School of Mathematics Trinity College Dublin Marina Krstic Marinkovic 1 / 17 5633-Numerical Methods Root
More informationNumerical Methods Dr. Sanjeev Kumar Department of Mathematics Indian Institute of Technology Roorkee Lecture No 7 Regula Falsi and Secant Methods
Numerical Methods Dr. Sanjeev Kumar Department of Mathematics Indian Institute of Technology Roorkee Lecture No 7 Regula Falsi and Secant Methods So welcome to the next lecture of the 2 nd unit of this
More informationp 1 p 0 (p 1, f(p 1 )) (p 0, f(p 0 )) The geometric construction of p 2 for the se- cant method.
80 CHAP. 2 SOLUTION OF NONLINEAR EQUATIONS f (x) = 0 y y = f(x) (p, 0) p 2 p 1 p 0 x (p 1, f(p 1 )) (p 0, f(p 0 )) The geometric construction of p 2 for the se- Figure 2.16 cant method. Secant Method The
More informationZeros of Functions. Chapter 10
Chapter 10 Zeros of Functions An important part of the mathematics syllabus in secondary school is equation solving. This is important for the simple reason that equations are important a wide range of
More informationToday s class. Numerical differentiation Roots of equation Bracketing methods. Numerical Methods, Fall 2011 Lecture 4. Prof. Jinbo Bi CSE, UConn
Today s class Numerical differentiation Roots of equation Bracketing methods 1 Numerical Differentiation Finite divided difference First forward difference First backward difference Lecture 3 2 Numerical
More informationHomework 2. Matthew Jin. April 10, 2014
Homework Matthew Jin April 10, 014 1a) The relative error is given by ŷ y y, where ŷ represents the observed output value, and y represents the theoretical output value. In this case, the observed output
More informationPractical Numerical Analysis: Sheet 3 Solutions
Practical Numerical Analysis: Sheet 3 Solutions 1. We need to compute the roots of the function defined by f(x) = sin(x) + sin(x 2 ) on the interval [0, 3] using different numerical methods. First we consider
More informationMath 128A: Homework 2 Solutions
Math 128A: Homework 2 Solutions Due: June 28 1. In problems where high precision is not needed, the IEEE standard provides a specification for single precision numbers, which occupy 32 bits of storage.
More information1.1: The bisection method. September 2017
(1/11) 1.1: The bisection method Solving nonlinear equations MA385/530 Numerical Analysis September 2017 3 2 f(x)= x 2 2 x axis 1 0 1 x [0] =a x [2] =1 x [3] =1.5 x [1] =b 2 0.5 0 0.5 1 1.5 2 2.5 1 Solving
More informationLine Search Methods. Shefali Kulkarni-Thaker
1 BISECTION METHOD Line Search Methods Shefali Kulkarni-Thaker Consider the following unconstrained optimization problem min f(x) x R Any optimization algorithm starts by an initial point x 0 and performs
More informationNUMERICAL METHODS. x n+1 = 2x n x 2 n. In particular: which of them gives faster convergence, and why? [Work to four decimal places.
NUMERICAL METHODS 1. Rearranging the equation x 3 =.5 gives the iterative formula x n+1 = g(x n ), where g(x) = (2x 2 ) 1. (a) Starting with x = 1, compute the x n up to n = 6, and describe what is happening.
More informationMAE 107 Homework 7 Solutions
MAE 107 Homework 7 Solutions 1. Tridiagonal system for u xx (x) + u(x) = 1 + sin(πx/4), for x [0, ], where u(0) =, u() = 4, with step size h = 1/. The total number of segments are: n = 0 1/ =. The nodes
More informationMath 56 Homework 1 Michael Downs. ne n 10 + ne n (1)
. Problem (a) Yes. The following equation: ne n + ne n () holds for all n R but, since we re only concerned with the asymptotic behavior as n, let us only consider n >. Dividing both sides by n( + ne n
More informationSolving nonlinear equations
Chapter Solving nonlinear equations Bisection Introduction Linear equations are of the form: find x such that ax + b = 0 Proposition Let f be a real-valued function that is defined and continuous on a
More informationRoot Finding For NonLinear Equations Bisection Method
Root Finding For NonLinear Equations Bisection Method P. Sam Johnson November 17, 2014 P. Sam Johnson (NITK) Root Finding For NonLinear Equations Bisection MethodNovember 17, 2014 1 / 26 Introduction The
More informationRoot finding. Root finding problem. Root finding problem. Root finding problem. Notes. Eugeniy E. Mikhailov. Lecture 05. Notes
Root finding Eugeniy E. Mikhailov The College of William & Mary Lecture 05 Eugeniy Mikhailov (W&M) Practical Computing Lecture 05 1 / 10 2 sin(x) 1 = 0 2 sin(x) 1 = 0 Often we have a problem which looks
More informationOutline. Scientific Computing: An Introductory Survey. Nonlinear Equations. Nonlinear Equations. Examples: Nonlinear Equations
Methods for Systems of Methods for Systems of Outline Scientific Computing: An Introductory Survey Chapter 5 1 Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign
More informationPlease print the following information in case your scan sheet is misplaced:
MATH 1100 Common Final Exam FALL 010 December 10, 010 Please print the following information in case your scan sheet is misplaced: Name: Instructor: Student ID: Section/Time: The exam consists of 40 multiple
More informationHence a root lies between 1 and 2. Since f a is negative and f(x 0 ) is positive The root lies between a and x 0 i.e. 1 and 1.
The Bisection method or BOLZANO s method or Interval halving method: Find the positive root of x 3 x = 1 correct to four decimal places by bisection method Let f x = x 3 x 1 Here f 0 = 1 = ve, f 1 = ve,
More informationA Review of Bracketing Methods for Finding Zeros of Nonlinear Functions
Applied Mathematical Sciences, Vol 1, 018, no 3, 137-146 HIKARI Ltd, wwwm-hikaricom https://doiorg/101988/ams018811 A Review of Bracketing Methods for Finding Zeros of Nonlinear Functions Somkid Intep
More informationAPPLICATIONS OF DIFFERENTIATION
4 APPLICATIONS OF DIFFERENTIATION APPLICATIONS OF DIFFERENTIATION 4.8 Newton s Method In this section, we will learn: How to solve high degree equations using Newton s method. INTRODUCTION Suppose that
More informationKevin Mitchell Assignment #2 Solutions MACM 316
Machine arithmatic. BF, p29 #25 The binomial coefficient ( m m! = k k!(m k! describes the number of ways of choosing a subset of k objects from a set of m elements... ( point Suppose decimal machine numbers
More informationFIXED POINT ITERATION
FIXED POINT ITERATION The idea of the fixed point iteration methods is to first reformulate a equation to an equivalent fixed point problem: f (x) = 0 x = g(x) and then to use the iteration: with an initial
More informationMAE 107 Homework 8 Solutions
MAE 107 Homework 8 Solutions 1. Newton s method to solve 3exp( x) = 2x starting at x 0 = 11. With chosen f(x), indicate x n, f(x n ), and f (x n ) at each step stopping at the first n such that f(x n )
More informationSkill 6 Exponential and Logarithmic Functions
Skill 6 Exponential and Logarithmic Functions Skill 6a: Graphs of Exponential Functions Skill 6b: Solving Exponential Equations (not requiring logarithms) Skill 6c: Definition of Logarithms Skill 6d: Graphs
More informationRoot Finding (and Optimisation)
Root Finding (and Optimisation) M.Sc. in Mathematical Modelling & Scientific Computing, Practical Numerical Analysis Michaelmas Term 2018, Lecture 4 Root Finding The idea of root finding is simple we want
More informationNumerical Optimization
Unconstrained Optimization (II) Computer Science and Automation Indian Institute of Science Bangalore 560 012, India. NPTEL Course on Unconstrained Optimization Let f : R R Unconstrained problem min x
More informationIterative Methods. Splitting Methods
Iterative Methods Splitting Methods 1 Direct Methods Solving Ax = b using direct methods. Gaussian elimination (using LU decomposition) Variants of LU, including Crout and Doolittle Other decomposition
More information