NEWTON's Method in Comparison with the Fixed Point Iteration


Univ.-Prof. Dr.-Ing. habil. Josef BETTEN
RWTH Aachen University
Mathematical Models in Materials Science and Continuum Mechanics
Augustinerbach 4-20, Aachen, Germany
betten@mmw.rwth-aachen.de

Abstract

This worksheet is concerned with finding numerical solutions of non-linear equations in a single unknown. Using MAPLE, NEWTON's method has been compared with the fixed-point iteration. Some examples have been discussed in more detail.

Keywords: NEWTON's method; zero form and fixed point form; BANACH's fixed-point theorem; convergence order

Convergence Order

A sequence with a high order of convergence converges more rapidly than a sequence with a lower order. In this worksheet we will see that NEWTON's method is quadratically convergent, while the fixed-point iteration converges only linearly to a fixed point. Some examples illustrate the convergence of both iterations in the following. Before several examples are discussed in more detail, let us list some definitions.

A value x = p is called a fixed point of a given function g(x) if g(p) = p. In finding the solution x = p of f(x) = 0 one can define functions g(x) with a fixed point at x = p in several ways, for example as g(x) = x - f(x) or as g(x) = x - h(x)*f(x), where h(x) is a continuous function not equal to zero within the interval [a, b] considered. The iteration process is expressed by

restart:
x[n+1]:=g(x[n]);   # n = 0, 1, 2, ...

with a selected starting value x[0] in the neighbourhood of the expected fixed point x = p. A unique solution of f(x) = 0 exists if BANACH's fixed-point theorem is fulfilled: Let g(x) be a continuous function mapping [a, b] into itself. Assume, in addition, that g'(x) exists on (a, b) and that a constant L in [0, 1) exists with

abs(diff(g(x),x)) <= L;

    | g'(x) | <= L   for all x in [a, b].

Then, for any selected initial value in [a, b], the sequence defined by

x[n+1]:=g(x[n]);   # n = 0, 1, 2, ...

converges to the unique fixed point x = p in [a, b]. The constant L is known as the LIPSCHITZ constant. Based upon the mean value theorem we arrive from the above assumption at

abs(g(x)-g(xi)) <= L*abs(x-xi);

    | g(x) - g(xi) | <= L | x - xi |   for all x and xi in [a, b].

The BANACH fixed-point theorem is sometimes called the contraction mapping principle.

A sequence converges to p of order alpha if

restart:
Limit(abs(x[n+1]-p)/abs(x[n]-p)^alpha, n=infinity) = epsilon;

    lim (n -> infinity)  | x[n+1] - p | / | x[n] - p |^alpha  =  epsilon

with an asymptotic error constant epsilon. Another definition of the convergence order is given by

abs(x[n+1]-p) <= C*abs(x[n]-p)^alpha;

    | x[n+1] - p | <= C | x[n] - p |^alpha,

where C is a constant. The fixed-point iteration converges linearly (alpha = 1) with a constant C in (0, 1). This can be shown, for example, as follows. The TAYLOR expansion of g about x[n] reads

g(x):=g(x[n])+(x-x[n])*`g'`(x[n])+(x-x[n])^2*`g''`(xi)/2;

where xi in the remainder term lies between x and x[n]. For x = p we arrive at

g(p):=g(x[n])+(p-x[n])*`g'`(x[n])+(p-x[n])^2*`g''`(xi)/2;

The third term on the right-hand side may be neglected, because x[n] is an approximation near the fixed point, so that (p - x[n])^2 << | p - x[n] |; hence

g(p):=g(x[n])+(p-x[n])*`g'`(x[n]);

With the fixed-point relations g(p) = p and g(x[n]) = x[n+1], this equation yields

abs(x[n+1]-p) = abs(`g'`(x[n]))*abs(x[n]-p);

    | x[n+1] - p | = | g'(x[n]) | | x[n] - p |,

where | g'(x[n]) | <= L, so that

    | x[n+1] - p | <= L | x[n] - p |.

Thus, the fixed-point iteration converges linearly to the fixed point p.

In contrast, NEWTON's method is quadratically convergent. This can be shown, for example, as follows. The TAYLOR expansion of the iteration function G about the fixed point p reads

G(x):=G(p)+(x-p)*`G'`(p)+(x-p)^2*`G''`(xi)/2;

where xi lies between x and p. For x = x[n] we have

G(x[n]) = G(p)+(x[n]-p)*`G'`(p)+(x[n]-p)^2*`G''`(zeta)/2;

where zeta lies between x[n] and p. Because of G(x[n]) = x[n+1] and G(p) = p we get

x[n+1] - p = (x[n]-p)*`G'`(p)+(x[n]-p)^2*`G''`(zeta)/2;

The first term on the right-hand side must be equal to zero if the iteration x[n+1] = G(x[n]) is to converge quadratically to the fixed point p:

`G'`(p):=0;
x[n+1] - p = `G''`(zeta)*(x[n]-p)^2/2;

Hence NEWTON's method has the convergence order alpha = 2, provided that G'(p) = 0.
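As an aside that is not part of the original worksheet, the linear error reduction derived above can be observed numerically. The following minimal MAPLE sketch uses g(x) = cos(x), the fixed-point iteration function of the first example below, and prints the error ratios | x[n+1] - p | / | x[n] - p |:

# aside (not in the original worksheet): error ratios of a linearly convergent iteration
restart:
g := x -> cos(x):
p := fsolve(x - cos(x) = 0, x):        # fixed point, p = 0.7390851332
xn := 0.7:
for n from 1 to 5 do
  xnew := evalf(g(xn));
  printf("n = %d   ratio = %.4f\n", n, abs(xnew - p)/abs(xn - p));
  xn := xnew;
end do:
# the printed ratios settle near |g'(p)| = sin(p), approximately 0.67, i.e. linear convergence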

The construction of such an iteration function starts from

restart:
G(x):=x-h(x)*f(x);   # iteration function
`G'`(x):=diff(G(x),x);

    G'(x) := 1 - `h'`(x)*f(x) - h(x)*`f '`(x)

`G'`(p):=subs(x=p,%);

    G'(p) := 1 - `h'`(p)*f(p) - h(p)*`f '`(p)

At the fixed point p we require G'(p) = 0, and there f(p) = 0. Hence:

h(p):=1/`f '`(p);   h(x):=1/`f '`(x);   G(x):=x-f(x)/`f '`(x);

Because of x[n+1] = G(x[n]) we find NEWTON's iteration method:

x[n+1]:=x[n]-f(x[n])/`f '`(x[n]);

NEWTON's iteration method has the disadvantage that it cannot be continued if f '(x[n]) is equal to zero for any step x[n]. However, the method is most effective if f '(x) is bounded away from zero near the fixed point p. Note that in cases where there is a point of inflection or a horizontal tangent of the function f(x) in the vicinity of the fixed point, the sequence x[n+1] = G(x[n]) need not converge to the fixed point. Thus, before applying NEWTON's method, one should investigate the behaviour of the derivatives f '(x) and f ''(x) in the neighbourhood of the expected fixed point.

Instead of the analytical derivation of NEWTON's method one can find the approximations x[n] to the fixed point p by using tangents to the graph of the given function f(x) with f(p) = 0. Beginning with x[0] we obtain the first approximation x[1] as the x-intercept of the tangent line to the graph of f(x) at (x[0], f(x[0])). The next approximation x[2] is the x-intercept of the tangent line to the graph of f(x) at (x[1], f(x[1])), and so on. Following this procedure we arrive again at NEWTON's iteration, characterized by the iteration function G(x) defined before. The derivative of this iteration function is given by

restart:
Diff(G(x),x) = simplify(1-((diff(f(x),x))^2-f(x)*diff(f(x),x$2))/diff(f(x),x)^2);

    G'(x) := f(x)*`f ''`(x)/(`f '`(x))^2

At the fixed point the given function has a zero, f(p) = 0. Hence G'(p) = 0, and NEWTON's method converges quadratically.
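The derivation above can be condensed into a small reusable procedure. The following sketch is not part of the original worksheet; the procedure name NewtonIter and its argument list are assumptions chosen for illustration only:

# assumed helper (not from the worksheet): plain NEWTON iteration x[n+1] = x[n] - f(x[n])/f'(x[n])
NewtonIter := proc(f, x0, nmax)
  local x, n;
  x := evalf(x0);
  for n from 1 to nmax do
    x := evalf(x - f(x)/D(f)(x));   # one NEWTON step; D(f) is the derivative operator
  end do;
  return x;
end proc:

# usage, anticipating the first example below:
# NewtonIter(x -> x - cos(x), 0.7, 4);   # returns approximately 0.7390851332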

Résumé: NEWTON's method converges optimally (G'(p) = 0) to the fixed point p unless f '(x[n]) = 0 for some x[n]. The derivative G'(p) = 0 implies quadratic convergence.

Examples

The first example is concerned with the root-finding problem f(x) = x - cos(x) = 0, using the iteration functions g(x) and G(x) characterized by linear and quadratic convergence, respectively.

restart:
f(x):=x-cos(x);
p:=fsolve(f(x)=0,x);   # MAPLE solution by the command "fsolve"

    p := 0.7390851332

`f '`(x):=diff(f(x),x);     # f '(x) := 1 + sin(x)
`f ''`(x):=diff(f(x),x$2);  # f ''(x) := cos(x)

[MAPLE plot commands omitted; the figure shows f(x), f '(x), and f ''(x) on 0 <= x <= Pi/2, with the fixed point p = 0.739 marked.]

In this figure we see that the first derivative f '(x) is not equal to zero in the entire range considered, which is an essential condition for the application of NEWTON's method. The iteration functions are given by

g(x):=x-f(x);                  # g(x) := cos(x)
G(x):=x-f(x)/diff(f(x),x);     # G(x) := x - (x - cos(x))/(1 + sin(x))

[MAPLE plot commands omitted; the figure "Iteration Functions g(x) and G(x)" shows both iteration functions together with the line y = x on 0 <= x <= 1, with the fixed point (0.739, 0.739) marked.]

Both operators, G and g, map the interval [0, 1] into itself. The iteration function G(x) has a horizontal tangent at x = p because of G'(p) = 0, id est quadratic convergence. Now, let us discuss the absolute derivatives of the iteration functions.

abs(`g'`(x))=abs(diff(g(x),x));

    | g'(x) | = | sin(x) |

abs(`G'`(x))=abs(diff(G(x),x));

    | G'(x) | = | (x - cos(x))*cos(x) | / (1 + sin(x))^2

[MAPLE plot commands omitted; the figure "Absolute Derivatives |G'(x)| and |g'(x)|" shows both curves on 0 <= x <= 1.]

This figure illustrates that both derivatives exist on (0, 1), with | g'(x) | <= L and | G'(x) | <= K for all x in [0, 1], where K < 1 and L < 1. Considering the last two figures, we establish that both iteration functions are compatible with BANACH's fixed-point theorem. The iterations generated by g(x) and G(x) are given as follows.

x[0]:=0.7;
x[1]:=evalf(subs(x=0.7,g(x)));

Fixed Point Iteration:

for i from 2 to 25 do x[i]:=evalf(subs(x=%,g(x))) od;

[The numerical values of the iterates x[1], x[2], ..., x[25] printed by MAPLE are omitted here.]

The 25th iteration x[25] is nearly identical to the MAPLE solution p = 0.7390851332.

NEWTON's Method:

X[0]:=0.7;
X[1]:=evalf(subs(x=%,G(x)));
for i from 2 to 5 do X[i]:=evalf(subs(x=%,G(x))) od;

[The numerical values of the iterates X[1], ..., X[5] are omitted here.]

The 3rd iteration is already identical to the MAPLE solution. In contrast, the fixed-point method needs about 25 iterations: the NEWTON method converges quadratically, while the fixed-point sequence is only linearly convergent.
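As a supplementary check that is not contained in the original worksheet, the quadratic error decay of NEWTON's method for this example can be made visible by printing the errors | X[n] - p | with increased working precision:

# aside (not from the worksheet): quadratic error decay of NEWTON's method for f(x) = x - cos(x)
restart:
Digits := 30:
f := x -> x - cos(x):
G := x -> x - f(x)/D(f)(x):
p := fsolve(f(x) = 0, x):
X := 0.7:
for n from 1 to 3 do
  X := evalf(G(X));
  printf("n = %d   error = %.2e\n", n, abs(X - p));
end do:
# each printed error is roughly proportional to the square of the previous one, as expected for alpha = 2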

Similar to the first example, the next one is concerned with solving the problem f(x) = 0, where

restart:
f(x):=x-(sin(x)+cos(x))/2;
p:=fsolve(f(x)=0,x);   # fixed point by MAPLE (p = 0.7048, cf. the figures below)

`f '`(x):=diff(f(x),x);     # f '(x) := 1 - cos(x)/2 + sin(x)/2
`f ''`(x):=diff(f(x),x$2);  # f ''(x) := sin(x)/2 + cos(x)/2

[MAPLE plot commands omitted; the figure shows f(x), f '(x), and f ''(x) on 0 <= x <= Pi/2, with the fixed point marked near x = 0.705.]

In this figure we see that the first derivative f '(x) is not equal to zero in the entire range considered, which is an essential condition for the application of NEWTON's method. The iteration functions are given by

g(x):=x-f(x);                  # g(x) := (sin(x) + cos(x))/2
G(x):=x-f(x)/diff(f(x),x);     # G(x) := x - (x - sin(x)/2 - cos(x)/2)/(1 - cos(x)/2 + sin(x)/2)

[MAPLE plot commands omitted; the figure "Iteration Functions g(x) and G(x)" shows both iteration functions together with the line y = x on 0 <= x <= 1, with the fixed point (0.7048, 0.7048) marked.]

Both operators, g and G, map the interval [0, 1] into itself. The iteration function G(x) has a horizontal tangent at the fixed point p because of G'(p) = 0, id est quadratic convergence. In contrast, the iteration function g(x) has a horizontal tangent at x = Pi/4 = 0.7853981634, id est in the neighbourhood of the fixed point. Now let us discuss the absolute derivatives of the iteration functions.

abs(`g'`(x))=abs(diff(g(x),x));

    | g'(x) | = | cos(x)/2 - sin(x)/2 |

abs(`G'`(x))=abs(diff(G(x),x));

    | G'(x) | = | (x - sin(x)/2 - cos(x)/2)*(sin(x)/2 + cos(x)/2) | / (1 - cos(x)/2 + sin(x)/2)^2

[MAPLE plot commands omitted; the figure "Absolute Derivatives |g'(x)| and |G'(x)|" shows both curves on 0 <= x <= 1.]

This figure illustrates that both derivatives exist on (0, 1), with | g'(x) | <= L and | G'(x) | <= K for all x in [0, 1], where K < 1 and L < 1. Considering the last two figures, we establish that both iteration functions are compatible with BANACH's fixed-point theorem. The iterations generated by g(x) and G(x) are given as follows.

x[0]:=0.5;
x[1]:=evalf(subs(x=0.5,g(x)));

Fixed Point Iteration:

for i from 2 to 10 do x[i]:=evalf(subs(x=%,g(x))) od;

[The numerical values of the iterates x[1], ..., x[10] are omitted here.]

The 9th iteration x[9] is identical to the MAPLE solution p based upon the command fsolve.

NEWTON's Method:

X[0]:=0.5;
X[1]:=evalf(subs(x=0.5,G(x)));
for i from 2 to 6 do X[i]:=evalf(subs(x=%,G(x))) od;

[The numerical values of the iterates X[1], ..., X[6] are omitted here.]

The 5th iteration X[5] is already identical to the MAPLE solution. In contrast, the fixed-point method needs 9 iterations.
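A short aside, not part of the original worksheet, explains the different numbers of fixed-point iterations in the first two examples: the asymptotic error constant of linear convergence is | g'(p) |, which can be estimated with the rounded fixed points p = 0.7391 and p = 0.7048 taken from the figures above:

# aside (not from the worksheet): size of |g'(p)| governs the speed of linear convergence
evalf(abs(-sin(0.7391)));                  # example 1:  |g'(p)| is about 0.67
evalf(abs((cos(0.7048) - sin(0.7048))/2)); # example 2:  |g'(p)| is about 0.06
# the much smaller constant of the second example explains why it needs only about
# 9 fixed-point iterations instead of about 25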

The next example is concerned with the zero form f(x) = x - exp(x^2-2) = 0.

restart:
f(x):=x-exp(x^2-2);
p:=fsolve(f(x)=0,x);   # fixed-point by MAPLE

`f '`(x):=diff(f(x),x);     # f '(x) := 1 - 2*x*exp(x^2-2)
`f ''`(x):=diff(f(x),x$2);  # f ''(x) := -2*exp(x^2-2) - 4*x^2*exp(x^2-2)

[MAPLE plot commands omitted; the figure shows f(x), f '(x), and f ''(x) on 0 <= x <= 0.5.]

In this figure we see that the first derivative f '(x) is not equal to zero in the entire range considered, which is a necessary condition for convergence of the NEWTON method. The iteration functions are given by

g(x):=x-f(x);                  # g(x) := exp(x^2-2)
G(x):=x-f(x)/diff(f(x),x);     # G(x) := x - (x - exp(x^2-2))/(1 - 2*x*exp(x^2-2))

[MAPLE plot commands omitted; the figure "Iteration Functions g(x) and G(x)" shows both iteration functions together with the line y = x on 0 <= x <= 0.5, with the fixed point marked.]

Both operators, g and G, map the interval [0, 0.5] into itself. The iteration function G(x) has a horizontal tangent at the fixed point p because of G'(p) = 0, id est quadratic convergence. In contrast, the iteration function g(x) has a horizontal tangent at x = 0. Now let us discuss the absolute derivatives of the iteration functions.

abs(`g'`(x))=abs(diff(g(x),x));

    | g'(x) | = | 2*x*exp(x^2-2) |

abs(`G'`(x))=abs(diff(G(x),x));

    | G'(x) | = | (x - exp(x^2-2))*(-2*exp(x^2-2) - 4*x^2*exp(x^2-2)) | / (1 - 2*x*exp(x^2-2))^2

[MAPLE plot commands omitted; the figure "Absolute Derivatives |g'(x)| and |G'(x)|" shows both curves on 0 <= x <= 0.5.]

This figure illustrates that both derivatives exist on (0, 0.5), with | g'(x) | <= L and | G'(x) | <= K for all x in [0, 0.5], where K < 1 and L < 1. Considering the last two figures, we find that both iteration functions are compatible with BANACH's fixed-point theorem. The iterations generated by g(x) and G(x) are listed in the following.

x[0]:=0.1;
x[1]:=evalf(subs(x=0.1,g(x)));

Fixed Point Iteration:

for i from 2 to 8 do x[i]:=evalf(subs(x=%,g(x))) od;

[The numerical values of the iterates x[1], ..., x[8] are omitted here.]

The 7th iteration x[7] is identical to the MAPLE solution p based upon the command fsolve.

NEWTON's Method:

X[0]:=0.1;
X[1]:=evalf(subs(x=0.1,G(x)));
for i from 2 to 5 do X[i]:=evalf(subs(x=%,G(x))) od;

[The numerical values of the iterates X[1], ..., X[5] are omitted here.]

The 4th iteration X[4] is already identical to the MAPLE solution. In contrast, the fixed-point method needs 7 iterations.

Another example is concerned with the root-finding problem f(x) = 0, where

restart:
f(x):=1+cosh(x)*cos(x);
p:=fsolve(f(x)=0,x);   # fixed-point by MAPLE

    p := 1.875104069

`f '`(x):=diff(f(x),x);     # f '(x) := sinh(x)*cos(x) - cosh(x)*sin(x)
`f ''`(x):=diff(f(x),x$2);  # f ''(x) := -2*sinh(x)*sin(x)

[MAPLE plot commands omitted; the figure shows f(x), f '(x), and f ''(x) on 1.5 <= x <= 2, with the fixed point p = 1.875 marked.]

In this figure we see that the first derivative f '(x) is not equal to zero in the vicinity of the fixed point, which is a necessary condition for the application of NEWTON's method. Its iteration function is given by

G(xi):=xi-f(xi)/diff(f(xi),xi);
G(x):=x-f(x)/diff(f(x),x);     # G(x) := x - (1 + cosh(x)*cos(x))/(sinh(x)*cos(x) - cosh(x)*sin(x))

x[n+1]:=G(x[n]);
x[0]:=2;
x[1]:=evalf(subs(x=2,G(x)));
for i from 2 to 5 do x[i]:=evalf(subs(x=%,G(x))) od;

[The numerical values of the iterates x[1], ..., x[5] are omitted here.]

The 4th iteration x[4], beginning with the starting point x[0] = 2, is already identical to the MAPLE solution. Selecting the starting point x[0] = 1.2, the 6th iteration x[6] is identical to the fixed point. However, the starting point x[0] = 1 does not lead to convergence. To improve the convergence, or to obtain convergence at all, which is necessary in cases of small derivatives | f '(x) | << 1, one can extend the classical NEWTON method in the following way:

restart:
X[n+1]:=G(X[n]);
G(X):=X-lambda*h(X)*f(X);
`G'`(X):=diff(G(X),X);

    G'(X) := 1 - lambda*(`h'`(X)*f(X) + h(X)*`f '`(X))

lambda[LAGRANGE]:=solve(diff(G(X),X)=0,lambda);

    lambda[LAGRANGE] := 1/(`h'`(X)*f(X) + h(X)*`f '`(X))

G(X):=subs(lambda=%,G(X));

    G(X) := X - h(X)*f(X)/(`h'`(X)*f(X) + h(X)*`f '`(X))

Assuming h(x) = -exp(-x), we arrive at

h(x):=-exp(-x);

G(x):=subs({X=x,h(X)=h(x)},G(X));
G(xi):=xi-f(xi)/(`f '`(xi)-f(xi));

    G(xi) := xi - f(xi)/(f '(xi) - f(xi))

f(x):=1+cosh(x)*cos(x);
G(x):=x-f(x)/(diff(f(x),x)-f(x));

    G(x) := x - (1 + cosh(x)*cos(x))/(sinh(x)*cos(x) - cosh(x)*sin(x) - 1 - cosh(x)*cos(x))

x[0]:=2;
x[1]:=evalf(subs(x=2,G(x)));
for i from 2 to 5 do x[i]:=evalf(subs(x=%,G(x))) od;

[The numerical values of the iterates x[1], ..., x[5] are omitted here.]

We see that the iteration has been improved from 4 to 3 iterations. With the starting point x[0] = 1, the 4th iteration leads to the fixed point, instead of the divergence observed with the classical NEWTON method.

The fixed-point iteration

x[n+1]:=g(x[n])=x[n]-h(x[n])*f(x[n]);

does not fulfill BANACH's theorem for h(x) = 1. A compatible function is given by h(x) = -exp(-x) introduced before. Thus, we arrive at the following iteration function:

g(x):=x+exp(-x)*f(x);

    g(x) := x + exp(-x)*(1 + cosh(x)*cos(x))

x[0]:=2;
x[1]:=evalf(subs(x=%,g(x)));
for i from 2 to 22 do x[i]:=evalf(subs(x=%,g(x))) od;

[The numerical values of the iterates x[1], ..., x[22] are omitted here.]

The 20th iteration x[20] leads to the fixed point, in contrast to only 4 or 3 iterations based upon NEWTON's classical or extended method, respectively.
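The extended NEWTON iteration obtained with h(x) = -exp(-x) can be collected in a small reusable procedure. The following sketch is not part of the original worksheet; the procedure name ExtNewton is an assumption chosen for illustration only:

# assumed helper (not from the worksheet): extended NEWTON step x[n+1] = x[n] - f(x[n])/(f'(x[n]) - f(x[n]))
ExtNewton := proc(f, x0, nmax)
  local x, n;
  x := evalf(x0);
  for n from 1 to nmax do
    x := evalf(x - f(x)/(D(f)(x) - f(x)));   # extended NEWTON step
  end do;
  return x;
end proc:

# usage for the present example f(x) = 1 + cosh(x)*cos(x):
# ExtNewton(x -> 1 + cosh(x)*cos(x), 2.0, 4);   # approaches p = 1.875104069
# ExtNewton(x -> 1 + cosh(x)*cos(x), 1.0, 5);   # converges even from x[0] = 1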

The following two figures should illustrate that both iteration functions, g(x) and G(x), are compatible with BANACH's fixed-point theorem.

[MAPLE plot commands omitted; the figure "G(x) in comparison with g(x)" shows both iteration functions together with the line y = x on 1.5 <= x <= 2, with the fixed point (1.875, 1.875) marked.]

Both operators, G and g, map the interval [1.5, 2] into itself. The function G(x) has a horizontal tangent at the fixed point p within the interval considered; this means quadratic convergence of the extended NEWTON method. Corresponding to BANACH's theorem, the absolute derivatives | g'(x) | and | G'(x) | should be less than one, as shown in the next figure.

abs(`g'`(x))=abs(diff(g(x),x));

    | g'(x) | = | 1 - exp(-x)*(1 + cosh(x)*cos(x)) + exp(-x)*(sinh(x)*cos(x) - cosh(x)*sin(x)) |

abs(`G'`(x))=abs(diff(G(x),x));   # the resulting expression is lengthy and not reproduced here

[MAPLE plot commands omitted; the figure "Absolute Derivatives |g'(x)| and |G'(x)|" shows both curves on 1.5 <= x <= 2, with the value |g'(p)| = 0.3655 at the fixed point marked.]

The last two figures illustrate that both iterations, extended NEWTON and fixed-point, are compatible with BANACH's theorem. Both operators, g and G, map the interval [1.5, 2] into itself. In addition, both derivatives exist on (1.5, 2), with | g'(x) | <= L and | G'(x) | <= K for all x in [1.5, 2], where K < L < 1. The number L is the LIPSCHITZ constant. The extended NEWTON method converges quadratically because of G'(p) = 0.

Another example illustrates, as before, that the extended NEWTON method is most effective in cases when the first derivative f '(x[n]) is equal to zero or very small for some step x[n], so that the classical NEWTON method does not work.

restart:
f(x):=x-2*sin(x);
p:=fsolve(f(x)=0,x,1..2);   # fixed-point immediately found by the MAPLE command "fsolve"

    p := 1.895494267

`f '`(x):=diff(f(x),x);       # f '(x) := 1 - 2*cos(x)
`f '`(P):=evalf(subs(x=p,%));
`f ''`(x):=diff(f(x),x$2);    # f ''(x) := 2*sin(x)
`f ''`(P):=evalf(subs(x=p,%));

[MAPLE plot commands omitted; the figure shows f(x), f '(x), and f ''(x) on 0 <= x <= Pi, with the fixed point p = 1.8955 marked.]

In the vicinity of the fixed point the first derivative f '(x) is not very small, so that the classical NEWTON method can work if the starting point x[0] is close enough to the expected fixed point. However, in order to test the extended NEWTON formula, the starting point should be selected, for instance, at x = 1, close to the zero of f '(x):

X[ZERO]:=fsolve(diff(f(x),x)=0,x);

    X[ZERO] := 1.047197551

Classical NEWTON Method

G(xi):=xi-f(xi)/`f '`(xi);
G(x):=x-f(x)/diff(f(x),x);     # G(x) := x - (x - 2*sin(x))/(1 - 2*cos(x))

x[0]:=1;
x[1]:=evalf(subs(x=1,G(x)));   # starting point
for i from 2 to 7 do x[i]:=evalf(subs(x=%,G(x))) od;

[The numerical values of the iterates x[1], ..., x[7] are omitted here.]

In the vicinity of the selected starting point x[0] = 1 the first derivative f '(x) is very small. Thus, the classical NEWTON method does not converge. With the same starting point we obtain convergence by applying the extended NEWTON formula:

restart:
f(x):=x-2*sin(x);
p:=fsolve(f(x)=0,x,1..2);

    p := 1.895494267

G(xi):=xi-f(xi)/(`f '`(xi)-f(xi));
G(x):=x-f(x)/(diff(f(x),x)-f(x));     # G(x) := x - (x - 2*sin(x))/(1 - 2*cos(x) - x + 2*sin(x))

x[0]:=1;
x[1]:=evalf(subs(x=1,G(x)));   # starting point
for i from 2 to 7 do x[i]:=evalf(subs(x=%,G(x))) od;

[The numerical values of the iterates x[1], ..., x[7] are omitted here.]

We see that the 5th iteration x[5] already leads to convergence, although the first derivative f '(x) is very small in the neighbourhood of the selected starting point. The extended NEWTON method is most effective in cases of small derivatives f '(x).

This worksheet is concerned with finding numerical solutions of non-linear equations in a single unknown. A generalization to systems of non-linear equations has been discussed in more detail, for instance, by BETTEN, J.: Finite Elemente für Ingenieure 2, 2nd ed., Springer-Verlag, Berlin/Heidelberg/New York, 2004.
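To hint at the generalization mentioned above, the following sketch, which is not part of the original worksheet, performs NEWTON steps x[n+1] = x[n] - J(x[n])^(-1)*F(x[n]) for a simple 2x2 system; the system, the Jacobian, and all names are assumptions chosen purely for illustration:

# aside (not from the worksheet): NEWTON's method for a small system of non-linear equations
restart:
with(LinearAlgebra):
F := (x, y) -> Vector([x^2 + y^2 - 1, x - y]):   # example system F(x, y) = 0
J := (x, y) -> Matrix([[2*x, 2*y], [1, -1]]):    # its Jacobian matrix
X := Vector([1.0, 0.5]):                          # starting vector
for n from 1 to 5 do
  X := X - LinearSolve(J(X[1], X[2]), F(X[1], X[2]));   # one NEWTON step
end do:
X;   # converges to (1/sqrt(2), 1/sqrt(2)), i.e. approximately (0.7071, 0.7071)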
