Project on Newton-Raphson Method
Nguyen Quan Ba Hong
Doan Tran Nguyen Tung

Students at the Faculty of Mathematics and Computer Science,
Ho Chi Minh University of Science, Vietnam

June 9, 2016

Abstract

This paper contains my team's notes about the Newton-Raphson method.

Copyright © 2016 by Nguyen Quan Ba Hong, student at Ho Chi Minh University of Science, Vietnam. This document may be copied freely for the purposes of education and non-commercial research. Visit my site to get more.
Contents

1 Introduction
1.1 Historical notes
1.2 Later studies
1.3 A glance at the Newton-Raphson method
2 The Newton-Raphson Method
2.1 Geometric viewpoint
2.2 Analytical viewpoint
3 Selected examples
4 The Newton-Raphson method has fallen!
4.1 The Newton method can go bad
4.2 Drawbacks
4.3 What is wrong with Newton-Raphson
4.4 The Newton-Raphson method is not always applicable
5 Analysis and explanation
6 Convergence conditions
7 Improvements
7.1 Cubic iteration
7.2 Householder's iteration
8 High order iteration
8.1 Householder's methods
8.2 Modified methods
9 Generalization
10 Convergence of the Newton-Raphson method
11 Newton's method for several variables
12 Extension to systems of equations
1 Introduction

1.1 Historical notes

Since it is not possible to solve all equations of the form f(x) = 0 exactly, an efficient method of approximating solutions is useful. The algorithm discussed in this paper was discovered by Sir Isaac Newton, who formulated the result in 1669. Later improved by Joseph Raphson in 1690, the algorithm is presently known as the Newton-Raphson method, or more commonly Newton's method. Newton's method involves choosing an initial guess x_0 and then, through an iterative process, finding a sequence of numbers x_0, x_1, x_2, x_3, ... that converges to a solution.

Some functions may have several roots. Later we shall see that the root to which Newton's method converges depends on the initial guess x_0. The behavior of Newton's method, or the pattern of which initial guesses lead to which zeros, can be interesting even for polynomials. When generalized to the complex plane, Newton's method leads to beautiful pictures.

A method for finding the roots of an arbitrary function that uses the derivative was first circulated by Isaac Newton in 1669. John Wallis published Newton's method in 1685, and in 1690 Joseph Raphson published an improved version, essentially the form in which we use it today. Newton's work was done in 1669 but published much later. Numerical methods related to the Newton method were used by al-Kashi, Viète, Briggs, and Oughtred, all many years before Newton.

Raphson, some 20 years after Newton, got close to the modern formulation, but only for polynomials P(y) of degree 3, 4, 5, ..., 10. Given an estimate g for a root, Raphson computes an improved estimate g + x. He sets P(g + x) = 0, expands, discards terms in x^k with k ≥ 2, and solves for x. For polynomials, Raphson's procedure is equivalent to linear approximation. Raphson, like Newton, seems unaware of the connection between his method and the derivative. The connection was made about 50 years later, by Simpson and Euler, and the Newton method finally moved beyond polynomial equations.
The familiar geometric interpretation of the Newton method may have been first used by Mourraille in 1768. Analysis of the convergence of the Newton method had to wait until Fourier and Cauchy in the 1820s.

1.2 Later studies

The method was then studied and generalized by other mathematicians such as Simpson, Mourraille, Cauchy, and Kantorovich. The important question of the choice of the starting point was first approached by Mourraille in 1768, and the difficulty of making this choice is
the main drawback of the algorithm. Although the bisection method will always converge on the root, its rate of convergence is very slow. A faster method for converging on a single root of a function is the Newton-Raphson method. It is perhaps the most widely used of all root-locating formulas.

1.3 A glance at the Newton-Raphson method

This section is concerned with the problem of root location, i.e. finding those values of x which satisfy an equation of the form f(x) = 0 for a given function f(x). An initial estimate of the root is found, for example by drawing a graph of the function in the neighborhood of the root. This estimate is then improved by a technique known as the Newton-Raphson method, which is based upon a knowledge of the tangent to the curve near the root. It is an iterative method in that it can be applied repeatedly to continually improve the accuracy of the root.

The Newton-Raphson method, or Newton method, is a powerful technique for solving equations numerically. Like so much of differential calculus, it is based on the simple idea of linear approximation. The Newton method, properly used, usually homes in on a root with devastating efficiency.

2 The Newton-Raphson Method

The Newton-Raphson method is one of the most effective methods for finding roots of f(x) = 0 by iteration. It consists of the following steps:

1. Pick a point x_0 close to a root and find the corresponding point (x_0, f(x_0)) on the curve.
2. Draw the tangent line to the curve at that point, and see where it crosses the x-axis.
3. The crossing point, x_1, is your next guess. Repeat the process starting from that point.

In fact there are many ways to improve this numerical search for a root. In this section we examine one of the best methods: the Newton-Raphson method. To obtain the method we examine the general characteristics of a curve in the neighborhood of a simple root.
Figure 1: A curve f(x) with a simple root at x = x*.

Figure 2: The tangent to the curve at (x_0, f(x_0)) cuts the x-axis at (x_1, 0).

2.1 Geometric viewpoint

Consider the diagram in Figure 2, showing a function f(x) with a simple root at x = x* whose value is required. Initial analysis has indicated that the root is approximately located at x = x_0. The aim of any numerical procedure is to provide a better estimate of the location of the root.

The basic premise of the Newton-Raphson method is the assumption that the curve in the close neighborhood of the simple root at x* is approximately a straight line. Hence if we draw the tangent to the curve at x_0, this tangent will intersect the x-axis at a point closer to x* than x_0 is. From the geometry of this diagram we see that

x_1 = x_0 - PQ
But from the right-angled triangle PQR we have

RQ / PQ = tan θ = f'(x_0),

and so

PQ = RQ / f'(x_0) = f(x_0) / f'(x_0),

giving

x_1 = x_0 - f(x_0) / f'(x_0).

If f(x) has a simple root near x_0, then a closer estimate of the root is x_1, where

x_1 = x_0 - f(x_0) / f'(x_0).

This formula can be used time and time again, giving rise to the following.

The Newton-Raphson iterative formula. If f(x) has a simple root near x_n, then a closer estimate of the root is x_{n+1}, where

x_{n+1} = x_n - f(x_n) / f'(x_n).

This is the Newton-Raphson iterative formula. The iteration is begun with an initial estimate of the root, x_0, and continued to find x_1, x_2, ... until a suitably accurate estimate of the position of the root is obtained.

2.2 Analytical viewpoint

We suppose that f is a continuously differentiable function on a given interval. Then, using Taylor's expansion near x,

f(x + h) = f(x) + h f'(x) + O(h^2),

and if we stop at the first-order linearization of the equation, we are looking for a small h such that

f(x + h) = 0 ≈ f(x) + h f'(x),

giving

h = -f(x) / f'(x),   so that   x + h = x - f(x) / f'(x).
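The iterative formula translates directly into a short routine. A minimal sketch in Python (the function, its derivative, the starting guess, and the tolerance below are illustrative choices, not part of the original text):

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Iterate x_{n+1} = x_n - f(x_n)/f'(x_n) until |f(x_n)| is small."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x -= fx / fprime(x)
    raise RuntimeError("Newton-Raphson did not converge")

# Example: the positive root of f(x) = x^2 - 2, i.e. sqrt(2).
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
print(root)  # close to 1.41421356...
```

Starting from x_0 = 1, the iterates 1.5, 1.41667, 1.414216, ... home in on √2 in a handful of steps.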
3 Selected examples

Example. f(x) = x - 2 + ln x has a root near x = 1.5. Use the Newton-Raphson formula to obtain a better estimate.

Solution. Here x_0 = 1.5, and

f(1.5) = 1.5 - 2 + ln 1.5 = -0.0945,
f'(x) = 1 + 1/x,   f'(1.5) = 1 + 1/1.5 = 5/3.

Hence, using the formula,

x_1 = 1.5 - (-0.0945)/(5/3) = 1.5567   (to 4 d.p.).

The Newton-Raphson formula can be used again, this time beginning with 1.5567 as our initial estimate:

x_2 = x_1 - f(x_1)/f'(x_1) = 1.5567 - (1.5567 - 2 + ln 1.5567)/(1 + 1/1.5567) = 1.5571   (to 4 d.p.).

This is in fact the correct value of the root to 4 d.p.

4 The Newton-Raphson method has fallen!

4.1 The Newton method can go bad

Once the Newton method catches scent of the root, it usually hunts it down with amazing speed. But since the method is based on local information, namely f(x_n) and f'(x_n), the Newton method's sense of smell is deficient. If the initial estimate is not close enough to the root, the Newton method may not converge, or may converge to the wrong root. The successive estimates of the Newton method may converge to the root too slowly, or may not converge at all.

4.2 Drawbacks

The Newton-Raphson method has some drawbacks.

1. It cannot handle multiple roots.
2. It has slow convergence compared with some newer techniques.
3. The solution may diverge near a point of inflection.
4. The solution might oscillate near local minima or maxima.
5. With a near-zero slope, the solution may diverge or reach a different root.
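The oscillation drawback is easy to reproduce. A small sketch (the cubic below is a standard illustrative choice, not taken from the original text): starting at x_0 = 0 on f(x) = x^3 - 2x + 2 traps the Newton-Raphson iteration in the 2-cycle 0, 1, 0, 1, ..., even though a real root exists near x ≈ -1.77.

```python
f = lambda x: x**3 - 2.0 * x + 2.0
fp = lambda x: 3.0 * x**2 - 2.0   # derivative of f

x = 0.0
iterates = []
for _ in range(6):
    x = x - f(x) / fp(x)   # Newton-Raphson step
    iterates.append(x)
print(iterates)  # -> [1.0, 0.0, 1.0, 0.0, 1.0, 0.0]
```

At x = 0 the step is -f(0)/f'(0) = -2/(-2) = 1, and at x = 1 it is -1/1 = -1, so the pair of points feeds back into itself forever.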
4.3 What is wrong with Newton-Raphson

The Newton-Raphson method does not always work. Consider the function defined by

f(x) = x / sqrt(|x|)  if x ≠ 0,   f(0) = 0.

It is easy to prove that f is continuous. The derivative of this function is

f'(x) = 1 / (2 sqrt(|x|)),   x ≠ 0.

If we choose any starting point other than the actual root, x_1 = a ≠ 0, then

x_2 = a - f(a)/f'(a) = a - 2a = -a.

It follows that

x_n = a if n is odd,   x_n = -a if n is even.

So the Newton-Raphson method fails completely for this function.

A more common occurrence is that Newton-Raphson works for some choices of starting point but not for others, and when it does work it does not necessarily take you to the closest root. Consider the function defined by

f(x) = sin x - x/2,   f'(x) = cos x - 1/2.

This function has three roots. For it, the Newton-Raphson method uses the iteration

x_{k+1} = x_k - (sin x_k - x_k/2) / (cos x_k - 0.5).

If we start far enough to the right, we quickly approach the rightmost root. But small changes in a starting point near x_1 = 1 (say x_1 = 1, 1.1, 1.01, ...) send the iterates to quite different roots.

This does not mean that the Newton-Raphson method is no good. Even today it is one of the most useful and powerful tools available for finding roots. But as we have seen, it can have problems. We need further analysis of how and why it works if we want to determine when we can use it safely and when we must
proceed with caution.

Example. Take f : R → R, x ↦ x^2 - x + 1, and x_0 = 1. As f'(x) = 2x - 1,

x_1 = 1 - f(1)/f'(1) = 1 - 1/1 = 0   and   x_2 = 0 - f(0)/f'(0) = 0 - 1/(-1) = 1.

It follows that

x_n = 1 if n is even,   x_n = 0 if n is odd.

Thus {x_n} does not converge. (Indeed, f has no real root at all, since its discriminant is negative.)

4.4 The Newton-Raphson method is not always applicable

Newton's method has one small flaw, though. To apply the method you have to be able to compute the derivative f'(x). At first you might think that this is not such a big deal. Almost any reasonable function that one can write down can be differentiated, so the derivative step doesn't look like a problem. The problem in practice is that functions come in many forms, and not all of these forms lend themselves to computing derivatives. Here are several different ways that functions can be defined.

1. The function is defined via a closed-form formula involving elementary functions:

f(x) = (e^x + x) / sin x.

2. The function is defined via an integral:

f(x) = ∫_1^x (sin t / t) dt.

3. The function is defined via a convergent power series:

f(x) = Σ_{n=0}^∞ (-1)^n x^n / (n + 1).

4. The function is the solution to a differential equation:

y'' + y y' = e^x,   y(0) = 1,   y'(0) = 0.

5. The function is defined recursively: for -10 < x < 10 its value is given in terms of the values of f at nearby points, and it is constant for x ≥ 10 and for x ≤ -10.

Only about the first two and a half of these ways produce functions whose derivatives can be readily computed. In the absence of derivative information we can deploy some alternative algorithms.
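One standard derivative-free alternative is the secant method, which replaces f'(x_n) by the slope of the chord through the last two iterates. A minimal sketch (the test function and starting pair are illustrative choices):

```python
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Secant method: approximate f'(x_n) by the finite-difference slope
    through the last two points, so no derivative is ever needed."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if abs(f1) < tol:
            return x1
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
        f0, f1 = f1, f(x1)
    raise RuntimeError("secant iteration did not converge")

# Works even when f is only available as a black box (e.g. an integral
# evaluated numerically); a closed form is used here just for illustration.
root = secant(lambda x: x**3 - x - 2.0, 1.0, 2.0)
```

Convergence is slightly slower than Newton's (order about 1.618 rather than 2), the usual price paid for not evaluating the derivative.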
5 Analysis and explanation

Let r be the actual (though unknown) value of a root that we are trying to approach, and let x_k be our latest guess, which we assume is pretty close to r. We can calculate f(x_k) and f'(x_k). While we do not know the value of r, the fact that it is a root implies that f(r) = 0. Since x_k is pretty close to r, f'(x_k) will be pretty close to the slope of the line from (x_k, f(x_k)) to (r, 0):

f'(x_k) ≈ (f(r) - f(x_k)) / (r - x_k) = -f(x_k) / (r - x_k).      (5.1)

We can solve this to get something that is pretty close to r:

r ≈ x_k - f(x_k) / f'(x_k).      (5.2)

As our last example showed, "pretty close" can sometimes be not close at all. Equation (5.1) shows that we are using an approximation to the derivative; equation (5.2) puts this approximation in the denominator, and there lies the crux of our problem. We need to know the size of the error in equation (5.2). We will use Lagrange's form of the remainder for the Taylor series to resolve this trouble. We use the equality

f(x) = f(a) + f'(a)(x - a) + (f''(c)/2!)(x - a)^2,      (5.3)

where c is some unknown constant between a and x. Equivalently,

(f(x) - f(a))/(x - a) - f'(a) = (f''(c)/2!)(x - a).      (5.4)

The error is precisely (f''(c)/2!)(x - a). Although we do not know the value of c, it may be possible to find bounds on f''(c) when c is between a and x, and thus find bounds on the error. We replace a with x_k and x with r, and then solve for r, keeping the (r - x_k)^2 term in the error:

0 = f(r) = f(x_k) + f'(x_k)(r - x_k) + (f''(c)/2)(r - x_k)^2,
r - x_k = -f(x_k)/f'(x_k) - (f''(c)/(2 f'(x_k)))(r - x_k)^2,
r = x_k - f(x_k)/f'(x_k) - (f''(c)/(2 f'(x_k)))(r - x_k)^2 = x_{k+1} - (f''(c)/(2 f'(x_k)))(r - x_k)^2.      (5.5)

Equation (5.5) gives us a relationship between r - x_{k+1} and r - x_k:

r - x_{k+1} = -(f''(c)/(2 f'(x_k)))(r - x_k)^2.      (5.6)

We shall get closer to r provided

|f''(c)/(2 f'(x_k))| |r - x_k| < 1,

or, equivalently,

|f''(c)| |r - x_k| < 2 |f'(x_k)|.
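Relation (5.6) predicts that each new error is roughly proportional to the square of the previous one. This is easy to observe numerically; a sketch (the function and starting point are illustrative choices):

```python
import math

r = math.sqrt(2.0)             # known root of f(x) = x^2 - 2
x = 3.0
errors = []
for _ in range(5):
    x = x - (x * x - 2.0) / (2.0 * x)   # Newton-Raphson step
    errors.append(abs(x - r))

# Per (5.6), the new error is about |f''/(2 f')| e^2 = e^2 / (2 x_k) here,
# so each error should be smaller than the square of the previous one.
print(errors)
```

The printed errors shrink from about 4·10^-1 to below 10^-13 in five steps, each roughly the square of its predecessor.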
6 Convergence conditions

Theorem. Let r be a root of f(x) = 0, where f is a C^2 function on an interval containing r, and suppose that f'(r) ≠ 0. Then Newton's iteration will converge to r if the starting point x_0 is close enough to r.

This theorem ensures that Newton's method will always converge if the initial point is sufficiently close to the root and if the root is not singular, that is, if f'(r) is nonzero. The process thus has the local convergence property. A more constructive theorem was given by Kantorovich; we give it in the particular case of a function of a single variable x.

Theorem (Kantorovich). Let f be a C^2 numerical function on an open interval I of the real line, and let x_0 be a point in this interval. If there exist finite positive constants m_0, M_0, K_0 such that

|1/f'(x_0)| < m_0,   |f(x_0)/f'(x_0)| < M_0,   |f''(x)| < K_0 on I,

and if

h_0 = 2 m_0 M_0 K_0 ≤ 1,

then Newton's iteration converges to a root r of f(x) = 0, and

|x_n - r| < (1/2^{n-1}) M_0 h_0^{2^n - 1}.

This theorem gives sufficient conditions to ensure the existence of a root and the convergence of Newton's process. Moreover, if h_0 < 1, the last inequality shows that the convergence is quadratic (the number of correct digits is doubled at each iteration). Note that if the starting point x_0 tends to the root r, the constant M_0 tends to zero and h_0 becomes smaller than 1, so the local convergence theorem is a consequence of Kantorovich's theorem.

7 Improvements

We will focus on these things:

- Some improvements of the Newton-Raphson method.
- Other iterations and methods.
- Preparation for generalization.

7.1 Cubic iteration

Newton's iteration may be seen as a first-order, or linearization, method. It is possible to go one step further and write the Taylor expansion of f to a higher order:

f(x + h) = f(x) + h f'(x) + (h^2/2) f''(x) + O(h^3),

and we look for h such that

f(x + h) = 0 ≈ f(x) + h f'(x) + (h^2/2) f''(x).

We take the smallest solution for h (we have to suppose that f'(x) and f''(x) are nonzero):

h = -(f'(x)/f''(x)) (1 - sqrt(1 - 2 f(x) f''(x)/f'(x)^2)).

It is not necessary to compute the square root, because if f(x) is small, using the expansion

1 - sqrt(1 - α) = α/2 + α^2/8 + O(α^3),

h becomes

h = -(f(x)/f'(x)) (1 + f(x) f''(x)/(2 f'(x)^2) + ...).

The first attempt to use the second-order expansion is due to the astronomer E. Halley in 1694.

7.2 Householder's iteration

The previous expression for h allows us to derive the following cubic iteration (the number of correct digits triples at each iteration), starting with x_0:

x_{n+1} = x_n - (f(x_n)/f'(x_n)) (1 + f(x_n) f''(x_n)/(2 f'(x_n)^2)).

It can be used efficiently to compute the inverse or the square root of a number. Another similar cubic iteration is given by

x_{n+1} = x_n - f(x_n) f'(x_n) / (f'(x_n)^2 - f(x_n) f''(x_n)/2),

sometimes known as Halley's method. We may also write it as

x_{n+1} = x_n - (f(x_n)/f'(x_n)) (1 - f(x_n) f''(x_n)/(2 f'(x_n)^2))^{-1}.

Note that if we replace (1 - α)^{-1} by 1 + α + O(α^2), we retrieve Householder's iteration.
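Halley's cubic iteration can be sketched in a few lines (the test problem, computing √2 from f(x) = x^2 - 2, follows the remark that this family of iterations suits square roots; names and tolerances are illustrative):

```python
def halley(f, fp, fpp, x0, tol=1e-12, max_iter=30):
    """Halley's method: x_{n+1} = x_n - 2 f f' / (2 f'^2 - f f''),
    an equivalent rearrangement of the formula above."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        fpx, fppx = fp(x), fpp(x)
        x = x - 2.0 * fx * fpx / (2.0 * fpx * fpx - fx * fppx)
    raise RuntimeError("Halley iteration did not converge")

# Square root of 2 via f(x) = x^2 - 2; the digit count roughly triples per step.
root = halley(lambda x: x * x - 2.0, lambda x: 2.0 * x, lambda x: 2.0, x0=1.0)
```

From x_0 = 1 the iterates are 1.4, 1.4142132..., and then √2 to machine precision, illustrating the cubic convergence.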
8 High order iteration

8.1 Householder's methods

Under some regularity conditions on f and its derivatives, Householder gave the general iteration

x_{n+1} = x_n + (p + 1) (1/f)^{(p)}(x_n) / (1/f)^{(p+1)}(x_n),

where p is an integer and (1/f)^{(p)} is the derivative of order p of the reciprocal of the function f. This iteration has convergence of order p + 2. For example, p = 0 has quadratic convergence (order 2) and the formula gives back Newton's iteration, while p = 1 has cubic convergence (order 3) and gives again Halley's method. Just as for Newton's method, a good starting point is required to ensure convergence. Using the iteration with p = 2 gives the following iteration, which has quartic convergence (order 4):

x_{n+1} = x_n - f(x_n) (f'(x_n)^2 - f(x_n) f''(x_n)/2) / (f'(x_n)^3 - f(x_n) f'(x_n) f''(x_n) + f'''(x_n) f(x_n)^2 / 6).

8.2 Modified methods

Another idea is to write

x_{n+1} = x_n + h_n + a_n^(2) h_n^2/2! + a_n^(3) h_n^3/3! + ...,

where h_n = -f(x_n)/f'(x_n) is given by the simple Newton iteration, and a_n^(2), a_n^(3), ... are real parameters which we estimate in order to minimize the value of f(x_{n+1}):

f(x_{n+1}) = f(x_n + h_n + a_n^(2) h_n^2/2! + a_n^(3) h_n^3/3! + ...).

We assume that f is regular enough and that h_n + a_n^(2) h_n^2/2! + a_n^(3) h_n^3/3! + ... is small. Hence, using the expansion of f near x_n and the fact that f(x_n) + h_n f'(x_n) = 0, we have

f(x_{n+1}) = (a_n^(2) f'(x_n) + f''(x_n)) h_n^2/2! + (a_n^(3) f'(x_n) + 3 a_n^(2) f''(x_n) + f'''(x_n)) h_n^3/3! + O(h_n^4).
A good choice for the a_n^(i) is clearly one that cancels as many terms as possible in the previous expansion, so we impose

a_n^(2) = -f''(x_n)/f'(x_n),
a_n^(3) = -(f'(x_n) f'''(x_n) - 3 f''(x_n)^2)/f'(x_n)^2,
a_n^(4) = -(f'(x_n)^2 f''''(x_n) - 10 f'(x_n) f''(x_n) f'''(x_n) + 15 f''(x_n)^3)/f'(x_n)^3,
a_n^(5) = ...

The formal values of the a_n^(i) may be computed for much larger values of i. Finally, the general iteration is

x_{n+1} = x_n + h_n (1 + a_n^(2) h_n/2! + a_n^(3) h_n^2/3! + a_n^(4) h_n^3/4! + ...)
        = x_n - (f(x_n)/f'(x_n)) (1 + f''(x_n) f(x_n)/(2! f'(x_n)^2) + ((3 f''(x_n)^2 - f'(x_n) f'''(x_n))/(3! f'(x_n)^2)) (f(x_n)/f'(x_n))^2 + ...).

For example, if we stop at a_n^(3) and set a_n^(4) = a_n^(5) = ... = 0, we have the helpful quartic modified iteration (note that this iteration is different from the previous Householder quartic method):

x_{n+1} = x_n - (f(x_n)/f'(x_n)) (1 + f(x_n) f''(x_n)/(2! f'(x_n)^2) + (3 f''(x_n)^2 - f'(x_n) f'''(x_n)) f(x_n)^2/(3! f'(x_n)^4)),

and if we omit a_n^(3), we retrieve Householder's cubic iteration. It is also possible to find the expressions for a_n^(4), a_n^(5), a_n^(6), a_n^(7), ..., and define quintic, sextic, septic, octic, ... iterations.

9 Generalization

Having indicated some of the difficulties with the Newton-Raphson method, we now show when the method can be applied.

Theorem. Let f : R → R be a continuous function which has precisely one zero, a, in the open interval (u, v). If f'' is continuous on (u, v) and f'(a) ≠ 0, then there is a δ > 0 such that if a - δ < x_0 < a + δ, then {x_n}, the sequence of successive approximations generated by the Newton-Raphson method, converges to a.

Proof. Since f has a second derivative on (u, v), f' is continuous on (u, v). Since u < a < v and f'(a) ≠ 0, there is a µ > 0 such that u < a - µ < a + µ < v and f'(x) ≠ 0 whenever a - µ < x < a + µ. The function

g : (a - µ, a + µ) → R,   x ↦ x - f(x)/f'(x)
is continuously differentiable, because f has a continuous second derivative and f'(x) ≠ 0 on (a - µ, a + µ). Moreover, g(a) = a, and

g'(x) = 1 - (f'(x)^2 - f(x) f''(x))/f'(x)^2 = f(x) f''(x)/f'(x)^2,

whence g'(a) = 0. Since g' is continuous, there is a δ with 0 < δ ≤ µ and |g'(x)| < 1/2 whenever a - δ < x < a + δ. Take such an x. Then

|g(x) - a| = |g(x) - g(a)| = |g'(c)| |x - a| ≤ (1/2) δ < δ

for some c with a - δ < c < a + δ. Thus g(x) ∈ (a - δ, a + δ), so that we may regard g as a function

g : (a - δ, a + δ) → (a - δ, a + δ).

Thus a - δ < x_n < a + δ for every term x_n of the sequence {x_n}, where x_0 ∈ (a - δ, a + δ) and x_{n+1} = g(x_n), n ∈ N. For x < y in (a - δ, a + δ),

|g(x) - g(y)| = |g'(c)| |x - y| ≤ (1/2) |x - y|

for some x < c < y. In particular,

|x_{n+2} - x_{n+1}| = |g(x_{n+1}) - g(x_n)| ≤ (1/2) |x_{n+1} - x_n|.

It follows that for all j ∈ N,

|x_{n+j+1} - x_{n+j}| ≤ (1/2)^j |x_{n+1} - x_n|.

Hence

|x_{n+k} - x_n| ≤ Σ_{j=0}^{k-1} |x_{n+j+1} - x_{n+j}| ≤ Σ_{j=0}^{k-1} (1/2)^j |x_{n+1} - x_n|
             = ((1 - (1/2)^k)/(1 - 1/2)) |x_{n+1} - x_n| < 2 |x_{n+1} - x_n| ≤ (1/2)^{n-1} |x_1 - x_0| = (1/2)^{n-1} |g(x_0) - x_0|.

If g(x_0) = x_0, then f(x_0) = 0 and so, by hypothesis, x_0 = a. Otherwise |g(x_0) - x_0| ≠ 0, and the above calculation shows that {x_n} is a Cauchy sequence. Hence it has a limit b. Then

g(b) = g(lim_{n→∞} x_n) = lim_{n→∞} g(x_n) = lim_{n→∞} x_{n+1} = b.

Since g(b) = b, f(b) = 0 and so, by hypothesis, b = a.
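For a concrete f the quantities in this proof are easy to inspect. A sketch for f(x) = x^2 - 2 (an illustrative choice): the derivative of the iteration function, g'(x) = f(x) f''(x)/f'(x)^2, vanishes at the root and stays well below 1 on a surrounding interval, which is exactly the contraction property the proof relies on.

```python
f = lambda x: x * x - 2.0
fp = lambda x: 2.0 * x    # f'
fpp = lambda x: 2.0       # f''

# Derivative of the Newton iteration function g(x) = x - f(x)/f'(x):
gp = lambda x: f(x) * fpp(x) / fp(x) ** 2

root = 2.0 ** 0.5
print(gp(root))                       # essentially zero: g'(a) = 0 at the root
print([gp(root + d) for d in (-0.3, -0.1, 0.1, 0.3)])  # all well inside (-1/2, 1/2)
```

So for this f any δ up to about 0.3 already gives the contracting interval demanded by the proof.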
10 Convergence of the Newton-Raphson method

Like all fixed point iteration methods, Newton's method may or may not converge in the vicinity of a root. In addition, the convergence of fixed point iteration methods is guaranteed only if |g'(x)| < 1 in some neighborhood of the root. Even Newton's method cannot always guarantee that condition. When the condition is satisfied, Newton's method converges, and it also converges faster than almost any other alternative iteration scheme based on other ways of converting the original f(x) to a function with a fixed point.

In order to get a handle on why Newton's method is unusually effective for a fixed point iteration, we start with a couple of definitions.

Definition. A sequence of fixed-point iterates p_n = g(p_{n-1}) converges linearly to a limiting value p if there exist a constant 0 < λ < 1 and a positive integer N such that

|p_{n+1} - p| < λ |p_n - p|,   n > N.

Definition. A sequence of fixed-point iterates p_n = g(p_{n-1}) converges quadratically to a limiting value p if there exist a constant λ > 0 and a positive integer N such that

|p_{n+1} - p| < λ |p_n - p|^2,   n > N.

Both of these definitions state that the distance from p_n to p shrinks as we progress through the sequence. The shrinkage is much more dramatic in the second case, due to the presence of the square term. The fixed point theorem we saw in the last lecture is sufficient to guarantee linear convergence provided that certain simple conditions on g(x) are satisfied. Unfortunately, that theorem does not guarantee quadratic convergence. For that we need something special.

Theorem (Quadratic Convergence Theorem). Let p be a fixed point of a function g(x). If g'(p) = 0 and g''(x) is continuous with |g''(x)| < M on an open interval (p - δ, p + δ), then any iterated sequence starting from a p_0 ∈ (p - δ, p + δ) will converge quadratically to p. Moreover, for large n we will have

|p_{n+1} - p| < (M/2) |p_n - p|^2.
Proof. Expanding g(x) in a first-order Taylor polynomial about x = p gives

g(x) = g(p) + g'(p)(x - p) + (g''(ξ)/2)(x - p)^2,

where ξ is some point between x and p. Noting that p is a fixed point and that g'(p) = 0 gives

g(x) = p + (g''(ξ)/2)(x - p)^2.

Substituting p_n for x and rearranging gives

p_{n+1} - p = (g''(ξ_n)/2)(p_n - p)^2,

where ξ_n is a point between p and p_n. Since g'(p) = 0 and g' is continuous near p, we can conclude that |g'(x)| < 1 for all x in some neighborhood of p. If we choose δ to make the interval (p - δ, p + δ) fit inside that neighborhood, we can use the original fixed point theorem to conclude that the sequence of points p_n converges to p. Since the points ξ_n are trapped between p and p_n, they also converge to p. Thus |g''(ξ_n)| < M for n large enough. It follows that

|p_{n+1} - p| < (M/2) |p_n - p|^2

for n large enough, and the sequence of points p_n converges quadratically to p.

The Newton-Raphson method converges quadratically. The Newton-Raphson iteration function

g(x) = x - f(x)/f'(x)

satisfies the condition g'(p) = 0 at the fixed point. In cases when it also satisfies the restriction that |g''(x)| < M on an open interval (p - δ, p + δ), we have enough to guarantee quadratic convergence of the Newton-Raphson sequence.

11 Newton's method for several variables

Newton's method may also be used to find a root of a system of two or more nonlinear equations

f(x, y) = 0,
g(x, y) = 0,
where f and g are continuously differentiable functions on a given domain. Using Taylor's expansion of the two functions near (x, y), we find

f(x + h, y + k) = f(x, y) + h ∂f/∂x + k ∂f/∂y + O(h^2 + k^2),
g(x + h, y + k) = g(x, y) + h ∂g/∂x + k ∂g/∂y + O(h^2 + k^2),

and if we keep only the first-order terms, we are looking for a couple (h, k) such that

f(x + h, y + k) = 0 ≈ f(x, y) + h ∂f/∂x + k ∂f/∂y,
g(x + h, y + k) = 0 ≈ g(x, y) + h ∂g/∂x + k ∂g/∂y.

This is equivalent to the linear system

J(x, y) (h, k)^T = -(f(x, y), g(x, y))^T,

where J(x, y) is the Jacobian matrix with rows (∂f/∂x, ∂f/∂y) and (∂g/∂x, ∂g/∂y). This suggests defining the new process

(x_{n+1}, y_{n+1})^T = (x_n, y_n)^T - J^{-1}(x_n, y_n) (f(x_n, y_n), g(x_n, y_n))^T,

starting with an initial guess (x_0, y_0). Under certain conditions (which are not so easy to check, and this is again the main disadvantage of the method), it is possible to show that this process converges to a root of the system. The convergence remains quadratic.

Example. We are looking for a root near (x_0, y_0) = (0.6, 0.6) of the system

f(x, y) = x^3 - 3xy^2 - 1 = 0,
g(x, y) = 3x^2 y - y^3 = 0.

Here the Jacobian and its inverse become

J(x_n, y_n) = 3 [ x_n^2 - y_n^2   -2 x_n y_n  ]
                [ 2 x_n y_n        x_n^2 - y_n^2 ],

J^{-1}(x_n, y_n) = (1/(3 (x_n^2 + y_n^2)^2)) [ x_n^2 - y_n^2   2 x_n y_n  ]
                                              [ -2 x_n y_n      x_n^2 - y_n^2 ],

and the successive iterates (x_1, y_1), (x_2, y_2), ..., (x_5, y_5) converge rapidly to a root of the system.
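The process for this particular system can be sketched directly, using the closed-form inverse Jacobian (the starting point is the one used in the example; the helper name and step count are illustrative):

```python
def newton2(x, y, steps=20):
    """Newton's method for the system f = x^3 - 3xy^2 - 1 = 0,
    g = 3x^2 y - y^3 = 0, with the inverse Jacobian written out by hand."""
    for _ in range(steps):
        f = x**3 - 3.0 * x * y**2 - 1.0
        g = 3.0 * x**2 * y - y**3
        a, b = x**2 - y**2, 2.0 * x * y      # entries of J / 3
        d = 3.0 * (x**2 + y**2) ** 2         # common denominator of J^{-1}
        x, y = x - (a * f + b * g) / d, y - (-b * f + a * g) / d
    return x, y

x, y = newton2(0.6, 0.6)  # starting guess from the example
```

After the loop both residuals f(x, y) and g(x, y) are zero to machine precision, confirming that the iterates have landed on one of the system's roots.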
Depending on your initial guess, Newton's process may converge to one of the three roots of the system,

(-1/2, sqrt(3)/2),   (-1/2, -sqrt(3)/2),   (1, 0),

and for some values of (x_0, y_0) the convergence of the process may be tricky! The study of the influence of this initial guess leads to aesthetic fractal pictures. Cubic convergence also exists for systems of equations in several variables: Chebyshev's method.

12 Extension to systems of equations

For a system

f_1(x_1, ..., x_n) = 0,
...
f_m(x_1, ..., x_n) = 0,

or f(x) = 0 in vector form, the Newton-Raphson method becomes

x_{n+1} = x_n - J^{-1}(x_n) f(x_n),   n = 0, 1, ...,

where J is the Jacobian matrix of f.

The end
More informationExtended Introduction to Computer Science CS1001.py. Lecture 8 part A: Finding Zeroes of Real Functions: Newton Raphson Iteration
Extended Introduction to Computer Science CS1001.py Lecture 8 part A: Finding Zeroes of Real Functions: Newton Raphson Iteration Instructors: Benny Chor, Amir Rubinstein Teaching Assistants: Yael Baran,
More informationInfinite series, improper integrals, and Taylor series
Chapter 2 Infinite series, improper integrals, and Taylor series 2. Introduction to series In studying calculus, we have explored a variety of functions. Among the most basic are polynomials, i.e. functions
More informationNumerical Methods. Root Finding
Numerical Methods Solving Non Linear 1-Dimensional Equations Root Finding Given a real valued function f of one variable (say ), the idea is to find an such that: f() 0 1 Root Finding Eamples Find real
More informationChapter 8B - Trigonometric Functions (the first part)
Fry Texas A&M University! Spring 2016! Math 150 Notes! Section 8B-I! Page 79 Chapter 8B - Trigonometric Functions (the first part) Recall from geometry that if 2 corresponding triangles have 2 angles of
More informationMATH 2053 Calculus I Review for the Final Exam
MATH 05 Calculus I Review for the Final Exam (x+ x) 9 x 9 1. Find the limit: lim x 0. x. Find the limit: lim x + x x (x ).. Find lim x (x 5) = L, find such that f(x) L < 0.01 whenever 0 < x
More informationTaylor and Maclaurin Series. Approximating functions using Polynomials.
Taylor and Maclaurin Series Approximating functions using Polynomials. Approximating f x = e x near x = 0 In order to approximate the function f x = e x near x = 0, we can use the tangent line (The Linear
More informationMAT137 Calculus! Lecture 45
official website http://uoft.me/mat137 MAT137 Calculus! Lecture 45 Today: Taylor Polynomials Taylor Series Next: Taylor Series Power Series Definition (Power Series) A power series is a series of the form
More informationChapter 8: Taylor s theorem and L Hospital s rule
Chapter 8: Taylor s theorem and L Hospital s rule Theorem: [Inverse Mapping Theorem] Suppose that a < b and f : [a, b] R. Given that f (x) > 0 for all x (a, b) then f 1 is differentiable on (f(a), f(b))
More informationSTOP, a i+ 1 is the desired root. )f(a i) > 0. Else If f(a i+ 1. Set a i+1 = a i+ 1 and b i+1 = b Else Set a i+1 = a i and b i+1 = a i+ 1
53 17. Lecture 17 Nonlinear Equations Essentially, the only way that one can solve nonlinear equations is by iteration. The quadratic formula enables one to compute the roots of p(x) = 0 when p P. Formulas
More informationLec7p1, ORF363/COS323
Lec7 Page 1 Lec7p1, ORF363/COS323 This lecture: One-dimensional line search (root finding and minimization) Bisection Newton's method Secant method Introduction to rates of convergence Instructor: Amir
More information1 + lim. n n+1. f(x) = x + 1, x 1. and we check that f is increasing, instead. Using the quotient rule, we easily find that. 1 (x + 1) 1 x (x + 1) 2 =
Chapter 5 Sequences and series 5. Sequences Definition 5. (Sequence). A sequence is a function which is defined on the set N of natural numbers. Since such a function is uniquely determined by its values
More informationExam 3 MATH Calculus I
Trinity College December 03, 2015 MATH 131-01 Calculus I By signing below, you attest that you have neither given nor received help of any kind on this exam. Signature: Printed Name: Instructions: Show
More informationNumerical Methods. King Saud University
Numerical Methods King Saud University Aims In this lecture, we will... find the approximate solutions of derivative (first- and second-order) and antiderivative (definite integral only). Numerical Differentiation
More information1 Lecture 25: Extreme values
1 Lecture 25: Extreme values 1.1 Outline Absolute maximum and minimum. Existence on closed, bounded intervals. Local extrema, critical points, Fermat s theorem Extreme values on a closed interval Rolle
More informationMTH4101 CALCULUS II REVISION NOTES. 1. COMPLEX NUMBERS (Thomas Appendix 7 + lecture notes) ax 2 + bx + c = 0. x = b ± b 2 4ac 2a. i = 1.
MTH4101 CALCULUS II REVISION NOTES 1. COMPLEX NUMBERS (Thomas Appendix 7 + lecture notes) 1.1 Introduction Types of numbers (natural, integers, rationals, reals) The need to solve quadratic equations:
More information1.4 Techniques of Integration
.4 Techniques of Integration Recall the following strategy for evaluating definite integrals, which arose from the Fundamental Theorem of Calculus (see Section.3). To calculate b a f(x) dx. Find a function
More informationPower series and Taylor series
Power series and Taylor series D. DeTurck University of Pennsylvania March 29, 2018 D. DeTurck Math 104 002 2018A: Series 1 / 42 Series First... a review of what we have done so far: 1 We examined series
More informationNumerical Analysis: Solving Nonlinear Equations
Numerical Analysis: Solving Nonlinear Equations Mirko Navara http://cmp.felk.cvut.cz/ navara/ Center for Machine Perception, Department of Cybernetics, FEE, CTU Karlovo náměstí, building G, office 104a
More informationAlgebra Exam. Solutions and Grading Guide
Algebra Exam Solutions and Grading Guide You should use this grading guide to carefully grade your own exam, trying to be as objective as possible about what score the TAs would give your responses. Full
More informationWe are going to discuss what it means for a sequence to converge in three stages: First, we define what it means for a sequence to converge to zero
Chapter Limits of Sequences Calculus Student: lim s n = 0 means the s n are getting closer and closer to zero but never gets there. Instructor: ARGHHHHH! Exercise. Think of a better response for the instructor.
More informationLet s Get Series(ous)
Department of Mathematics, Computer Science, and Statistics Bloomsburg University Bloomsburg, Pennsylvania 785 Let s Get Series(ous) Summary Presenting infinite series can be (used to be) a tedious and
More informationWe have been going places in the car of calculus for years, but this analysis course is about how the car actually works.
Analysis I We have been going places in the car of calculus for years, but this analysis course is about how the car actually works. Copier s Message These notes may contain errors. In fact, they almost
More informationDefinitions & Theorems
Definitions & Theorems Math 147, Fall 2009 December 19, 2010 Contents 1 Logic 2 1.1 Sets.................................................. 2 1.2 The Peano axioms..........................................
More informationNumerical Methods of Approximation
Contents 31 Numerical Methods of Approximation 31.1 Polynomial Approximations 2 31.2 Numerical Integration 28 31.3 Numerical Differentiation 58 31.4 Nonlinear Equations 67 Learning outcomes In this Workbook
More informationINTRODUCTION TO NUMERICAL ANALYSIS
INTRODUCTION TO NUMERICAL ANALYSIS Cho, Hyoung Kyu Department of Nuclear Engineering Seoul National University 3. SOLVING NONLINEAR EQUATIONS 3.1 Background 3.2 Estimation of errors in numerical solutions
More informationLecture 32: Taylor Series and McLaurin series We saw last day that some functions are equal to a power series on part of their domain.
Lecture 32: Taylor Series and McLaurin series We saw last day that some functions are equal to a power series on part of their domain. For example f(x) = 1 1 x = 1 + x + x2 + x 3 + = ln(1 + x) = x x2 2
More informationMATH 1902: Mathematics for the Physical Sciences I
MATH 1902: Mathematics for the Physical Sciences I Dr Dana Mackey School of Mathematical Sciences Room A305 A Email: Dana.Mackey@dit.ie Dana Mackey (DIT) MATH 1902 1 / 46 Module content/assessment Functions
More informationYURI LEVIN, MIKHAIL NEDIAK, AND ADI BEN-ISRAEL
Journal of Comput. & Applied Mathematics 139(2001), 197 213 DIRECT APPROACH TO CALCULUS OF VARIATIONS VIA NEWTON-RAPHSON METHOD YURI LEVIN, MIKHAIL NEDIAK, AND ADI BEN-ISRAEL Abstract. Consider m functions
More informationNumerical Methods. Roots of Equations
Roots of Equations by Norhayati Rosli & Nadirah Mohd Nasir Faculty of Industrial Sciences & Technology norhayati@ump.edu.my, nadirah@ump.edu.my Description AIMS This chapter is aimed to compute the root(s)
More information3.1 Introduction. Solve non-linear real equation f(x) = 0 for real root or zero x. E.g. x x 1.5 =0, tan x x =0.
3.1 Introduction Solve non-linear real equation f(x) = 0 for real root or zero x. E.g. x 3 +1.5x 1.5 =0, tan x x =0. Practical existence test for roots: by intermediate value theorem, f C[a, b] & f(a)f(b)
More informationTaylor series. Chapter Introduction From geometric series to Taylor polynomials
Chapter 2 Taylor series 2. Introduction The topic of this chapter is find approximations of functions in terms of power series, also called Taylor series. Such series can be described informally as infinite
More informationGoals for This Lecture:
Goals for This Lecture: Learn the Newton-Raphson method for finding real roots of real functions Learn the Bisection method for finding real roots of a real function Look at efficient implementations of
More informationScientific Computing. Roots of Equations
ECE257 Numerical Methods and Scientific Computing Roots of Equations Today s s class: Roots of Equations Polynomials Polynomials A polynomial is of the form: ( x) = a 0 + a 1 x + a 2 x 2 +L+ a n x n f
More informationMATH1013 Calculus I. Revision 1
MATH1013 Calculus I Revision 1 Edmund Y. M. Chiang Department of Mathematics Hong Kong University of Science & Technology November 27, 2014 2013 1 Based on Briggs, Cochran and Gillett: Calculus for Scientists
More informationMATH 1231 MATHEMATICS 1B CALCULUS. Section 5: - Power Series and Taylor Series.
MATH 1231 MATHEMATICS 1B CALCULUS. Section 5: - Power Series and Taylor Series. The objective of this section is to become familiar with the theory and application of power series and Taylor series. By
More information8.5 Taylor Polynomials and Taylor Series
8.5. TAYLOR POLYNOMIALS AND TAYLOR SERIES 50 8.5 Taylor Polynomials and Taylor Series Motivating Questions In this section, we strive to understand the ideas generated by the following important questions:
More information8.7 Taylor s Inequality Math 2300 Section 005 Calculus II. f(x) = ln(1 + x) f(0) = 0
8.7 Taylor s Inequality Math 00 Section 005 Calculus II Name: ANSWER KEY Taylor s Inequality: If f (n+) is continuous and f (n+) < M between the center a and some point x, then f(x) T n (x) M x a n+ (n
More informationSection 4.2: The Mean Value Theorem
Section 4.2: The Mean Value Theorem Before we continue with the problem of describing graphs using calculus we shall briefly pause to examine some interesting applications of the derivative. In previous
More informationLecture 9: Taylor Series
Math 8 Instructor: Padraic Bartlett Lecture 9: Taylor Series Week 9 Caltech 212 1 Taylor Polynomials and Series When we first introduced the idea of the derivative, one of the motivations we offered was
More informationLast Update: March 1 2, 201 0
M ath 2 0 1 E S 1 W inter 2 0 1 0 Last Update: March 1 2, 201 0 S eries S olutions of Differential Equations Disclaimer: This lecture note tries to provide an alternative approach to the material in Sections
More informationIntroduction to Numerical Analysis
Introduction to Numerical Analysis S. Baskar and S. Sivaji Ganesh Department of Mathematics Indian Institute of Technology Bombay Powai, Mumbai 400 076. Introduction to Numerical Analysis Lecture Notes
More informationIntro to Scientific Computing: How long does it take to find a needle in a haystack?
Intro to Scientific Computing: How long does it take to find a needle in a haystack? Dr. David M. Goulet Intro Binary Sorting Suppose that you have a detector that can tell you if a needle is in a haystack,
More informationMath 113 (Calculus 2) Exam 4
Math 3 (Calculus ) Exam 4 November 0 November, 009 Sections 0, 3 7 Name Student ID Section Instructor In some cases a series may be seen to converge or diverge for more than one reason. For such problems
More informationMATH 100 and MATH 180 Learning Objectives Session 2010W Term 1 (Sep Dec 2010)
Course Prerequisites MATH 100 and MATH 180 Learning Objectives Session 2010W Term 1 (Sep Dec 2010) As a prerequisite to this course, students are required to have a reasonable mastery of precalculus mathematics
More informationSection 1.4 Tangents and Velocity
Math 132 Tangents and Velocity Section 1.4 Section 1.4 Tangents and Velocity Tangent Lines A tangent line to a curve is a line that just touches the curve. In terms of a circle, the definition is very
More informationMain topics for the First Midterm Exam
Main topics for the First Midterm Exam The final will cover Sections.-.0, 2.-2.5, and 4.. This is roughly the material from first three homeworks and three quizzes, in addition to the lecture on Monday,
More informationCHAPTER 10 Zeros of Functions
CHAPTER 10 Zeros of Functions An important part of the maths syllabus in secondary school is equation solving. This is important for the simple reason that equations are important a wide range of problems
More informationAP Calculus Testbank (Chapter 9) (Mr. Surowski)
AP Calculus Testbank (Chapter 9) (Mr. Surowski) Part I. Multiple-Choice Questions n 1 1. The series will converge, provided that n 1+p + n + 1 (A) p > 1 (B) p > 2 (C) p >.5 (D) p 0 2. The series
More informationSequences. Chapter 3. n + 1 3n + 2 sin n n. 3. lim (ln(n + 1) ln n) 1. lim. 2. lim. 4. lim (1 + n)1/n. Answers: 1. 1/3; 2. 0; 3. 0; 4. 1.
Chapter 3 Sequences Both the main elements of calculus (differentiation and integration) require the notion of a limit. Sequences will play a central role when we work with limits. Definition 3.. A Sequence
More informationChapter 11 - Sequences and Series
Calculus and Analytic Geometry II Chapter - Sequences and Series. Sequences Definition. A sequence is a list of numbers written in a definite order, We call a n the general term of the sequence. {a, a
More informationSeries Solutions. 8.1 Taylor Polynomials
8 Series Solutions 8.1 Taylor Polynomials Polynomial functions, as we have seen, are well behaved. They are continuous everywhere, and have continuous derivatives of all orders everywhere. It also turns
More informationWeek 2 Techniques of Integration
Week Techniques of Integration Richard Earl Mathematical Institute, Oxford, OX LB, October Abstract Integration by Parts. Substitution. Rational Functions. Partial Fractions. Trigonometric Substitutions.
More informationARE202A, Fall 2005 CONTENTS. 1. Graphical Overview of Optimization Theory (cont) Separating Hyperplanes 1
AREA, Fall 5 LECTURE #: WED, OCT 5, 5 PRINT DATE: OCTOBER 5, 5 (GRAPHICAL) CONTENTS 1. Graphical Overview of Optimization Theory (cont) 1 1.4. Separating Hyperplanes 1 1.5. Constrained Maximization: One
More informationSyllabus for BC Calculus
Syllabus for BC Calculus Course Overview My students enter BC Calculus form an Honors Precalculus course which is extremely rigorous and we have 90 minutes per day for 180 days, so our calculus course
More informationGENG2140, S2, 2012 Week 7: Curve fitting
GENG2140, S2, 2012 Week 7: Curve fitting Curve fitting is the process of constructing a curve, or mathematical function, f(x) that has the best fit to a series of data points Involves fitting lines and
More informationNewton-Raphson Type Methods
Int. J. Open Problems Compt. Math., Vol. 5, No. 2, June 2012 ISSN 1998-6262; Copyright c ICSRS Publication, 2012 www.i-csrs.org Newton-Raphson Type Methods Mircea I. Cîrnu Department of Mathematics, Faculty
More informationName: AK-Nummer: Ergänzungsprüfung January 29, 2016
INSTRUCTIONS: The test has a total of 32 pages including this title page and 9 questions which are marked out of 10 points; ensure that you do not omit a page by mistake. Please write your name and AK-Nummer
More informationTaylor and Maclaurin Series. Approximating functions using Polynomials.
Taylor and Maclaurin Series Approximating functions using Polynomials. Approximating f x = e x near x = 0 In order to approximate the function f x = e x near x = 0, we can use the tangent line (The Linear
More information5 Finding roots of equations
Lecture notes for Numerical Analysis 5 Finding roots of equations Topics:. Problem statement. Bisection Method 3. Newton s Method 4. Fixed Point Iterations 5. Systems of equations 6. Notes and further
More informationAnswer Key 1973 BC 1969 BC 24. A 14. A 24. C 25. A 26. C 27. C 28. D 29. C 30. D 31. C 13. C 12. D 12. E 3. A 32. B 27. E 34. C 14. D 25. B 26.
Answer Key 969 BC 97 BC. C. E. B. D 5. E 6. B 7. D 8. C 9. D. A. B. E. C. D 5. B 6. B 7. B 8. E 9. C. A. B. E. D. C 5. A 6. C 7. C 8. D 9. C. D. C. B. A. D 5. A 6. B 7. D 8. A 9. D. E. D. B. E. E 5. E.
More informationRoot Finding Convergence Analysis
Root Finding Convergence Analysis Justin Ross & Matthew Kwitowski November 5, 2012 There are many different ways to calculate the root of a function. Some methods are direct and can be done by simply solving
More informationSECTION A. f(x) = ln(x). Sketch the graph of y = f(x), indicating the coordinates of any points where the graph crosses the axes.
SECTION A 1. State the maximal domain and range of the function f(x) = ln(x). Sketch the graph of y = f(x), indicating the coordinates of any points where the graph crosses the axes. 2. By evaluating f(0),
More informationOutline schemes of work A-level Mathematics 6360
Outline schemes of work A-level Mathematics 6360 Version.0, Autumn 013 Introduction These outline schemes of work are intended to help teachers plan and implement the teaching of the AQA A-level Mathematics
More informationRichard S. Palais Department of Mathematics Brandeis University Waltham, MA The Magic of Iteration
Richard S. Palais Department of Mathematics Brandeis University Waltham, MA 02254-9110 The Magic of Iteration Section 1 The subject of these notes is one of my favorites in all mathematics, and it s not
More informationQuadratics and Other Polynomials
Algebra 2, Quarter 2, Unit 2.1 Quadratics and Other Polynomials Overview Number of instructional days: 15 (1 day = 45 60 minutes) Content to be learned Know and apply the Fundamental Theorem of Algebra
More informationMATH 1A, Complete Lecture Notes. Fedor Duzhin
MATH 1A, Complete Lecture Notes Fedor Duzhin 2007 Contents I Limit 6 1 Sets and Functions 7 1.1 Sets................................. 7 1.2 Functions.............................. 8 1.3 How to define a
More informationCore 1 Module Revision Sheet J MS. 1. Basic Algebra
Core 1 Module Revision Sheet The C1 exam is 1 hour 0 minutes long and is in two sections Section A (6 marks) 8 10 short questions worth no more than 5 marks each Section B (6 marks) questions worth 12
More informationUNDETERMINED COEFFICIENTS SUPERPOSITION APPROACH *
4.4 UNDETERMINED COEFFICIENTS SUPERPOSITION APPROACH 19 Discussion Problems 59. Two roots of a cubic auxiliary equation with real coeffi cients are m 1 1 and m i. What is the corresponding homogeneous
More informationAnnouncements. Topics: Homework: - sections , 6.1 (extreme values) * Read these sections and study solved examples in your textbook!
Announcements Topics: - sections 5.2 5.7, 6.1 (extreme values) * Read these sections and study solved examples in your textbook! Homework: - review lecture notes thoroughly - work on practice problems
More information2tdt 1 y = t2 + C y = which implies C = 1 and the solution is y = 1
Lectures - Week 11 General First Order ODEs & Numerical Methods for IVPs In general, nonlinear problems are much more difficult to solve than linear ones. Unfortunately many phenomena exhibit nonlinear
More information