CES739 Introduction to Iterative Methods in Computational Science, a.k.a. Newton's method


Y. Zinchenko

March 27, 2008

Abstract: These notes contain the abridged material of CES739. Proceed with caution.

CES739 Introduction to Iterative Methods in Computational Science

Instructor: Yuriy Zinchenko, ITB221

Course content: Newton's method is a fundamental iterative tool in many scientific and engineering applications, and is often the method of choice for solving $F(x) = 0$, where $F(x): \mathbb{R}^n \to \mathbb{R}^n$. In particular, Newton's method is of paramount importance in many optimization problems. This course is dedicated to the theoretical analysis of Newton's method, with focus on its computational complexity. We will present three distinct perspectives on this iterative procedure: the classical Kantorovich analysis, the self-concordant setting, and Smale's analysis based on the asymptotic behavior of the function's higher-order derivatives. The course will culminate with a constructive proof of Bezout's theorem.

Meeting times: Tentatively, the class is scheduled to meet on Thursdays, 10:00-13:00, starting February 28, at ITB AB105.

Grading policy: The final grade will be decided based on several assignments distributed throughout the course.

Course layout: Tentative course material distribution is as follows:
- introduction, Newton's method in one dimension,
- function's behavior in a neighborhood of a critical point and Kantorovich analysis,
- introducing more geometry: the self-concordant setting,
- function's behavior at a single point and Smale's analysis,
- ensuring global convergence, Bezout's theorem.

1 Lecture one

Goal: understand Newton's method, namely, its computational complexity, or in rough terms: when does it work and how many flops does it take.

Newton's method is the method of choice for solving
$$f(x) = 0 \quad (1)$$
where $f: \mathbb{R}^n \to \mathbb{R}^n$ or $\mathbb{C}^n \to \mathbb{C}^n$, $f$ sufficiently smooth.

Examples: Consider $f: \mathbb{R} \to \mathbb{R}$; in the simplest case we have
- $f(x) = ax + b$, $a \neq 0$; then $f(\bar{x}) = 0$ iff $\bar{x} = -\frac{b}{a}$,
- $f(x) = ax^2 + bx + c$: we have an explicit formula for $\bar{x}$ s.t. $f(\bar{x}) = 0$,
- $f(x) = a_m x^m + a_{m-1} x^{m-1} + \cdots + a_0$: no formulas exist for $m > 4$ (more precisely, such equations cannot be solved algebraically in terms of a finite number of additions, subtractions, multiplications, divisions, and root extractions).

How do we go about solving $f(x) = 0$ in this case? For continuous $f$, one way to find a root of $f(x) = 0$ is to use a bisection algorithm: suppose we have $\alpha < \beta$ so that $f(\alpha) < 0$, $f(\beta) > 0$; by continuity of $f$ its root satisfies $\bar{x} \in [\alpha, \beta]$. Consider $x = \frac{\alpha + \beta}{2}$ and set $\alpha = x$ if $f(x) < 0$, and $\beta = x$ if $f(x) > 0$ (if neither is the case, $x$ is the root). Note that $x$ approximates the root $\bar{x}$, we still have $\bar{x} \in [\alpha, \beta]$, and after $k$ iterations $\beta - \alpha$ is reduced by a factor of $\frac{1}{2^k}$. This is not too bad ($2^k$ grows quite fast), but can we do better, namely, can we have a factor of $\frac{1}{2^{2^k}}$ (note that the polynomials above are $C^\infty$)?

Figure 1: Bisection for root finding.
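The bisection loop above admits a very short implementation; here is a minimal MATLAB sketch, with $f(x) = x^3 - 2$ and the bracket $[0, 2]$ as assumed illustration data (not from the notes):

f = @(x) x^3 - 2;                     % any continuous f with a sign change works
alpha = 0; beta = 2;                  % f(alpha) < 0 < f(beta)
for k = 1:40
  x = (alpha + beta)/2;               % midpoint of the current bracket
  if f(x) < 0
    alpha = x;
  elseif f(x) > 0
    beta = x;
  else
    break                             % x is the root exactly
  end
end
fprintf('root approx %.10f, bracket width %.2e\n', x, beta - alpha)

After $k$ iterations the bracket width is $(\beta - \alpha)/2^k$, matching the $\frac{1}{2^k}$ factor above.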

Recall: Using Taylor's expansion of $f \in C^m$ we can write
$$f(x + \Delta x) = f(x) + f'(x)\Delta x + f''(x)\frac{\Delta x^2}{2!} + \cdots + f^{(m)}(\xi)\frac{\Delta x^m}{m!}$$
where $\xi \in [x, x + \Delta x]$.

Idea: Since $f(x + \Delta x) \approx f(x) + f'(x)\Delta x$ (and even more so as $\Delta x \to 0$), and we are very comfortable solving linear equations, replace $f(x + \Delta x)$ with its linearization, solve $f(x) + f'(x)\Delta x = 0$ for $\Delta x$, set $x := x + \Delta x$ and repeat the whole procedure again, until $x$ is close enough to the root of $f(x) = 0$ (say, $f(x) \approx 0$, or $x$ remains almost the same for a few consecutive iterations). The above procedure is called Newton's method. To set the notation, let
$$\mathcal{N}(x) = x - (f'(x))^{-1} f(x) \quad (2)$$
be called Newton's map. Then Newton's method consists of iteratively applying $\mathcal{N}$ starting at some point $x_0$ to generate a sequence
$$x_0,\ x_1 = \mathcal{N}(x_0),\ x_2 = \mathcal{N}(x_1) = \mathcal{N}^2(x_0),\ \ldots,\ x_k = \mathcal{N}^k(x_0),\ \ldots,$$
and we say Newton's method converges when the sequence above does.
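For comparison with the bisection sketch, here is the same kind of minimal loop for Newton's map in one dimension; the test function $f(x) = \cos(x) - x$ and the fixed iteration count are assumptions made for illustration:

f = @(x) cos(x) - x;  fp = @(x) -sin(x) - 1;
x = 1;                                % starting point x0
for k = 1:20                          % a handful of iterations suffices here
  x = x - f(x)/fp(x);                 % apply Newton's map N(x)
end
fprintf('root approx %.12f\n', x)     % approx 0.739085133215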

1.1 Complex dynamics of (simple) Newton's method

Consider $f(z) = z^3 - 1$ over $\mathbb{C}$; $f(z) = 0$ has three roots $z_{1,2,3} = 1,\ \frac{-1 \pm i\sqrt{3}}{2}$. We want to understand how Newton's method behaves depending on the starting point $z_0 \in \mathbb{C}$. For our particular choice of $f(z) = z^3 - 1$,
$$\mathcal{N}(z) = \frac{2z^3 + 1}{3z^2}.$$

Figure 2: Roots of f(z) = 0 on C.

First hypothesis: $f(z)$ is analytic (and so is $C^\infty$), thus $\delta > 0$ may be chosen small enough to make the linearization of $f(z + \Delta z)$, namely $f(z) + f'(z)\Delta z$, arbitrarily close to $f(z + \Delta z)$ in a small enough neighborhood of a root $z_i$, say $|z - z_i| < \delta$, for all $z$, $z + \Delta z$ in this neighborhood. In other words, the neighborhood of $z_i$ may be chosen small enough so that the linearization of $f(z)$ is an extremely good surrogate for $f(z)$ within this neighborhood. Note that $f'(z_i) \neq 0$, so $\mathcal{N}(z)$ is well defined for all $z: |z - z_i| < \delta$ for $\delta$ small enough. Then it is natural to assume that Newton's method converges to the root $z_i$ in such a neighborhood (in fact, we will prove this in a bit). Such a neighborhood will be referred to as a basin of attraction of $z_i$.

A hypothetical question: If $\mathbb{C}$ is partitioned into only three such basins (one for each root), how would the picture look? A natural simple guess about these basins would be, on the first pass, to assume the basins are connected components of $\mathbb{C}$. Then noting the rotational symmetry of the roots of $f(z)$ (i.e., we can turn the picture clockwise by $\frac{2\pi}{3}$ without changing it), it is natural to guess that the partitioning into the basins would look as follows:

Figure 3: Guessing the partitioning of C: blue z_1, green z_2, yellow z_3.

In fact, this is not a bad guess and is almost correct (if you look at the picture from far enough). So, what could we have missed? Observe that $\mathcal{N}(0)$ is not defined. Are there any more pathological points? Consider the pre-image of $0$ under $\mathcal{N}$, $\mathcal{N}^{-1}(0)$, giving rise to three points $\left\{ -\frac{1}{\sqrt[3]{2}},\ \frac{1}{\sqrt[3]{2}} \cdot \frac{1 \pm i\sqrt{3}}{2} \right\}$, so $\mathcal{N}(\cdot)$ is not defined at either of these three points (after one iteration Newton's method stalls). By induction, it is conceivable to assume that the pre-image of $0$ under $\mathcal{N}^k$ has $3^k$ distinct points, etc.
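The three problematic pre-images are easy to verify numerically; the check below is a quick sketch, not part of the notes:

% N(z) = (2z^3 + 1)/(3z^2) vanishes iff 2z^3 + 1 = 0, so N^{-1}(0) consists of
% the three cube roots of -1/2, where the very next Newton step divides by zero.
disp(roots([2 0 0 1]))                % approx -0.7937 and 0.3969 +/- 0.6874i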

The moral of the story is that $\mathbb{C}$ is populated by many such pathological points (assuming we would run Newton's method until we either find a root or it breaks down), and the true dynamics of Newton's method is far more complicated than it first seems. The true basins look as follows (image from the hiddendimension.com website), corresponding to a fractal: in rough terms, a self-similar figure, one which resembles itself when magnified.

Figure 4: Partitioning of C into three basins of attraction.

We can also try to visualize the speed of convergence of Newton's method for finding the roots of $f(z) = 0$. One well-accepted procedure for doing so goes as follows. We pick a starting point $z_0 \in \mathbb{C}$ and assign its color based on how well Newton's method progressed after $K \geq 1$ iterations: starting with the initial color $s_0 = 0$, we update its value after every iteration of $\mathcal{N}$ as
$$s_{k+1} = s_k + \exp\left( -\frac{1}{|z_{k+1} - z_k|} \right),$$
thus measuring the progress made by Newton's iterates. Note that if the iterates are close together, meaning that the method has probably started to converge, every new color increment stays small due to the negative exponent. After the method makes $K$ iterations, we place the point $z_0$ on the graph, coloring it with the color $s_K$. The procedure can be used to color the part of the complex plane we are interested in.

Figure 5: Speed of convergence of Newton's method as a fractal.

So, why Newton's method? If nothing else comes to mind, look at the pretty pictures. The two MATLAB code fragments below illustrate Newton's method for $f(z) = z^3 - 1$, compute an approximate basin of attraction for the root $z_1 = 1$, and visualize the speed of convergence of $\mathcal{N}$:

fragment 1

clear all
itermax=20;
z=2*(cos(pi/3)+sin(pi/3)*i);          % starting point z0 on the circle of radius 2
figure, hold on, grid on, xlim([-2,2]); ylim([-2,2]);
w=[0:.01*pi:2*pi];
plot(cos(w),sin(w),'c--');            % the unit circle through the three roots
plot(cos(2*pi/3),sin(2*pi/3),'m*');
plot(cos(-2*pi/3),sin(-2*pi/3),'m*');
plot(1,0,'m*');
pause
plot(real(z),imag(z),'r*'); pause; plot(real(z),imag(z),'b*');
for iter=1:itermax
  z = (2*z.^3 + 1)./(3*z.^2);         % one step of Newton's map N(z)
  plot(real(z),imag(z),'r*'); pause
  plot(real(z),imag(z),'b*');
end
hold off

fragment 2

clear all
dx=.00001; xl=dx; xr=.005;
itermax=100; eps=.01;
[zr,zi] = meshgrid(xl:dx:xr,xl:dx:xr);
z=complex(zr,zi);                     % grid of starting points z0
zcol=zeros(size(abs(z)));             % accumulated colors s_k
for iter=1:itermax
  zn = (2*z.^3 + 1)./(3*z.^2);        % Newton's map, vectorized over the grid
  zcol = zcol+exp(-abs(1./(zn-z)));   % color increment exp(-1/|z_{k+1}-z_k|)
  z=zn;

end
figure; mesh(zr,zi,zcol); view(2)     % the speed-of-convergence picture
indz=find((abs(z-1)<eps));            % points that ended up near the root z=1
zr=zr(indz); zi=zi(indz);
figure; plot(zr(:),zi(:),'b.')        % approximate basin of attraction of z=1

1.2 A few applications and basic local analysis

Newton's method application examples:
- pictures as above,
- optimization: consider finding $\min_x f(x)$ over $\mathbb{R}^n$, where we assume $f \in C^3$ and convex; a necessary and sufficient condition for $\bar{x}$ to be a minimizer of $f$ is $f'(\bar{x}) = 0$, so we can apply Newton's method to search for a root of $f'(x) = 0$,
- path-following, e.g., homotopy methods for finding roots of a polynomial: consider finding all roots $x_i$, $i = 1, \ldots, m$ of a univariate polynomial $p(x)$ of degree $m$; note that we can write $p(x) = a_m \prod_i (x - x_i)$; start with some other polynomial of degree $m$ with all roots known, e.g., $q(x) = \prod_i (x - i)$, and consider $h(\alpha; x) = \alpha p(x) + (1 - \alpha) q(x)$, $\alpha \in [0, 1]$; for a fixed $\alpha$, $h(\alpha; x)$ is a polynomial of degree $m$ and thus has $m$ roots $x_i(\alpha)$, and $h(0; x) = q(x)$, $h(1; x) = p(x)$; assuming that the roots of a polynomial are continuous functions of its coefficients, we may attempt to follow each of the curves $x_i(\alpha)$ as $\alpha$ goes from 0 to 1 with Newton's method to ultimately compute the (approximate) roots $x_i$, $i = 1, \ldots, m$ (consider incrementing $\alpha$ by a small perturbation $\epsilon > 0$; if $\epsilon$ is small enough, the roots of $h(\alpha; x) = 0$ must give a good approximation to the roots of $h(\alpha + \epsilon; x) = 0$, and thus each $x_i(\alpha)$ would provide a good starting point for Newton's method to find an arbitrarily close approximation to $x_i(\alpha + \epsilon)$; now keep on incrementing $\alpha$ till you reach 1). A toy sketch of this scheme follows the list.
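Here is a toy MATLAB sketch of the homotopy scheme, with the assumed target $p(x) = x^3 - 2$ and the assumed start $q(x) = x^3 - 1$ in place of the suggested $q(x) = \prod_i (x - i)$ (chosen so the root curves stay well separated):

x = exp(2i*pi*(0:2).'/3);             % the known roots of q(x) = x^3 - 1
for alpha = 0:.01:1
  for newton = 1:5                    % a few Newton steps per root, warm-started
    x = x - (x.^3 - (1 + alpha))./(3*x.^2);   % here h(alpha; z) = z^3 - (1 + alpha)
  end
end
disp(x)                               % approximates the three roots of p(x) = x^3 - 2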

Now it is time for our first take at a local convergence analysis of the method.

Theorem 1.1. Let $f: \mathbb{R} \to \mathbb{R}$, $f \in C^2$, and let $\bar{x}$ be such that $f(\bar{x}) = 0$, $f'(\bar{x}) \neq 0$. Then there exists $\delta > 0$ such that for all $x: |x - \bar{x}| < \delta$ we have
$$|\bar{x} - \mathcal{N}(x)| \leq K(\delta)\, |\bar{x} - x|^2$$
for some $K(\delta) > 0$, and
$$\mathcal{N}^k(x) \in [\bar{x} - \delta, \bar{x} + \delta], \quad \forall k > 0.$$

Proof. Note that since $f'(\bar{x}) \neq 0$ and $f \in C^2$, there exists $\delta > 0$ such that $f'(x) \neq 0$ for $|x - \bar{x}| < \delta$, and for some $K(\delta) > 0$ we have $\left| \frac{f''(y)}{2 f'(x)} \right| < K(\delta)$ for all $x, y: |x - \bar{x}|, |y - \bar{x}| < \delta$, with $2\delta K(\delta) < 1$. With such $\delta$ in mind, at least $\mathcal{N}(x)$ is well-defined for any $x: |x - \bar{x}| < \delta$. For compactness we omit the arguments $x$ and $\bar{x}$ in subsequent notation when evaluating $f, f', f''$, simply writing $f$ for $f(x)$, $\bar{f}$ for $f(\bar{x})$, etc. Consider
$$\bar{x} - \mathcal{N}(x) = \bar{x} - (x - f'^{-1} f) = (\bar{x} - x) - f'^{-1}(\bar{f} - f) = f'^{-1}\left( f'(\bar{x} - x) - (\bar{f} - f) \right),$$
and denoting $f_t: t \mapsto f(x + t(\bar{x} - x))$, with $f'_t = f'(x + t(\bar{x} - x))(\bar{x} - x)$ and $f''_t = f''(x + t(\bar{x} - x))(\bar{x} - x)^2$, we further write
$$\left| f'^{-1}\left( f'(\bar{x} - x) - \int_0^1 f'_t(\xi)\, d\xi \right) \right| = \left| f'^{-1} \int_0^1 \left( f'_t(0) - f'_t(\xi) \right) d\xi \right| \leq \frac{\max_{\chi \in [x, \bar{x}]} |f''(\chi)|}{2 |f'|} (\bar{x} - x)^2 \leq K(\delta)(\bar{x} - x)^2.$$
Finally, note that with the choice of $\delta$ as above, $\mathcal{N}(x) \in [\bar{x} - \delta, \bar{x} + \delta]$.

Remark 1.1. Asymptotically we have $K(\delta) \to \left| \frac{f''(\bar{x})}{2 f'(\bar{x})} \right|$ as $\delta \to 0$.

Remark 1.2. The theorem implies local quadratic convergence of Newton's method in a small neighborhood of the root; in rough terms, the number of correct digits in the approximation of $\bar{x}$ doubles every iteration.

Remark 1.3. The assumption of $f \in C^2$ may be relaxed to $f'$ being Lipschitz continuous.
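A quick numerical illustration of Remark 1.2, with $f(x) = x^2 - 2$ and $x_0 = 1.5$ as assumed data: the error is roughly squared at every step, so the number of correct digits roughly doubles.

f = @(x) x^2 - 2;  fp = @(x) 2*x;
x = 1.5;                              % xbar = sqrt(2), initial error about 0.086
for k = 1:5
  x = x - f(x)/fp(x);                 % Newton's map N(x)
  fprintf('k=%d  error = %.3e\n', k, abs(x - sqrt(2)))
end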

1.3 Assignments

Due next week in class.
- Scaling-invariance of Newton's method: consider $f: \mathbb{R}^n \to \mathbb{R}^n$, $f \in C^2$ with non-zero derivative, and $A, B$ two invertible $n \times n$ matrices. Show that the Newton iterates for the function $f$ starting at a point $x_0$ are exactly the same as the Newton iterates for the function $Af(Bx)$ starting at the point $B^{-1} x_0$.
- Let $f: \mathbb{R} \to \mathbb{R}$ be $C^m$. We will consider the behavior of Newton's method around a root $\bar{x}$ of $f$ when $f'(\bar{x}) = 0$. $\bar{x}$ is called a root of multiplicity $m$ if $f^{(k)}(\bar{x}) = 0$, $k = 0, \ldots, m-1$, and $f^{(m)}(\bar{x}) \neq 0$. Show that if $f$ has a root $\bar{x}$ of multiplicity $m$, we can write $f(x) = (x - \bar{x})^m h(x)$ where $h$ is continuous and $h(\bar{x}) \neq 0$. Next, show that in a small enough neighborhood of $\bar{x}$ Newton's method will converge and asymptotically $\frac{|\bar{x} - x_{k+1}|}{|\bar{x} - x_k|} \to \frac{m-1}{m}$ (hint: follow the convergence proof from the class).

1.4 Additional reading and references

- Fractals and Newton's method
- Newton's basics
- nice overview paper by B.T. Polyak, Newton-Kantorovich method and its global convergence (google for it)

2 Lecture two

2.1 Kantorovich's analysis of Newton's method I

Recall: real vector space, metric and norm, Cauchy sequence and complete metric space, the $k$-th derivative of $f: \mathbb{R}^n \to \mathbb{R}^n$ as a $k$-linear function, the induced norm of $f^{(k)}$ as
$$\|f^{(k)}\| = \sup_{\|x_i\| = 1,\ i = 1, \ldots, k} \|f^{(k)}(x_1, x_2, \ldots, x_k)\|;$$
an important consequence is that
$$\|f^{(k)}(x_1, x_2, \ldots, x_k)\| \leq \|f^{(k)}\| \prod_i \|x_i\|.$$

Aside: If you are uncomfortable with $f: \mathbb{R}^n \to \mathbb{R}^n$ and its (multivariate) derivatives, for now think of all the functions as univariate real functions (simply replace norms with absolute values); you can digest the proofs in $\mathbb{R}^n$ on your own at a later point.

A few general comments: Does the method always converge? No, even if $f: \mathbb{R} \to \mathbb{R}$, $f \in C^2$, $f' \neq 0$ and is subject to further nice restrictions as below:
- $f$ is monotone, e.g., $f(x) = \arctan(x)$: the method diverges far from $\bar{x} = 0$ (in red, Figure 6); moreover, the method may exhibit cycling behavior, e.g., for $f(x) = x^3 - x + 3$ starting at $x_0 = 0$ (in blue, Figure 7),
- $f$ is convex, but there is no guaranteed $\bar{x}$ s.t. $f(\bar{x}) = 0$,
- what about convex monotone $f$? Think about it.

Figure 6: Divergence of N^k(x_0) for f(x) = arctan(x).
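The arctan divergence in the first bullet above is easy to reproduce; the starting point below is an assumption (any $|x_0|$ larger than roughly 1.39 behaves the same way):

f = @(x) atan(x);  fp = @(x) 1/(1 + x^2);
x = 1.5;                              % far enough from xbar = 0
for k = 1:6
  x = x - f(x)/fp(x);                 % the linearization overshoots, |x| grows
  fprintf('k=%d  x = %.3e\n', k, x)
end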

Figure 7: Cycling of N^k(x_0) for f(x) = x^3 - x + 3.

One of the strongest potential criticisms of Theorem 1.1 is that we need to know $\bar{x}$; that is, we cannot say whether the method has actually started to converge quadratically, or how close we are to $\bar{x}$, without knowing $\bar{x}$ upfront. The first comment above is usually addressed by introducing the so-called regularized Newton's method, while the second comment is addressed, in particular, with Kantorovich's approach; this is what we attempt to understand next. The end result we are going to develop over the next two lectures is:

Theorem 2.1. Let $f$ be $C^2$ on $\{x : \|x - x_0\| < R\}$, $R > r > 0$. Assume there exists $(f'(x_0))^{-1} =: f_0'^{-1}$, and
$$\|f_0'^{-1} f(x_0)\| \leq \eta, \quad \|f_0'^{-1} f''(x)\| \leq K \ \text{ on } \{x : \|x - x_0\| \leq r\}.$$
If
$$h = K\eta < \frac{1}{2}$$
and
$$r \geq \frac{1 - \sqrt{1 - 2h}}{h}\,\eta,$$
then (1) has a root $\bar{x}$ so that $\mathcal{N}^k(x_0) \to \bar{x}$, the Newton iterates satisfy
$$\|\bar{x} - \mathcal{N}^k(x_0)\| \leq \frac{1}{2^k} (2h)^{2^k} \frac{\eta}{h}, \quad k \geq 0,$$
and the root is unique in the ball $\{x : \|x - x_0\| \leq r\}$ if $r < \frac{1 + \sqrt{1 - 2h}}{h}\,\eta$.
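As a sanity check of the constants in Theorem 2.1, consider the assumed example $f(x) = x^2 - 2$ with $x_0 = 1.5$ (for a quadratic $f$ the second derivative is constant, and the Kantorovich bounds turn out to be tight):

f = @(x) x^2 - 2;  fp = @(x) 2*x;
x0 = 1.5;
eta = abs(f(x0)/fp(x0));              % here f0'^{-1} f(x0) = 1/12
K = abs(2/fp(x0));                    % f'' = 2 everywhere, so K = 2/3
h = K*eta;                            % = 1/18 < 1/2: the theorem applies
r0 = (1 - sqrt(1 - 2*h))/h*eta;       % ball radius containing the root
fprintf('r0 = %.7f vs |xbar - x0| = %.7f\n', r0, abs(sqrt(2) - x0))
x = x0;
for k = 1:3                           % the error bound above vs the actual error
  x = x - f(x)/fp(x);
  fprintf('k=%d  bound = %.3e  error = %.3e\n', k, (2*h)^(2^k)*eta/(h*2^k), abs(x - sqrt(2)))
end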

We start building towards Theorem 2.1 by first observing that for $f: \mathbb{R}^n \to \mathbb{R}^n$, $f \in C^2$,
$$x = \mathcal{N}(x) \quad (3)$$
implies $f(x) = 0$. In what follows we first analyze an equation of the form
$$x = S(x) \quad (4)$$
and conditions for the convergence of the sequential approximation procedure generating
$$x_0,\ x_1 = S(x_0),\ x_2 = S(x_1),\ \ldots,\ x_{k+1} = S(x_k),\ \ldots \quad (5)$$
to an approximation of the root of equation (4). Together with (4) we consider a univariate function $\Phi(t)$ and a corresponding equation
$$t = \Phi(t). \quad (6)$$
Let $S: \mathbb{R}^n \to \mathbb{R}^n$ be $C^1$ defined on $\{x : \|x - x_0\| < R\}$ and let $\Phi$ be differentiable on $[t_0, t^*]$ ($t^* = t_0 + r < t_0 + R$). We say (6) majorizes (4), or $\Phi$ majorizes $S$, if
$$\|S(x_0) - x_0\| \leq \Phi(t_0) - t_0, \quad (7)$$
$$\|S'(x)\| \leq \Phi'(t), \quad \text{for } \|x - x_0\| \leq t - t_0 \leq r. \quad (8)$$
The function $\Phi$ will be used to measure the progress of the sequential approximation scheme (5); you can think of $\Phi$ as assigning a potential to an iterate $x_k$ that allows measuring the proximity between $x_k$ and $x_{k-1}$ in terms of $\Phi$. We establish the following

Theorem 2.2. Let $S, \Phi$ be as above with $\Phi$ majorizing $S$. Furthermore, suppose (6) has a solution in $[t_0, t^*]$. Then (5) converges to a root $\bar{x}$ of (4), and $\|\bar{x} - x_0\| \leq \bar{t} - t_0$, where $\bar{t}$ is the smallest root of (6) in $[t_0, t^*]$.
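Before the proof, a minimal numeric illustration of Theorem 2.2, with both $S$ and $\Phi$ chosen as assumptions for the example: $S(x) = \sin(x)/2 + 1$ with $x_0 = 0$ is majorized by $\Phi(t) = t/2 + 1$ with $t_0 = 0$, since $\|S(x_0) - x_0\| = 1 = \Phi(t_0) - t_0$ and $\|S'(x)\| = |\cos(x)|/2 \leq 1/2 = \Phi'(t)$.

S = @(x) sin(x)/2 + 1;  Phi = @(t) t/2 + 1;
x = 0;  t = 0;
for k = 1:20
  x = S(x);  t = Phi(t);              % run (5) and its majorant side by side
end
% t increases to tbar = 2, the smallest root of t = Phi(t), and |x - x0| <= tbar - t0
fprintf('x = %.6f (fixed point of S),  t = %.6f (tends to 2)\n', x, t)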

Proof. We start by considering the sequential approximation scheme $t_{k+1} = \Phi(t_k)$, $k = 0, 1, 2, \ldots$ applied to (6). We will show that the sequence $\{t_k\}$ converges to the smallest root $\bar{t}$ of (6). Later in the proof we will use this convergent sequence to establish the convergence of (5) by exhibiting its majorizing Cauchy sequence. W.l.o.g. $\Phi(t_0) > t_0$, since if $t_0 = \Phi(t_0)$ then (7) implies $x_0$ is a root of (4). Observe that (8) and the Mean Value Theorem imply $\Phi$ is nondecreasing, and thus $t_{k+1} \geq t_k$, $\forall k$. Also, if $\tilde{t}$ is a root of (6), then $t_k < \tilde{t}$: indeed, $t_0 < \tilde{t}$ and
$$\tilde{t} - t_k = \Phi(\tilde{t}) - \Phi(t_{k-1}) = \int_{t_{k-1}}^{\tilde{t}} \Phi'(\tau)\, d\tau \geq 0.$$
So, $t_k \nearrow \bar{t} \leq t^*$ where $\bar{t}$ is the smallest root of (6) in $[t_0, t^*]$, and, in particular, the $t_k$ form a Cauchy sequence.

Figure 8: Sequential approximating sequence for t = Φ(t).

Now, consider (4) together with (5). By (7) we have $\|x_1 - x_0\| \leq t_1 - t_0$ and $x_1, x_0 \in \{x : \|x - x_0\| \leq r\}$. We show that a similar relationship holds for all $k$ by induction. Assume $x_i \in \{x : \|x - x_0\| \leq r\}$ and $\|x_i - x_{i-1}\| \leq t_i - t_{i-1}$, for $i = 1, \ldots, k-1$. Consider
$$\|x_k - x_{k-1}\| = \|S(x_{k-1}) - S(x_{k-2})\| = \left\| \int_{x_{k-2}}^{x_{k-1}} S'(\xi)\, d\xi \right\|$$
(where the integral is along the line segment $[x_{k-2}, x_{k-1}]$)
$$= \left\| \int_0^1 \left( S(x_{k-2} + \tau(x_{k-1} - x_{k-2})) \right)'_\tau d\tau \right\| = \left\| \int_0^1 S'(x_{k-2} + \tau(x_{k-1} - x_{k-2}))(x_{k-1} - x_{k-2})\, d\tau \right\|$$
$$\leq \int_0^1 \Phi'(t_{k-2} + \tau(t_{k-1} - t_{k-2}))(t_{k-1} - t_{k-2})\, d\tau$$
(by (8) and observing
$$\|x_{k-2} + \tau(x_{k-1} - x_{k-2}) - x_0\| \leq \tau\|x_{k-1} - x_{k-2}\| + \|x_{k-2} - x_{k-3}\| + \cdots + \|x_1 - x_0\| \leq \tau(t_{k-1} - t_{k-2}) + t_{k-2} - t_0 \leq r$$
by the inductive hypothesis)
$$= \int_0^1 \left( \Phi(t_{k-2} + \tau(t_{k-1} - t_{k-2})) \right)'_\tau d\tau = \Phi(t_{k-1}) - \Phi(t_{k-2}) = t_k - t_{k-1},$$
so indeed, $\|x_k - x_{k-1}\| \leq t_k - t_{k-1}$. In addition,
$$\|x_k - x_0\| \leq \|x_k - x_{k-1}\| + \|x_{k-1} - x_0\| \leq (t_k - t_{k-1}) + (t_{k-1} - t_0) = t_k - t_0 < r.$$

Finally, we show that the $x_k$ form a Cauchy sequence that converges to a root of (4): consider $k \geq 1$, $n \geq 0$,
$$\|x_{k+n} - x_k\| \leq \|x_{k+n} - x_{k+n-1}\| + \cdots + \|x_{k+1} - x_k\| \leq t_{n+k} - t_k,$$
so $x_k$ converges, say to $\bar{x}$. By continuity of $S$, taking the limit in (5) we get
$$\bar{x} = S(\bar{x}),$$
so indeed, $\bar{x}$ is a root.

A variant of Theorem 2.2 will be applied to (3) to show the convergence of Newton's method with no a priori knowledge about the location of the root of (1).

2.2 Assignments

- Show that for a fixed $k \geq 1$, the $k$-linear functions $f^{(k)}$ form a vector space,
- verify that the induced norm is indeed a norm, i.e., satisfies the norm axioms,
- for a square $n \times n$ matrix $A$, viewed as a linear operator from $\mathbb{R}^n$ to $\mathbb{R}^n$, $x \mapsto Ax$, find the expressions for the induced norms where the norm used on $\mathbb{R}^n$ corresponds to the Euclidean norm $\|x\|_2 = \sqrt{\sum_i x_i^2}$, the $l_1$-norm $\|x\|_1 = \sum_i |x_i|$, and the $l_\infty$-norm $\|x\|_\infty = \max_i |x_i|$,
- show that if $f: \mathbb{R} \to \mathbb{R}$ is convex, monotone decreasing and has a root, then Newton's method converges to this root.

2.3 Additional reading and references

You may find the complete analysis and much more in the book by Kantorovich and Akilov, Functional Analysis.

3 Lecture three

3.1 Kantorovich's analysis of Newton's method II

Corollary 3.1. If the conditions of Theorem 2.2 are satisfied and, in addition, $\bar{t} \in [t_0, t^*]$ is the unique stationary point of (6) with $\Phi(t^*) \leq t^*$, then the solution to (4) is also unique on $\{x : \|x - x_0\| \leq r\}$.

Proof. To be filled in.

Next we apply the result of Theorem 2.2 to the analysis of Newton's method. In particular, we establish two existential-type results for the so-called modified Newton's method
$$\mathcal{N}_0(x) = x - (f'(x_0))^{-1} f(x),$$
and the original Newton's method,
$$\mathcal{N}(x) = x - (f'(x))^{-1} f(x);$$
the results are existential in the sense that we still have to exhibit a majorizing function. We start by analyzing the modified method first, and then use part of the analysis to understand Newton's method itself.

Remark 3.1. One may view the modified Newton's method as more than an artificial ingredient introduced only to simplify the analysis. In some situations the inversion of $f'$ is a very hard task by itself (in fact, as an example, the dominating work in the so-called interior-point methods in convex optimization falls precisely on inverting such a derivative); therefore it might be advantageous to perform such an operation only once, if possible, at the cost of slower convergence, as the main theorem of this section will illustrate.
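To see the trade-off in Remark 3.1 numerically, compare the two methods on the assumed example $f(x) = x^2 - 2$ from $x_0 = 1.5$: the modified method reuses $f'(x_0)$ forever and converges only linearly, while full Newton converges quadratically.

f = @(x) x^2 - 2;
x = 1.5;  y = 1.5;  fp0 = 2*1.5;      % fp0 plays the role of the frozen f'(x0)
for k = 1:6
  x = x - f(x)/(2*x);                 % full Newton: N(x)
  y = y - f(y)/fp0;                   % modified Newton: N_0(y)
  fprintf('k=%d  full = %.2e  modified = %.2e\n', k, abs(x - sqrt(2)), abs(y - sqrt(2)))
end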

Together with equation (1) we consider
$$\Psi(t) = 0 \quad (9)$$
where $\Psi: \mathbb{R} \to \mathbb{R}$ is sufficiently smooth. Namely, as before, we assume $f \in C^2$ on $\{x : \|x - x_0\| \leq r\}$ and defined on $\{x : \|x - x_0\| < R\}$, $R > r$, and let $\Psi$ be $C^2$ for $t \in [t_0, t^*]$, $t^* = t_0 + r$. In a way, the first result simply paraphrases Theorem 2.2 for the modified Newton's method, noting that $f(x) = 0$ is equivalent to $x = \mathcal{N}_0(x)$.

Theorem 3.1. If the following conditions hold true:
$$\Gamma_0 = f'(x_0)^{-1} \text{ exists}, \quad (10)$$
$$c_0 = -\frac{1}{\Psi'(t_0)} > 0, \quad (11)$$
$$\|\Gamma_0 f(x_0)\| \leq c_0 \Psi(t_0), \quad (12)$$
$$\|\Gamma_0 f''(x)\| \leq c_0 \Psi''(t), \quad \text{for } \|x - x_0\| \leq t - t_0 \leq r, \quad (13)$$
$$\text{equation (9) has a root in } [t_0, t^*], \quad (14)$$
then the modified Newton's method for (1) and (9), started at $x_0$ and $t_0$ respectively, converges to the roots $\bar{x}$, $\bar{t}$ of both equations, and $\|\bar{x} - x_0\| \leq \bar{t} - t_0$, where $\bar{t}$ is the smallest root in the interval $[t_0, t^*]$.

Proof. To be filled in.

Analogously, we can obtain a convergence result for Newton's method itself, as the following

Theorem 3.2. Let $f$, $\Psi$ satisfy the conditions of Theorem 3.1. Then the sequence of Newton iterates $\mathcal{N}^k(x_0) \to \bar{x}$, where $\bar{x}$ is the root of (1).

For both Theorems 3.1 and 3.2 we can easily establish a corollary regarding the uniqueness of the solution.

Corollary 3.2. If in addition to satisfying the conditions of Theorem 3.1 we have that $\Psi(t^*) \leq 0$ and $\bar{t} \in [t_0, t^*]$ is the unique root of (9) in $[t_0, t^*]$, then the root $\bar{x}$ is the unique root of (1) in the ball $\{x : \|x - x_0\| \leq r\}$.

The corollary is easily established using Corollary 3.1 and observing that $\Psi(t^*) \leq 0$ is equivalent to $\Phi(t^*) \leq t^*$.

Remark 3.2. The conditions of Corollary 3.2 are trivially met if we let $t^* = \bar{t}$, the smallest root of (9) in $[t_0, t^*]$, thus guaranteeing a unique solution in the ball $\{x : \|x - x_0\| \leq \bar{t} - t_0\}$.

Finally, we are ready to establish the main result.

Theorem 3.3. Let, as before, $f$ be defined on $\{x : \|x - x_0\| < R\}$ and be twice continuously differentiable on $\{x : \|x - x_0\| \leq r\}$, $r < R$. In addition, assume
$$\Gamma_0 = f'(x_0)^{-1} \text{ exists}, \quad (15)$$
$$\|\Gamma_0 f(x_0)\| \leq \eta, \quad (16)$$
$$\|\Gamma_0 f''(x)\| \leq K, \quad \forall x \in \{x : \|x - x_0\| \leq r\}. \quad (17)$$
If
$$h = K\eta \leq \frac{1}{2}$$
and
$$r \geq r_0 = \frac{1 - \sqrt{1 - 2h}}{h}\,\eta,$$
then (1) has a solution $\bar{x}$ so that $\mathcal{N}^k(x_0) \to \bar{x}$ and $\mathcal{N}_0^k(x_0) \to \bar{x}$ as $k \to \infty$, with $\|\bar{x} - x_0\| \leq r_0$. Moreover, if
$$h < 1/2, \quad r < r_1 = \frac{1 + \sqrt{1 - 2h}}{h}\,\eta,$$
or
$$h = 1/2, \quad r \leq r_1,$$
then $\bar{x}$ is the unique solution in the ball of radius $r$ around $x_0$. The speed of convergence of $\mathcal{N}^k(x_0)$ is characterized by
$$\|\bar{x} - \mathcal{N}^k(x_0)\| \leq \frac{1}{2^k} (2h)^{2^k} \frac{\eta}{h}, \quad k = 0, 1, 2, \ldots, \quad (18)$$
and for $\mathcal{N}_0^k(x_0)$ with $h < 1/2$ we can write
$$\|\bar{x} - \mathcal{N}_0^k(x_0)\| \leq \frac{\eta}{h} \left( 1 - \sqrt{1 - 2h} \right)^{k+1}, \quad k = 0, 1, 2, \ldots$$

Proof. On $[0, r]$ consider
$$\Psi(t) = Kt^2 - 2t + 2\eta = Kt^2 - 2t + \frac{2h}{K} = \frac{h}{\eta} t^2 - 2t + 2\eta.$$
Clearly, $\Psi$ and $f$ satisfy the conditions (10)-(13) of Theorem 3.1; also, (14) is met, since the roots of $\Psi(t) = 0$,
$$r_0 = \frac{1 - \sqrt{1 - 2h}}{h}\,\eta \quad \text{and} \quad r_1 = \frac{1 + \sqrt{1 - 2h}}{h}\,\eta,$$
are both real under the assumption $h \leq 1/2$, while $r \geq r_0$ guarantees the existence of a root in $[0, r]$ with $r_0$ being the smallest root, and $r < r_1$ guarantees its uniqueness, implying the uniqueness of $\bar{x}$ in the ball around $x_0$ of radius $r$ (by Corollary 3.2). Since $\bar{t} = r_0$, $\|\bar{x} - x_0\| \leq r_0$ follows immediately.

To establish the estimates bounding the speed of convergence, it is enough to consider only the sequences of corresponding values of $t_k$ that result from applying the original and the modified Newton's method to solving (9). (Note that any pair $(x_k, t_k)$ may be considered as an initial approximation $(\tilde{x}_0, \tilde{t}_0)$ to the roots $(\bar{x}, \bar{t})$ in the sequential approximation scheme; then use Theorem 3.1 or 3.2.)

First consider $\mathcal{N}^k(x_0)$. Denote
$$t_{k+1} = t_k - \frac{\Psi(t_k)}{\Psi'(t_k)}, \quad k = 0, 1, \ldots,$$
$$c_k = -\frac{1}{\Psi'(t_k)}, \quad \eta_k = c_k \Psi(t_k) = t_{k+1} - t_k, \quad K_k = c_k \Psi''(t_k) = 2Kc_k, \quad h_k = K_k \eta_k,$$
and express these quantities in terms of the same quantities with one index less. By Taylor's formula,
$$\eta_k = c_k \Psi(t_k) = c_k \Psi(t_{k-1} + \eta_{k-1}) = c_k \left[ \Psi(t_{k-1}) + \Psi'(t_{k-1})\eta_{k-1} + \frac{1}{2} \Psi''(t_{k-1})\eta_{k-1}^2 \right]$$
$$= c_k \left[ \frac{\eta_{k-1}}{c_{k-1}} - \frac{\eta_{k-1}}{c_{k-1}} + \frac{1}{2} \frac{K_{k-1}}{c_{k-1}} \eta_{k-1}^2 \right] = \frac{c_k}{2c_{k-1}} K_{k-1} \eta_{k-1}^2.$$
Noting
$$\frac{c_{k-1}}{c_k} = \frac{\Psi'(t_k)}{\Psi'(t_{k-1})} = \frac{\Psi'(t_{k-1}) + \Psi''(t_{k-1})\eta_{k-1}}{\Psi'(t_{k-1})} = 1 - K_{k-1}\eta_{k-1} = 1 - h_{k-1},$$
we get
$$\eta_k = \frac{1}{2} K_k \eta_{k-1}^2 = \eta_{k-1} \frac{h_{k-1}}{2(1 - h_{k-1})}.$$

Similarly,
$$K_k = 2Kc_k = \frac{2Kc_{k-1}}{1 - h_{k-1}} = \frac{K_{k-1}}{1 - h_{k-1}}.$$
Now, from the above,
$$h_k = K_k \eta_k = \frac{1}{2} \left( \frac{h_{k-1}}{1 - h_{k-1}} \right)^2.$$
From the last expressions for $\eta_k$ and $h_k$, accounting for $h_k \leq 1/2$, we get
$$\eta_k \leq h_{k-1} \eta_{k-1}, \quad h_k \leq \frac{1}{2} (2h_{k-1})^2,$$
and, consequently,
$$h_k \leq \frac{1}{2} (2h_0)^{2^k} = \frac{1}{2} (2h)^{2^k}$$
together with
$$\eta_k \leq h_{k-1} \eta_{k-1} \leq h_{k-1} h_{k-2} \eta_{k-2} \leq \cdots \leq h_{k-1} \cdots h_0\, \eta \leq \frac{1}{2^k} (2h)^{2^k - 1} \eta.$$
Finally, according to $t_{k+1} - t_k = \eta_k$, we find
$$\bar{t} - t_k = (t_{k+1} - t_k) + (t_{k+2} - t_{k+1}) + \cdots = \eta_k + \eta_{k+1} + \cdots \leq \frac{1}{2^k} (2h)^{2^k - 1} \eta \left( 1 + \frac{1}{2}(2h)^{2^k} + \frac{1}{4}(2h)^{3 \cdot 2^k} + \cdots \right) \leq \frac{1}{2^k} (2h)^{2^k} \frac{\eta}{h},$$
and thus we get the estimate (18). Considering the error estimate for the modified method, assume $h < 1/2$. Using the same $\Phi$ as in the proof of Theorem 3.1, i.e., the majorizing iteration $t_{k+1} = \Phi(t_k)$ with $\Phi(t) = t - \Psi(t)/\Psi'(t_0)$ and $\Phi'(t) = Kt \leq Kr_0 = 1 - \sqrt{1 - 2h}$ on $[0, r_0]$, we can write
$$\bar{t} - t_{k+1} = \Phi(\bar{t}) - \Phi(t_k) \leq \left( 1 - \sqrt{1 - 2h} \right)(\bar{t} - t_k),$$
so that $\bar{t} - t_k \leq (1 - \sqrt{1 - 2h})^k\, r_0 = \frac{\eta}{h} (1 - \sqrt{1 - 2h})^{k+1}$, as claimed.

3.2 Assignments

Check that the conditions of Theorem 3.1 in the proof of Theorem 3.3 are indeed satisfied.

3.3 Additional reading and references

There seems to be an excellent (abridged) approach to Newton's method analysis by Ortega, found at JSTOR (chances are you would need to log in with McMaster library credentials to get access to it).
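As a closing numerical check of the majorant argument above, one can run Newton's method on $\Psi$ itself and compare $\bar{t} - t_k$ with the right-hand side of (18); the constants $\eta = 1/12$, $K = 2/3$ are assumed, borrowed from the earlier $x^2 - 2$ example.

eta = 1/12;  K = 2/3;  h = K*eta;     % h = 1/18 < 1/2
Psi = @(t) K*t^2 - 2*t + 2*eta;  dPsi = @(t) 2*K*t - 2;
tbar = (1 - sqrt(1 - 2*h))/h*eta;     % the smallest root r0 of Psi(t) = 0
t = 0;
for k = 1:4
  t = t - Psi(t)/dPsi(t);             % Newton's method applied to Psi
  fprintf('k=%d  tbar - t = %.3e  bound (18) = %.3e\n', k, tbar - t, (2*h)^(2^k)*eta/(h*2^k))
end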
