The Calculus of Variations


Image Reconstruction Review
MA 348, Kurt Bryan

Recall the image restoration problem we considered a couple of weeks ago. An image in 1D can be considered as a function a(t) defined on some interval, say 0 ≤ t ≤ 1. We sample a(t) at points t_i = i/n for i = 1 to n to obtain a discretized image a_i = a(t_i). But a_i is corrupted by noise, so we really measure y_i = a_i + ϵ_i, where ϵ_i is some kind of random noise. We attempted to recover the actual image by minimizing

    f(x) = \frac{1}{2} \sum_{i=1}^{n} (x_i - y_i)^2 + \frac{\alpha}{2} \sum_{i=1}^{n-1} \left( \frac{x_{i+1} - x_i}{h} \right)^2    (1)

where h = 1/n and α is a parameter at our disposal. The minimizer x is the reconstructed image. The idea was that the first sum on the right in equation (1) encourages x_i ≈ y_i, while the second sum penalizes rapid variation in the reconstruction. The scheme had the disadvantage of blurring crisp edges in an image, so we considered the alternate approach of using a penalty term of the form \sum_i |x_{i+1} - x_i| / h.

Consider multiplying the right side of equation (1) by h and then letting n approach infinity. If you think of x_i = x(t_i) and y_i = y(t_i) for some functions x(t) and y(t), then the right side approaches the limit

    \frac{1}{2} \int_0^1 (x(t) - y(t))^2 \, dt + \frac{\alpha}{2} \int_0^1 (x'(t))^2 \, dt,

at least if x(t) and y(t) are nice enough functions. This is the continuous version of the image restoration problem: given a function y(t), find a function x(t) that minimizes the quantity

    J(x) = \frac{1}{2} \int_0^1 (x(t) - y(t))^2 \, dt + \frac{\alpha}{2} \int_0^1 (x'(t))^2 \, dt.    (2)

The Calculus of Variations

The calculus of variations is about finding functions that minimize integrals. This might sound like a useless problem, but it's actually one of the most important areas of classical and modern applied math. As you can see above, the image restoration problem can be cast as a calculus of variations problem. The solutions to most PDEs can also be found by a minimization process, and this leads to finite element methods. The calculus of variations is also the basis of much of optimal control theory, and it is part of the mathematics behind the Lagrangian and Hamiltonian formulations of the laws of physics.

To illustrate the general idea, let's consider a simple model problem. We want to find a function which minimizes the integral

    Q(\phi) = \int_0^1 (\phi'(x))^2 \, dx    (3)
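To see the discrete problem in action, here is a minimal numerical sketch of minimizing the objective in equation (1). The step-function image, the noise level, and the value of α below are illustrative choices, not anything prescribed in these notes.

```python
# A minimal sketch of minimizing the discrete objective (1):
#   f(x) = 1/2 sum_i (x_i - y_i)^2 + alpha/2 sum_i ((x_{i+1} - x_i)/h)^2
# All numerical values here are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

n = 100
h = 1.0 / n
t = np.arange(1, n + 1) * h
a = np.where(t < 0.5, 1.0, 2.0)            # "true" image: a crisp step
y = a + 0.05 * np.random.randn(n)          # noisy measurements y_i = a_i + eps_i
alpha = 1e-3

def f(x):
    fidelity = 0.5 * np.sum((x - y) ** 2)                           # keeps x_i near y_i
    smoothness = 0.5 * alpha * np.sum(((x[1:] - x[:-1]) / h) ** 2)  # penalizes rapid variation
    return fidelity + smoothness

x_rec = minimize(f, y, method="L-BFGS-B").x   # reconstructed image
```

Note that the same α that suppresses the noise also smears out the jump at t = 0.5, which is exactly the edge-blurring issue mentioned above.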

with the additional restriction that ϕ(0) = 2 and ϕ(1) = 4. How do we go about doing this?

Pretend that you're a junior in high school again; you've studied precalculus mathematics, so you know what functions are, but you've never seen a derivative. Someone asks you to find a number x which minimizes the function f(x) = x^4 - 3x + 7. You know what that means: the minimizer x* will be a number for which the functional value is lower than that for any nearby number. In other words f(x*) ≤ f(x* + ϵ) for any ϵ (if we require f(x*) ≤ f(x* + ϵ) only for sufficiently small ϵ we get a LOCAL minimum). If you expand the above inequality out and do some simple algebra you obtain

    (4(x^*)^3 - 3)\epsilon + 6(x^*)^2\epsilon^2 + 4x^*\epsilon^3 + \epsilon^4 \ge 0    (4)

where the powers of ϵ have been grouped together. Think of x* as fixed and consider the left side of inequality (4) as a function of ϵ. How can the left side be nonnegative for ALL ϵ? If ϵ is sufficiently close to zero then the ϵ^2 and higher powers are negligible compared to ϵ, and so the sign of the left side of inequality (4) will be determined by the coefficient of ϵ. Now if 4(x*)^3 - 3 > 0 then choosing ϵ < 0 and small will violate the inequality (4). If 4(x*)^3 - 3 < 0 then choosing ϵ > 0 and small will again violate the inequality (4). The only way that (4) can hold for all small ϵ is if 4(x*)^3 - 3 = 0, so that the ϵ^2 term dominates. But if 4(x*)^3 - 3 = 0 then we can solve to find x* = \sqrt[3]{3/4} (this is the only real root). This is a necessary condition for x* to be a local minimum (but not sufficient). To see if it is sufficient, you'd need to look at the ϵ^2 term.

The same procedure allows us to find the minimizer of Q(ϕ) defined in equation (3). Let's use f to denote the function which minimizes Q. Just as we perturbed x* above by adding in a small number ϵ, we will perturb the minimizer f by adding in a small function. Let η(x) be any nice differentiable function with η(0) = η(1) = 0. Then if f really is the minimizer of Q (with the right boundary conditions) we must have

    Q(f + \epsilon\eta) \ge Q(f)    (5)

for ANY small number ϵ and ANY function η with η(0) = η(1) = 0. Why must we require that η(0) = η(1) = 0? Because otherwise the function f(x) + ϵη(x) doesn't satisfy the boundary conditions and so wouldn't be a legitimate contender for the minimizer. Expand out both sides of (5) and do a bit of algebra to find that

    2\epsilon \int_0^1 f'(x)\eta'(x)\,dx + \epsilon^2 \int_0^1 (\eta'(x))^2\,dx \ge 0.    (6)

As before, the only way inequality (6) can hold for all ϵ is if the coefficient of ϵ is zero. We conclude that

    \int_0^1 f'(x)\eta'(x)\,dx = 0.    (7)
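Before pushing on with equation (7), here is a quick sympy check (a sketch, not part of the original notes) of the finite-dimensional argument above: at x* = \sqrt[3]{3/4} the coefficient of ϵ in f(x* + ϵ) - f(x*) vanishes and the ϵ^2 coefficient is positive, so x* really is a local minimum.

```python
# Verify the perturbation argument for f(x) = x^4 - 3x + 7 with sympy.
import sympy as sp

x, eps = sp.symbols("x epsilon", real=True)
f = x**4 - 3*x + 7
x_star = sp.Rational(3, 4) ** sp.Rational(1, 3)      # the claimed critical point

diff = sp.expand(f.subs(x, x_star + eps) - f.subs(x, x_star))
print(sp.simplify(diff.coeff(eps, 1)))   # 0: the coefficient 4 x*^3 - 3 vanishes
print(sp.simplify(diff.coeff(eps, 2)))   # 6 x*^2 > 0, so the eps^2 term dominates
```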

Now integrate equation (7) by parts with u = f'(x) and dv = η'(x) dx. With the boundary conditions on η we obtain

    \int_0^1 f''(x)\eta(x)\,dx = 0.    (8)

Now remember, η was ANY differentiable function with η(0) = η(1) = 0. How can equation (8) be true for any such η? You should be able to convince yourself that equation (8) can hold for all η ONLY IF f''(x) = 0 on the interval (0, 1). But this means that f(x) is of the form f(x) = c_1 x + c_2, and if it's to satisfy f(0) = 2 and f(1) = 4 then f must be f(x) = 2x + 2. This function is the minimizer of Q.

Let's look at a few more examples.

Example 2: Consider minimizing

    Q(\phi) = \int_0^1 \left( (\phi'(x))^2 + \phi^2(x) \right) dx

with the condition that ϕ(0) = 1, but with no other restrictions on ϕ. The minimizer f will satisfy Q(f) ≤ Q(f + ϵη) for all C^1 functions η with η(0) = 0 and all real numbers ϵ. Note here that because we have no restriction at the endpoint x = 1, there is no restriction on η(1). If you expand out the above inequality you find that

    2\epsilon \int_0^1 \left( f(x)\eta(x) + f'(x)\eta'(x) \right) dx + O(\epsilon^2) \ge 0.

Reasoning as before, this can hold only if

    \int_0^1 \left( f(x)\eta(x) + f'(x)\eta'(x) \right) dx = 0.

Integrate the second term by parts and use the fact that η(0) = 0 to find that

    \int_0^1 \left( f(x) - f''(x) \right) \eta(x)\,dx + f'(1)\eta(1) = 0

for all functions η with η(0) = 0. Now reason as before: if f(x) - f''(x) ≠ 0 for some x in (0, 1) then we can certainly find a function η (and can arrange η(1) = 0, too) so that the integral above is not zero. We conclude that f - f'' is identically zero on the interval (0, 1). If this is true then the above equation becomes f'(1)η(1) = 0, and since we can arrange for η(1) to be anything we like, we must conclude that f'(1) = 0. So in the end the minimizer f must satisfy

    f(x) - f''(x) = 0

with f(0) = 1 and f'(1) = 0. The general solution to f - f'' = 0 is f(x) = c_1 e^x + c_2 e^{-x}. To arrange the boundary conditions we solve two equations in two unknowns,

    c_1 + c_2 = 1, \qquad c_1 e - c_2/e = 0,

to find c_1 = 1/(e^2 + 1), c_2 = e^2/(e^2 + 1). The minimizer is

    f(x) = \frac{1}{e^2 + 1}\left( e^x + e^{2 - x} \right).

Example 3: Let's prove that the shortest path between two points is a straight line. Let the points be (x_0, y_0) and (x_1, y_1). Let's also make the assumption that the path can be described as the graph of a function y = f(x). Then what we seek to minimize is

    Q(\phi) = \int_{x_0}^{x_1} \sqrt{1 + (\phi'(x))^2}\,dx

subject to the restrictions that ϕ(x_0) = y_0 and ϕ(x_1) = y_1. As before, we know that the shortest path f will satisfy Q(f) ≤ Q(f + ϵη), where η(x_0) = 0, η(x_1) = 0, and ϵ is any number. Written out in detail this is just

    \int_{x_0}^{x_1} \left( \sqrt{1 + (f'(x))^2 + 2\epsilon f'(x)\eta'(x) + \epsilon^2 (\eta'(x))^2} - \sqrt{1 + (f'(x))^2} \right) dx \ge 0.    (9)

Looks like a mess. We can simplify by using the fact that if ϵ is a small number then

    \sqrt{a + \epsilon b} = \sqrt{a} + \frac{b}{2\sqrt{a}}\,\epsilon + O(\epsilon^2).

This comes from the tangent line or Taylor series approximation. Apply this to the integrand in inequality (9) with a = 1 + (f'(x))^2 and b = 2f'η' + ϵ(η')^2. You find that the first messy square root is

    \sqrt{1 + (f'(x))^2 + 2\epsilon f'(x)\eta'(x) + \epsilon^2 (\eta'(x))^2} = \sqrt{1 + (f'(x))^2} + \epsilon\,\frac{f'(x)\eta'(x)}{\sqrt{1 + (f'(x))^2}} + O(\epsilon^2).

Put this into the integral in (9), do the obvious cancellations, and note that as before we need the ϵ term to be zero if the inequality is to hold up. This gives

    \int_{x_0}^{x_1} \frac{f'(x)\eta'(x)}{\sqrt{1 + (f'(x))^2}}\,dx = 0

for all functions η with η(x_0) = η(x_1) = 0. Integrate by parts in x to get the derivative terms off of η. It's messy, but in the end you get (using the endpoint conditions on η)

    \int_{x_0}^{x_1} \frac{f''(x)}{\left( 1 + (f'(x))^2 \right)^{3/2}}\,\eta(x)\,dx = 0.

As before, for this to be true we need

    \frac{f''(x)}{\left( 1 + (f'(x))^2 \right)^{3/2}} = 0.

Now, the denominator is always positive, so for this to be true we need f''(x) = 0, i.e., f(x) = mx + b, a line. Of course, m
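If you want to see Example 2 come out of a computation rather than a derivation, here is a rough numerical sketch: discretize Q(ϕ), impose only ϕ(0) = 1, minimize over the grid values, and compare with the claimed minimizer f(x) = (e^x + e^{2-x})/(e^2 + 1). The grid size and optimizer are arbitrary choices, and the natural boundary condition f'(1) = 0 is not imposed anywhere; it should emerge from the minimization.

```python
# Discretized check of Example 2 (illustrative, not from the notes).
import numpy as np
from scipy.optimize import minimize

n = 50
x = np.linspace(0.0, 1.0, n + 1)
h = x[1] - x[0]

def Q(interior):
    phi = np.concatenate(([1.0], interior))   # only phi(0) = 1 is enforced
    dphi = np.diff(phi) / h
    int_dphi2 = h * np.sum(dphi ** 2)         # midpoint-style rule for (phi')^2
    int_phi2 = h * (0.5 * phi[0] ** 2 + np.sum(phi[1:-1] ** 2) + 0.5 * phi[-1] ** 2)
    return int_dphi2 + int_phi2

phi_num = np.concatenate(([1.0], minimize(Q, np.ones(n), method="L-BFGS-B").x))
f_exact = (np.exp(x) + np.exp(2.0 - x)) / (np.exp(2.0) + 1.0)
print(np.max(np.abs(phi_num - f_exact)))      # should be small (discretization error)
```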

and b are adjusted to make the line go through the two points (x_0, y_0) and (x_1, y_1).

General Principles: The previous examples lead us to some general principles for solving calculus of variations problems. I don't want to give a very specific recipe, just some general guidelines. To minimize Q(ϕ):

1. Start with the inequality Q(f) ≤ Q(f + ϵη), where f is the minimizer to be found and η is a legitimate test function to put into Q.

2. One way or another, manipulate the inequality above into something of the form ϵV_1 + O(ϵ^2) ≥ 0, where V_1 is typically an integral involving f and η. The quantity V_1 is called the first variation of Q, and it plays a role similar to that of a first derivative. For the above inequality to hold for all ϵ we need V_1 = 0.

3. Manipulate the condition V_1 = 0, often using integration by parts, to determine f. This usually leads to an equation like ∫ L(f) η dx = 0, where L(f) involves f and its derivatives. The only way this integral can be zero for every admissible η is if L(f) = 0, a differential equation (ordinary or partial) satisfied by f. Find f by solving the DE.

One thing to note: in each case above we found a critical point, that is, a function which satisfies a necessary condition for a minimum, but that's not the same as sufficient. Just as in Calculus I, when you find a critical point you have to do some additional analysis to see if the tentative critical point is really a minimum. There are such tests in the calculus of variations (something like a second derivative test), but I won't go into them here.

Problem

The image restoration problem is often cast as finding a function x(t) which minimizes

    J(x) = \frac{1}{2} \int_0^1 (x(t) - y(t))^2 \, dt + \frac{\alpha}{2} \int_0^1 \phi(x'(t)) \, dt,

where the user chooses ϕ. One might take ϕ(u) = u^2, leading to the edge-blurring scheme above. Or perhaps ϕ(u) = |u|, which preserves edges but is not differentiable (so we have a slight problem). We can also try ϕ(u) = \sqrt{u^2 + \delta} for some small δ > 0, to get a differentiable ϕ that approximates the absolute value.

Work out the second order DE that the minimizer for J(x) above must satisfy; the DE involves ϕ and its derivatives. Write out the specific DE obtained when ϕ(u) = u^2 and when ϕ(u) = \sqrt{u^2 + \delta}. You can assume boundary conditions x'(0) = x'(1) = 0.

Then construct a piecewise constant function y(t), solve the DE for each ϕ (numerically, in Maple probably), and compare the reconstruction to the original y.
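The problem asks for Maple; purely as an illustration of what the comparison might look like, here is a discretized stand-in in Python that minimizes J(x) directly (rather than solving the DE) for ϕ(u) = u^2 and ϕ(u) = \sqrt{u^2 + \delta}, with a piecewise constant y(t). The grid size, α, and δ are ad hoc choices.

```python
# Discretized stand-in for the problem's numerical experiment (illustrative only).
import numpy as np
from scipy.optimize import minimize

n = 100
t = np.linspace(0.0, 1.0, n)
h = t[1] - t[0]
y = np.where(t < 0.5, 0.0, 1.0)          # piecewise constant "image" with one jump
alpha, delta = 1e-3, 1e-4

def J(x, phi):
    # discrete version of J(x) = 1/2 int (x - y)^2 dt + alpha/2 int phi(x') dt
    dxdt = np.diff(x) / h
    return 0.5 * h * np.sum((x - y) ** 2) + 0.5 * alpha * h * np.sum(phi(dxdt))

phi_quad = lambda u: u ** 2                   # the edge-blurring choice
phi_abs = lambda u: np.sqrt(u ** 2 + delta)   # smooth approximation of |u|

x_quad = minimize(J, y, args=(phi_quad,), method="L-BFGS-B").x
x_abs = minimize(J, y, args=(phi_abs,), method="L-BFGS-B").x
# Plot or print x_quad and x_abs near t = 0.5: the quadratic penalty smears the
# jump, while the smoothed absolute value keeps it much sharper.
```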