Calculus of Variations — Summer Term 2014
Lecture 12, 26. Juni 2014
© Daria Apushkinskaya 2014
Purpose of Lesson

To discuss numerical solution of variational problems, and to introduce Euler's finite difference method and Ritz's method.
9. Numerical Solutions
Numerical Solutions

The Euler-Lagrange equations may be hard to solve, so a natural response is to look for numerical methods:

1. Numerical solution of the Euler-Lagrange equations themselves (we won't consider these here; see other courses)
2. Euler's finite difference method
3. Ritz's (Rayleigh-Ritz) method; in 2D: Kantorovich's method
Numerical Approximation of Integrals

Use an arbitrary set of mesh points

  a = x_0 < x_1 < x_2 < \dots < x_n = b

and approximate the derivative by a difference quotient:

  y'(x_i) \approx \frac{y_{i+1} - y_i}{x_{i+1} - x_i} = \frac{\Delta y_i}{\Delta x_i}.

The rectangle rule then gives

  J[y] = \int_a^b F(x, y, y')\,dx \approx \sum_{i=0}^{n-1} F\!\left(x_i, y_i, \frac{\Delta y_i}{\Delta x_i}\right) \Delta x_i =: \hat{J}[y].

\hat{J}[\cdot] is a function of the vector y = (y_1, y_2, \dots, y_n).
Finite Difference Method (FDM)

Treat this as the optimization of a function of n variables, so that we require

  \frac{\partial \hat{J}}{\partial y_i} = 0 \quad \text{for all } i = 1, 2, \dots, n.

Typically we use a uniform grid, so that \Delta x_i = \Delta x = (b - a)/n.
Simple Example

Example 12.1. Find extremals for

  J[y] = \int_0^1 \left[ \frac{1}{2} y'^2 + \frac{1}{2} y^2 - y \right] dx

with y(0) = 0 and y(1) = 0. The Euler-Lagrange equation is y'' - y = -1.
Example 12.1 (direct solution)

E-L equation: y'' - y = -1.
Solutions of the homogeneous equation y'' - y = 0 have the form e^{\lambda x}, giving the characteristic equation \lambda^2 - 1 = 0, so \lambda = \pm 1.
A particular solution is y = 1.
The general solution is therefore

  y(x) = A e^x + B e^{-x} + 1.
Example 12.1 (direct solution, continued)

The boundary conditions y(0) = y(1) = 0 constrain

  A + B = -1,
  A e + B e^{-1} = -1,

so A = \frac{1 - e}{e^2 - 1} and B = \frac{e - e^2}{e^2 - 1}.
The exact solution of the extremal problem is then

  y(x) = \frac{1 - e}{e^2 - 1}\, e^x + \frac{e - e^2}{e^2 - 1}\, e^{-x} + 1.
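As a quick sanity check, the closed-form extremal can be verified numerically. This is a small sketch (our own, not part of the lecture) confirming the Euler-Lagrange equation and the boundary conditions:

```python
# Verify that the closed-form extremal of Example 12.1 satisfies the
# Euler-Lagrange equation y'' - y = -1 and the conditions y(0) = y(1) = 0.
import math

e = math.e
A = (1 - e) / (e**2 - 1)
B = (e - e**2) / (e**2 - 1)

def y(x):
    return A * math.exp(x) + B * math.exp(-x) + 1

def ypp(x):
    # second derivative: the constant term drops out
    return A * math.exp(x) + B * math.exp(-x)

assert abs(y(0)) < 1e-12 and abs(y(1)) < 1e-12
for x in (0.1, 0.5, 0.9):
    assert abs(ypp(x) - y(x) + 1) < 1e-12   # y'' - y = -1
print("boundary conditions and E-L equation satisfied")
```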
Example 12.1 (Euler's FDM)

Now apply Euler's FDM to the same functional. Take the grid x_i = i/n for i = 0, 1, \dots, n, so that

  \Delta x = 1/n \quad \text{and} \quad \Delta y_i = y_{i+1} - y_i.

The end points give y_0 = 0 and y_n = 0, and

  y'(x_i) \approx \Delta y_i / \Delta x = n (y_{i+1} - y_i),
  \bigl(y'(x_i)\bigr)^2 \approx n^2 \left( y_i^2 - 2 y_i y_{i+1} + y_{i+1}^2 \right).
Example 12.1 (Euler's FDM, continued)

The FDM approximation of J[y] = \int_0^1 [\frac12 y'^2 + \frac12 y^2 - y]\,dx is

  \hat{J}[y] = \sum_{i=0}^{n-1} F(x_i, y_i, \Delta y_i/\Delta x)\,\Delta x
            = \sum_{i=0}^{n-1} \left[ \frac{1}{2} n^2 \left( y_i^2 - 2 y_i y_{i+1} + y_{i+1}^2 \right) \Delta x + \left( y_i^2/2 - y_i \right) \Delta x \right]
            = \sum_{i=0}^{n-1} \left[ \frac{1}{2} n \left( y_i^2 - 2 y_i y_{i+1} + y_{i+1}^2 \right) + \frac{y_i^2/2 - y_i}{n} \right].
Example 12.1 (end conditions)

We know the end conditions y(0) = y(1) = 0, which imply y_0 = y_n = 0. Include them in the objective using Lagrange multipliers:

  H[y] = \sum_{i=0}^{n-1} \left[ \frac{1}{2} n \left( y_i^2 - 2 y_i y_{i+1} + y_{i+1}^2 \right) + \frac{y_i^2/2 - y_i}{n} \right] + \lambda_0 y_0 + \lambda_n y_n.
Example 12.1 (Euler's FDM, continued)

Taking derivatives, note that each y_i appears in at most two terms of the sum:

  \partial H/\partial y_0 = n(y_0 - y_1) + y_0/n - 1/n + \lambda_0,
  \partial H/\partial y_i = n(2 y_i - y_{i+1} - y_{i-1}) + y_i/n - 1/n, \quad i = 1, \dots, n-1,
  \partial H/\partial y_n = n(y_n - y_{n-1}) + \lambda_n.

Setting all derivatives to zero, and adding the constraints y_0 = y_n = 0, gives n + 3 linear equations in n + 3 unknowns (the n + 1 values y_i plus the two Lagrange multipliers). We can solve this system numerically using, e.g., Maple.
Example 12.1 (Euler's FDM, continued)

Example: for n = 4, solve A z = b with z = (y_0, \dots, y_4, \lambda_0, \lambda_4), where

  A =
  [  1      0      0      0      0     0   0 ]
  [  4.25  -4      0      0      0     1   0 ]
  [ -4      8.25  -4      0      0     0   0 ]
  [  0     -4      8.25  -4      0     0   0 ]
  [  0      0     -4      8.25  -4     0   0 ]
  [  0      0      0     -4      4     0   1 ]
  [  0      0      0      0      1     0   0 ]

and b = (0.00, 0.25, 0.25, 0.25, 0.25, 0.00, 0.00)^T.
Example 12.1 (Euler's FDM, continued)

The first n + 1 entries of z give y; the last two give the Lagrange multipliers \lambda_0 and \lambda_n. Solving the system for n = 4 we get

  y_1 = y_3 = 0.08492201040, \quad y_2 = 0.1126516464.
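The lecture solves the system with Maple; an equivalent sketch in plain Python (our own, using textbook Gaussian elimination with partial pivoting) reproduces the same values:

```python
# Solve the n = 4 system A z = b for Example 12.1,
# with z = (y0, ..., y4, lambda0, lambda4).
n = 4

# rows: constraint y0 = 0; stationarity for i = 0..n; constraint y4 = 0
A = [
    [1,     0,    0,    0,    0,   0, 0],   # y0 = 0
    [4.25, -4,    0,    0,    0,   1, 0],   # i = 0 (with lambda0)
    [-4,    8.25, -4,   0,    0,   0, 0],   # i = 1
    [0,    -4,    8.25, -4,   0,   0, 0],   # i = 2
    [0,     0,   -4,    8.25, -4,  0, 0],   # i = 3
    [0,     0,    0,   -4,    4,   0, 1],   # i = 4 (with lambda4)
    [0,     0,    0,    0,    1,   0, 0],   # y4 = 0
]
b = [0, 0.25, 0.25, 0.25, 0.25, 0, 0]

# Gaussian elimination with partial pivoting
m = len(b)
for k in range(m):
    p = max(range(k, m), key=lambda r: abs(A[r][k]))
    A[k], A[p] = A[p], A[k]
    b[k], b[p] = b[p], b[k]
    for r in range(k + 1, m):
        f = A[r][k] / A[k][k]
        for c in range(k, m):
            A[r][c] -= f * A[k][c]
        b[r] -= f * b[k]

# back substitution
z = [0.0] * m
for k in range(m - 1, -1, -1):
    z[k] = (b[k] - sum(A[k][c] * z[c] for c in range(k + 1, m))) / A[k][k]

print(z[1], z[3])   # y1 = y3, approximately 0.0849220104
print(z[2])         # y2, approximately 0.1126516464
```

By symmetry of the problem about x = 1/2, the solver returns z[1] = z[3], matching the values quoted above.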
Example 12.1 (results)

[Figures: the FDM solution for n = 4 plotted against the exact extremal.]
Convergence of Euler's FDM

Recall

  \hat{J}[y] = \sum_{i=0}^{n-1} F\!\left(x_i, y_i, \frac{\Delta y_i}{\Delta x}\right) \Delta x, \quad \text{with } \Delta y_i = y_{i+1} - y_i.

Only two terms in the sum involve y_i (the i-th and the (i-1)-th), so

  \frac{\partial \hat{J}}{\partial y_i}
    = F_y\!\left(x_i, y_i, \frac{\Delta y_i}{\Delta x}\right) \Delta x
      - F_{y'}\!\left(x_i, y_i, \frac{\Delta y_i}{\Delta x}\right)
      + F_{y'}\!\left(x_{i-1}, y_{i-1}, \frac{\Delta y_{i-1}}{\Delta x}\right).
Convergence of Euler's FDM (continued)

Setting the derivative to zero and dividing by \Delta x gives

  F_y\!\left(x_i, y_i, \frac{\Delta y_i}{\Delta x}\right)
    - \frac{F_{y'}\!\left(x_i, y_i, \frac{\Delta y_i}{\Delta x}\right) - F_{y'}\!\left(x_{i-1}, y_{i-1}, \frac{\Delta y_{i-1}}{\Delta x}\right)}{\Delta x} = 0.

In the limit n \to \infty we have \Delta x \to 0, and this becomes

  F_y - \frac{d}{dx} F_{y'} = 0,

which is the Euler-Lagrange equation. That is, the finite difference solution converges to the solution of the Euler-Lagrange equation.
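The convergence can also be observed empirically. This sketch (our own) solves the discretized system of Example 12.1 for increasing n, after eliminating the boundary values and multipliers to leave the tridiagonal system (2n + 1/n) y_i - n y_{i-1} - n y_{i+1} = 1/n, and reports the maximum deviation from the exact extremal:

```python
# Empirical convergence of Euler's FDM on Example 12.1.
# Interior stationarity: (2n + 1/n) y_i - n y_{i-1} - n y_{i+1} = 1/n.
import math

def fdm_solve(n):
    """Solve the interior tridiagonal system by the Thomas algorithm."""
    m = n - 1                        # unknowns y_1 .. y_{n-1}
    diag = [2 * n + 1 / n] * m
    off = -float(n)                  # constant sub/super-diagonal
    rhs = [1 / n] * m
    for i in range(1, m):            # forward elimination
        w = off / diag[i - 1]
        diag[i] -= w * off
        rhs[i] -= w * rhs[i - 1]
    y = [0.0] * m                    # back substitution
    y[-1] = rhs[-1] / diag[-1]
    for i in range(m - 2, -1, -1):
        y[i] = (rhs[i] - off * y[i + 1]) / diag[i]
    return [0.0] + y + [0.0]         # re-attach the boundary values

def exact(x):
    e = math.e
    return ((1 - e) * math.exp(x) + (e - e * e) * math.exp(-x)) / (e * e - 1) + 1

for n in (4, 8, 16, 32):
    y = fdm_solve(n)
    err = max(abs(y[i] - exact(i / n)) for i in range(n + 1))
    print(n, err)                    # the error decreases as n grows
```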
Remarks

There are many ways to improve Euler's FDM:

- use a better method of numerical quadrature (integration): the trapezoidal rule, Simpson's rule, or Romberg's method;
- use a non-uniform grid, making it finer where the solution varies more.

Alternatively, we can use a different approach that can be even better.
Ritz's Method

In Ritz's method (called Kantorovich's method when there is more than one independent variable), we approximate our functions (the extremal in particular) using a family of simple functions. Again we reduce the problem to a standard multivariable optimization problem, but now we seek the coefficients of our approximation.
Ritz's Method (continued)

Assume we can approximate y(x) by

  y(x) \approx \varphi_0(x) + c_1 \varphi_1(x) + c_2 \varphi_2(x) + \dots + c_n \varphi_n(x),

where we choose a convenient set of functions \varphi_j(x) and find the values of c_j that produce an extremal.

For a fixed-endpoint problem: choose \varphi_0(x) to satisfy the end conditions; then \varphi_j(x_0) = \varphi_j(x_1) = 0 for j = 1, 2, \dots, n. The \varphi_j can be chosen from standard sets of functions, e.g. power series, trigonometric functions, Bessel functions, etc. (but they must be linearly independent).
Ritz's Method (continued)

The procedure:

1. Select \{\varphi_j\}_{j=0}^n.
2. Approximate y by y_n(x) = \varphi_0(x) + c_1 \varphi_1(x) + \dots + c_n \varphi_n(x).
3. Approximate J[y] by J[y_n] = \int_{x_0}^{x_1} F(x, y_n, y_n')\,dx.
4. Integrate to get J[y_n] = J_n(c_1, c_2, \dots, c_n).

J_n is a known function of n variables, so we can maximize (or minimize) it as usual by

  \frac{\partial J_n}{\partial c_i} = 0 \quad \text{for all } i = 1, 2, \dots, n.
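To make the recipe concrete, here is a sketch (our own choice of basis, not from the lecture) applying Ritz's method to Example 12.1 with the single trial function \varphi_1(x) = x(1-x), which vanishes at both endpoints (so \varphi_0 \equiv 0 here). Since F is quadratic in y and y', J(c) has the form a c^2 - b c, and the stationary coefficient is c = b/(2a):

```python
# One-term Ritz approximation for Example 12.1:
# y_n(x) = c * phi(x), phi(x) = x(1 - x), so y_n(0) = y_n(1) = 0.
# Then J(c) = a c^2 - b c with a = int[ phi'^2/2 + phi^2/2 ],
# b = int[ phi ], both over [0, 1].

def simpson(f, a, b, m=1000):
    """Composite Simpson rule with m (even) subintervals."""
    h = (b - a) / m
    s = f(a) + f(b)
    for k in range(1, m):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

phi = lambda x: x * (1 - x)
dphi = lambda x: 1 - 2 * x

a = simpson(lambda x: 0.5 * dphi(x)**2 + 0.5 * phi(x)**2, 0, 1)
b = simpson(phi, 0, 1)
c = b / (2 * a)          # stationary point of the quadratic J(c)

print(c)                 # 5/11, approximately 0.4545
print(c * phi(0.5))      # y_n(1/2) = 5/44, approximately 0.1136
```

For comparison, the exact extremal has y(1/2) \approx 0.1132, so even this one-term approximation is accurate to about three decimal places at the midpoint.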
Upper Bounds

Assume the extremal of interest is a minimum; then for the extremal

  J[y] \le J[\hat{y}]

for all \hat{y} within a neighborhood of y. Assume our approximating function y_n is close enough to lie in that neighborhood; then

  J[y] \le J[y_n] = J_n[c],

so the approximation provides an upper bound on the minimum J[y]. Another way to think about it: we optimize over a smaller set of possible functions y, so we cannot get quite as good a minimum.
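The upper-bound property can be observed numerically on Example 12.1. The sketch below (our own, assuming the one-term Ritz trial y_R(x) = c\,x(1-x) with c = 5/11, the value obtained by minimizing J over c) evaluates J on both the exact extremal and the Ritz approximation:

```python
# Check the upper-bound property: J on the Ritz approximation should
# exceed J on the exact extremal y(x) = A e^x + B e^{-x} + 1.
import math

def simpson(f, a, b, m=2000):
    h = (b - a) / m
    s = f(a) + f(b)
    for k in range(1, m):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

def J(y, dy):
    # the functional of Example 12.1
    return simpson(lambda x: 0.5 * dy(x)**2 + 0.5 * y(x)**2 - y(x), 0, 1)

e = math.e
A, B = (1 - e) / (e**2 - 1), (e - e**2) / (e**2 - 1)
y_ex  = lambda x: A * math.exp(x) + B * math.exp(-x) + 1
dy_ex = lambda x: A * math.exp(x) - B * math.exp(-x)

c = 5 / 11                           # one-term Ritz coefficient
y_R  = lambda x: c * x * (1 - x)
dy_R = lambda x: c * (1 - 2 * x)

print(J(y_ex, dy_ex))                # about -0.037883 (true minimum)
print(J(y_R, dy_R))                  # about -0.037879 (slightly larger)
assert J(y_R, dy_R) > J(y_ex, dy_ex)
```

The two values agree to four decimal places, and the Ritz value sits just above the true minimum, as the bound predicts.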