Shooting methods for numerical solutions of control problems constrained by linear and nonlinear hyperbolic partial differential equations


Shooting methods for numerical solutions of control problems constrained by linear and nonlinear hyperbolic partial differential equations

by

Sung-Dae Yang

A dissertation submitted to the graduate faculty in partial fulfillment of the requirements for the degree of DOCTOR OF PHILOSOPHY

Major: Applied Mathematics

Program of Study Committee: L. Steven Hou, Major Professor; Oleg Emanouvilov; Hailiang Liu; Michael Smiley; Sung-Yell Song

Iowa State University, Ames, Iowa, 2004

Copyright © Sung-Dae Yang, 2004. All rights reserved.

Graduate College, Iowa State University

This is to certify that the doctoral dissertation of Sung-Dae Yang has met the dissertation requirements of Iowa State University.

Committee Member
Committee Member
Committee Member
Committee Member
Major Professor
For the Major Program

TABLE OF CONTENTS

ABSTRACT

1 INTRODUCTION

2 Shooting Methods for Numerical Solutions of Distributed Optimal Control Problems Constrained by Linear and Nonlinear Wave Equations
    2.1 Distributed optimal control problems for the wave equations
    2.2 The solution of the exact controllability problem as the limit of optimal control solutions
    2.3 Shooting methods for 1D control problems
        2.3.1 Algorithm for 1D control problems
        2.3.2 1D computational results
    2.4 Shooting methods in 2D control problems
        2.4.1 Algorithm for 2D control problems
        2.4.2 2D computational results

3 Shooting Methods for Numerical Solutions of Exact Controllability Problems Constrained by Linear and Nonlinear Wave Equations with Local Distributed Controls
    3.1 Exact controllability problems for the wave equations
    3.2 The solution of the exact local controllability problem as the limit of optimal control solutions
    3.3 Computational results
        3.3.1 Exact controllability problems with linear cases
        3.3.2 Exact controllability problems with nonlinear cases

4 Shooting Methods for Numerical Solutions of Exact Boundary Controllability Problems for the 1-D Wave Equation
    4.1 Introduction
    4.2 The solution of the exact controllability problem as the limit of optimal control solutions
    4.3 An optimality system of equations and a continuous shooting method
    4.4 The discrete shooting method
    4.5 Computational experiments for controllability of the linear wave equation
        4.5.1 An example with known smooth exact solution
        4.5.2 Generic examples with minimum L²(Σ)-norm boundary control
    4.6 Computational experiments for controllability of semilinear wave equations

5 Shooting Methods for Numerical Solutions of Distributed Optimal Control Problems Constrained by First Order Linear Hyperbolic Equation
    5.1 Distributed optimal control problems for first order linear hyperbolic equation
    5.2 An optimality system of equations
    5.3 Computational results

CONCLUSION

BIBLIOGRAPHY

ACKNOWLEDGMENTS

ABSTRACT

We consider shooting methods for computing approximate solutions of control problems constrained by linear or nonlinear hyperbolic partial differential equations. Optimal control problems and exact controllability problems are both studied, with the latter being approximated by the former through appropriate choices of parameters in the cost functional. The types of equations include linear wave equations, semilinear wave equations, and first order linear hyperbolic equations. The controls considered are either distributed in part of the time-space domain or of the Dirichlet type on the boundary. Each optimal control problem is reformulated as a system of equations (an optimality system) that consists of an initial value problem (IVP) for the state equations and a terminal value problem for the adjoint equations. The optimality system is then regarded as a system of an IVP for the state equations and an IVP for the adjoint equations with unknown initial conditions, and is solved by shooting methods, i.e., we attempt to find adjoint initial values such that the adjoint terminal conditions are met. The shooting methods are implemented iteratively, and Newton's method is employed to update the adjoint initial values. The convergence of the algorithms is theoretically discussed and numerically verified. Computational experiments are performed extensively for a variety of settings: different types of constraint equations in 1-D or 2-D, distributed or boundary controls, and optimal control or exact controllability.

1 INTRODUCTION

In this thesis we study numerical solutions of optimal control problems and exact controllability problems for linear and semilinear hyperbolic partial differential equations defined over a time interval [0, T] ⊂ [0, ∞) and on a bounded, C² (or convex) spatial domain Ω ⊂ R^d, d = 1, 2, or 3. The optimal control problems, with controls either distributed in part of the time-space domain or of the Dirichlet type on the boundary, are reformulated by applying Lagrange multipliers as a system of equations (an optimality system) that consists of an initial value problem for the underlying (linear or semilinear) hyperbolic partial differential equations and a terminal value problem for the adjoint hyperbolic partial differential equations. We develop shooting algorithms to solve the optimality system as follows: the optimality system is regarded as a system of an IVP for the state equations and an IVP for the adjoint equations with unknown initial conditions. Then the optimality system is solved by shooting methods, i.e., we attempt to find adjoint initial values such that the adjoint terminal conditions are met. The shooting methods are implemented iteratively, and Newton's method is employed to update the adjoint initial values.

Let target functions W ∈ L²(Ω), Z ∈ L²(Ω) or H⁻¹(Ω), U ∈ L²((0, T) × Ω) and initial conditions w ∈ L²(Ω), z ∈ L²(Ω) or H⁻¹(Ω) be given. Let f ∈ L²((0, T) × Ω) denote the distributed control and g ∈ [L²(0, T)]² denote the boundary control. We wish to find a control f or g that drives the states u and u_t to W, Z at time T and drives u to U in (0, T) × Ω.

In Chapter 2 we consider an optimal control approach with distributed controls defined on the spatial domain Ω for solving the exact controllability problems for one- and two-dimensional linear and semilinear wave equations defined on a time interval (0, T) and a bounded spatial domain Ω. Precisely, we consider the following optimal control problem: minimize the cost functional

J(u, f) = (α/2) ∫₀^T ∫_Ω K(u) dx dt + (β/2) ∫_Ω Φ₁(u(T, x)) dx + (γ/2) ∫_Ω Φ₂(u_t(T, x)) dx + (1/2) ∫₀^T ∫_Ω |f|² dx dt    (1.1)

(where α, β, γ are positive constants) subject to the wave equation

u_tt − Δu + Ψ(u) = f   in (0, T) × Ω,    (1.2)

with the homogeneous boundary condition

u|_∂Ω = 0,   (t, x) ∈ (0, T) × ∂Ω,    (1.3)

and the initial conditions

u(0, x) = w(x),   u_t(0, x) = z(x),   x ∈ Ω.    (1.4)

We first develop the shooting algorithms for 1D and 2D distributed control problems, and then run simulations with known smooth solutions and with generic examples, for both linear and semilinear cases in 1D and 2D.

In Chapter 3 we consider an optimal control approach with local distributed controls defined on a spatial subdomain Ω₁ (⊂ Ω) for solving the exact controllability problems for one-dimensional linear and semilinear wave equations defined on a time interval (0, T) and a bounded spatial domain Ω.

Precisely, we consider the following optimal control problem: minimize the cost functional

J(u, f) = (α/2) ∫₀^T ∫_Ω K(u) dx dt + (β/2) ∫_Ω Φ₁(u(T, x)) dx + (γ/2) ∫_Ω Φ₂(u_t(T, x)) dx + (1/2) ∫₀^T ∫_{Ω₁} |f|² dx dt
        = (α/2) ∫₀^T ∫_Ω K(u) dx dt + (β/2) ∫_Ω Φ₁(u(T, x)) dx + (γ/2) ∫_Ω Φ₂(u_t(T, x)) dx + (1/2) ∫₀^T ∫_Ω χ_{Ω₁} |f|² dx dt    (1.5)

subject to the wave equation

u_tt − Δu + Ψ(u) = χ_{Ω₁} f   in (0, T) × Ω,    (1.6)

with the homogeneous boundary condition

u|_∂Ω = 0,   (t, x) ∈ (0, T) × ∂Ω,    (1.7)

and the initial conditions

u(0, x) = w(x),   u_t(0, x) = z(x),   x ∈ Ω.    (1.8)

The shooting algorithms are applied to exact controllability problems with local distributed controls, both in examples with known smooth solutions and in generic examples, for linear and semilinear cases.

In Chapter 4 we consider an optimal boundary control approach for solving the exact boundary control problem for one-dimensional linear or semilinear wave equations defined on a time interval (0, T) and a spatial interval (0, X).

The exact boundary control problem we consider is to seek a boundary control g = (g_L, g_R) ∈ [L²(0, T)]² and a corresponding state u such that the following system of equations holds:

u_tt − u_xx + f(u) = V   in Q ≡ (0, T) × (0, X),
u|_{t=0} = u₀ and u_t|_{t=0} = u₁   in (0, X),
u|_{t=T} = W and u_t|_{t=T} = Z   in (0, X),
u|_{x=0} = g_L and u|_{x=X} = g_R   in (0, T),    (1.9)

where u₀ and u₁ are given initial conditions defined on (0, X), W ∈ L²(0, X) and Z ∈ H⁻¹(0, X) are prescribed terminal conditions, V is a given function defined on (0, T) × (0, X), f is a given function defined on R, and g = (g_L, g_R) ∈ [L²(0, T)]² is the boundary control. In this chapter we attempt to solve the exact controllability problem by an optimal control approach. Precisely, we consider the following optimal control problem: minimize the cost functional

J(u, g) = (σ/2) ∫₀^X |u(T, x) − W(x)|² dx + (γ/2) ∫₀^X |u_t(T, x) − Z(x)|² dx + (τ/2) ∫₀^T (|g_L|² + |g_R|²) dt    (1.10)

subject to

u_tt − u_xx + f(u) = V   in Q ≡ (0, T) × (0, X),
u|_{t=0} = u₀ and u_t|_{t=0} = u₁   in (0, X),
u|_{x=0} = g_L and u|_{x=X} = g_R   in (0, T).    (1.11)

The shooting algorithms for solving the optimal control problem will be described for the slightly more general functional

J(u, g) = (α/2) ∫₀^T ∫₀^X |u − U|² dx dt + (σ/2) ∫₀^X |u(T, x) − W(x)|² dx + (γ/2) ∫₀^X |u_t(T, x) − Z(x)|² dx + (τ/2) ∫₀^T (|g_L|² + |g_R|²) dt,    (1.12)

where the term involving (u − U) reflects our desire to match the candidate state u with a given U in the entire domain Q. Our computational experiments with the proposed numerical methods will be performed exclusively for the case α = 0.

In Chapter 5 the linear optimal control problems we study are to minimize the cost functional

J(u, f) = (α/2) ∫₀^T ∫_Ω |u − U|² dx dt + (β/2) ∫_Ω |u(T, x) − W(x)|² dx + (1/2) ∫₀^T ∫_Ω |f|² dx dt

subject to the first order linear hyperbolic equation

u_t + a u_x = f(t, x)   in (0, T) × Ω,    (1.13)

with the boundary condition

u(t, 0) = z(t),   t ∈ (0, T),    (1.14)

and the initial condition

u(0, x) = w(x),   x ∈ Ω.    (1.15)

We particularly test some examples of distributed optimal control problems with known smooth solutions and with generic initial and boundary data, taking a > 0.

Since this thesis covers several topics, the literature and the new contributions of the thesis will be discussed in the context of each chapter.
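As a point of reference for the Chapter 5 constraint (1.13)–(1.15) above, the following short sketch (not taken from the thesis; the values of a, the grid sizes, and the data w, z, f are illustrative assumptions) advances the state equation u_t + a u_x = f forward in time with a first-order upwind scheme for a > 0. This forward solve is the building block that any shooting or gradient iteration for the Chapter 5 problem performs repeatedly.

```python
import numpy as np

# Illustrative upwind solve of u_t + a u_x = f(t, x) on (0, T) x (0, 1),
# with inflow data u(t, 0) = z(t) and initial data u(0, x) = w(x), a > 0.
a, T = 1.0, 1.0
I, N = 100, 400                     # spatial cells, time steps
h, dt = 1.0 / I, T / N              # a*dt/h <= 1 for stability
x = np.linspace(0.0, 1.0, I + 1)

w = lambda x: np.sin(np.pi * x)     # assumed initial condition
z = lambda t: 0.0                   # assumed inflow boundary data
f = lambda t, x: 0.0 * x            # assumed distributed control / source

u = w(x)
for n in range(N):
    t = n * dt
    unew = u.copy()
    # first-order upwind difference for a > 0 (information travels rightward)
    unew[1:] = u[1:] - a * dt / h * (u[1:] - u[:-1]) + dt * f(t, x[1:])
    unew[0] = z(t + dt)             # inflow boundary at x = 0
    u = unew
print("u(T, x) at a few nodes:", u[::25])
```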

2 Shooting Methods for Numerical Solutions of Distributed Optimal Control Problems Constrained by Linear and Nonlinear Wave Equations

Numerical solutions of distributed optimal control problems for linear and semilinear wave equations are studied. The optimal control problem is reformulated as a system of equations (an optimality system) that consists of an initial value problem for the underlying (linear or semilinear) wave equation and a terminal value problem for the adjoint wave equation. The discretized optimality system is solved by a shooting method. The convergence properties of the numerical shooting method in the context of exact controllability are illustrated through computational experiments.

2.1 Distributed optimal control problems for the wave equations

We will study numerical methods for optimal control and controllability problems associated with the linear and nonlinear wave equations. We are particularly interested in investigating the relevance and applicability of high performance computing (HPC) for these problems. As a prototype example of optimal control problems for the wave equations, we consider the following distributed optimal control problem: choose a control f and a corresponding u such that the pair (f, u) minimizes the cost functional

J(u, f) = (α/2) ∫₀^T ∫_Ω K(u) dx dt + (β/2) ∫_Ω Φ₁(u(T, x)) dx + (γ/2) ∫_Ω Φ₂(u_t(T, x)) dx + (1/2) ∫₀^T ∫_Ω |f|² dx dt    (2.1.1)

subject to the wave equation

u_tt − Δu + Ψ(u) = f   in (0, T) × Ω,
u|_∂Ω = 0,
u(0, x) = w(x),   u_t(0, x) = z(x).    (2.1.2)

Here Ω is a bounded spatial domain in R^d (d = 1, 2, or 3) with boundary ∂Ω; u is dubbed the state, and f is the distributed control. Also, K, Φ₁, Φ₂ and Ψ are C¹ mappings (for instance, we may choose K(u) = (u − U)², Ψ(u) = 0, Ψ(u) = u³ − u or Ψ(u) = e^u, Φ₁(u) = (u(T, x) − W)², Φ₂(u_t) = (u_t(T, x) − Z)², where U, W, Z are target functions.) Using Lagrange multiplier rules one finds the following optimality system of equations that the optimal solution (f, u) must satisfy:

u_tt − Δu + Ψ(u) = f   in (0, T) × Ω,
u|_∂Ω = 0,   u(0, x) = w(x),   u_t(0, x) = z(x);

ξ_tt − Δξ + [Ψ′(u)] ξ = (α/2) K′(u)   in Q,
ξ|_∂Ω = 0,   ξ(T, x) = (γ/2) Φ₂′(u_t(T, x)),   ξ_t(T, x) = −(β/2) Φ₁′(u(T, x));

f + ξ = 0   in Q.

This system may be simplified as

u_tt − Δu + Ψ(u) = −ξ   in (0, T) × Ω,
u|_∂Ω = 0,   u(0, x) = w(x),   u_t(0, x) = z(x);
ξ_tt − Δξ + [Ψ′(u)] ξ = (α/2) K′(u)   in (0, T) × Ω,    (2.1.3)
ξ|_∂Ω = 0,   ξ(T, x) = (γ/2) Φ₂′(u_t(T, x)),   ξ_t(T, x) = −(β/2) Φ₁′(u(T, x)).

Such control problems are classical ones in the control theory literature; see, e.g., [5] for the linear case and [6] for the nonlinear case regarding the existence of optimal solutions as well as the existence of a Lagrange multiplier ξ satisfying the optimality system of equations. However, numerical methods for finding discrete (e.g., finite element and/or finite difference) solutions of the optimality system are largely limited to gradient type methods, which are sequential in nature and generally require many iterations for convergence. The optimality system involves boundary conditions at t = 0 and t = T and thus cannot be solved by marching in time. Direct solutions of the discrete optimality system, of course, are bound to be expensive computationally in 2 or 3 spatial dimensions since the problem is (d + 1) dimensional (where d is the number of spatial dimensions.) The computational algorithms we propose here are based on shooting methods for two-point boundary value problems for ordinary differential equations (ODEs); see, e.g., [1, 2, 3, 4]. The algorithms we propose are well suited for implementation on a parallel computing platform such as a massive cluster of cheap processors.

2.2 The solution of the exact controllability problem as the limit of optimal control solutions

The exact distributed control problem we consider is to seek a distributed control f ∈ L²((0, T) × Ω) and a corresponding state u such that the following system of equations holds:

u_tt − Δu + Ψ(u) = f   in Q ≡ (0, T) × Ω,
u|_{t=0} = w and u_t|_{t=0} = z   in Ω,
u|_{t=T} = W and u_t|_{t=T} = Z   in Ω,
u|_∂Ω = 0   in (0, T).    (2.2.4)

Under suitable assumptions on f and through the use of Lagrange multiplier rules, the corresponding optimal control problem reads:

minimize (2.1.1) with respect to the control f subject to (2.1.2).    (2.2.5)

In this section we establish the equivalence between the limit of optimal solutions and the minimum distributed L²-norm exact controller. We will show that if α → 0, β → ∞ and γ → ∞, then the corresponding optimal solution (u_αβγ, f_αβγ) converges weakly to the minimum distributed L²-norm solution of the exact distributed controllability problem (2.2.4). The same is also true in the discrete case.

Theorem 2.2.1  Assume that the exact distributed controllability problem (2.2.4) admits a unique minimum distributed L²-norm solution (u_e, f_e). Assume that for every (α, β, γ) ∈ R⁺ × R⁺ × R⁺ (where R⁺ is the set of all positive real numbers) there exists a solution (u_αβγ, f_αβγ) to the optimal control problem (2.2.5). Then

∥f_αβγ∥_{L²(Q)} ≤ ∥f_e∥_{L²(Q)}   for all (α, β, γ) ∈ R⁺ × R⁺ × R⁺.    (2.2.6)

Assume, in addition, that for a sequence {(α_n, β_n, γ_n)} satisfying α_n → 0, β_n → ∞ and γ_n → ∞,

u_{α_n β_n γ_n} ⇀ u in L²(Q) and Ψ(u_{α_n β_n γ_n}) ⇀ Ψ(u) in L²(0, T; [H²(Ω) ∩ H₀¹(Ω)]*).    (2.2.7)

Then

f_{α_n β_n γ_n} ⇀ f_e in L²(Q) and u_{α_n β_n γ_n} ⇀ u_e in L²(Q) as n → ∞.    (2.2.8)

Furthermore, if (2.2.7) holds for every sequence {(α_n, β_n, γ_n)} satisfying α_n → 0, β_n → ∞ and γ_n → ∞, then

f_αβγ ⇀ f_e in L²(Q) and u_αβγ ⇀ u_e in L²(Q) as α → 0, β → ∞, γ → ∞.    (2.2.9)

Proof. Since (u_αβγ, f_αβγ) is an optimal solution, we have that

(α/2)∥u_αβγ − U∥²_{L²(Q)} + (β/2)∥u_αβγ(T, ·) − W∥²_{L²(Ω)} + (γ/2)∥∂_t u_αβγ(T, ·) − Z∥²_{H⁻¹(Ω)} + (1/2)∥f_αβγ∥²_{L²(Q)}
  = J(u_αβγ, f_αβγ) ≤ J(u_e, f_e) = (1/2)∥f_e∥²_{L²(Q)},

so that (2.2.6) holds and

u_αβγ|_{t=T} → W in L²(Ω) and (∂_t u_αβγ)|_{t=T} → Z in H⁻¹(Ω) as α → 0, β → ∞, γ → ∞.    (2.2.10)

Let {(α_n, β_n, γ_n)} be the sequence in (2.2.7). Estimate (2.2.6) implies that a subsequence of {(α_n, β_n, γ_n)}, denoted by the same, satisfies

f_{α_n β_n γ_n} ⇀ f in L²(Q) and ∥f∥_{L²(Q)} ≤ ∥f_e∥_{L²(Q)}.    (2.2.11)

(u_αβγ, f_αβγ) satisfies the initial value problem in the weak form:

∫₀^T ∫_Ω u_αβγ (v_tt − Δv) dx dt + ∫₀^T ∫_Ω [Ψ(u_αβγ) − f_αβγ] v dx dt + ∫_Ω (v ∂_t u_αβγ)|_{t=T} dx − ∫_Ω v|_{t=0} z dx − ∫_Ω (u_αβγ ∂_t v)|_{t=T} dx + ∫_Ω (w ∂_t v)|_{t=0} dx = 0   for all v ∈ C²([0, T]; H²(Ω) ∩ H₀¹(Ω)).    (2.2.12)

Passing to the limit in (2.2.12) as α → 0, β → ∞, γ → ∞ and using relations (2.2.10) and (2.2.11), we obtain

∫₀^T ∫_Ω u (v_tt − Δv) dx dt + ∫₀^T ∫_Ω [Ψ(u) − f] v dx dt + ∫_Ω v|_{t=T} Z(x) dx − ∫_Ω v|_{t=0} z dx − ∫_Ω W(x) (∂_t v)|_{t=T} dx + ∫_Ω (w ∂_t v)|_{t=0} dx = 0   for all v ∈ C²([0, T]; H²(Ω) ∩ H₀¹(Ω)).

The last relation and (2.2.11) imply that (u, f) is a minimum distributed L²-norm solution to the exact control problem (2.2.4). Hence u = u_e and f = f_e, so that (2.2.8) and (2.2.9) follow from (2.2.7) and (2.2.11).

Remark  If the wave equation is linear, i.e., Ψ = 0, then assumption (2.2.7) is redundant and (2.2.9) is guaranteed to hold. Indeed, (2.2.12) implies the boundedness of {∥u_αβγ∥_{L²(Q)}}, which in turn yields (2.2.7). The uniqueness of a solution for the linear wave equation implies that (2.2.7) holds for an arbitrary sequence {(α_n, β_n, γ_n)}.

Theorem 2.2.2  Assume that

i) for every (α, β, γ) ∈ R⁺ × R⁺ × R⁺ there exists a solution (u_αβγ, f_αβγ) to the optimal control problem (2.2.5);

ii) the limit terminal conditions hold:

u_αβγ|_{t=T} → W in L²(Ω) and (∂_t u_αβγ)|_{t=T} → Z in H⁻¹(Ω) as α → 0, β → ∞, γ → ∞;    (2.2.13)

iii) the optimal solution (u_αβγ, f_αβγ) satisfies the weak limit conditions as α → 0, β → ∞, γ → ∞:

f_αβγ ⇀ f in L²(Q),   u_αβγ ⇀ u in L²(Q),    (2.2.14)

and

Ψ(u_αβγ) ⇀ Ψ(u) in L²(0, T; [H²(Ω) ∩ H₀¹(Ω)]*)    (2.2.15)

for some f ∈ L²(Q) and u ∈ L²(Q). Then (u, f) is a solution to the exact distributed controllability problem (2.2.4) with f satisfying the minimum distributed L²-norm property. Furthermore, if (2.2.4) admits a unique solution (u_e, f_e), then

f_αβγ ⇀ f_e in L²(Q) and u_αβγ ⇀ u_e in L²(Q) as α → 0, β → ∞, γ → ∞.    (2.2.16)

Proof. (u_αβγ, f_αβγ) satisfies (2.2.12). Passing to the limit in that equation as α → 0, β → ∞, γ → ∞ and using relations (2.2.13), (2.2.14) and (2.2.15), we obtain

∫₀^T ∫_Ω u (v_tt − Δv) dx dt + ∫₀^T ∫_Ω [Ψ(u) − f] v dx dt + ∫_Ω v|_{t=T} Z(x) dx − ∫_Ω v|_{t=0} z dx − ∫_Ω W(x) (∂_t v)|_{t=T} dx + ∫_Ω (w ∂_t v)|_{t=0} dx = 0   for all v ∈ C²([0, T]; H²(Ω) ∩ H₀¹(Ω)).

This implies that (u, f) is a solution to the exact distributed controllability problem (2.2.4). To prove that f satisfies the minimum distributed L²-norm property, we proceed as follows. Let (u_e, f_e) denote an exact minimum distributed L²-norm solution to the controllability problem (2.2.4). Since (u_αβγ, f_αβγ) is an optimal solution, we have that

(α/2)∥u_αβγ − U∥²_{L²(Q)} + (β/2)∥u_αβγ(T, ·) − W∥²_{L²(Ω)} + (γ/2)∥∂_t u_αβγ(T, ·) − Z∥²_{H⁻¹(Ω)} + (1/2)∥f_αβγ∥²_{L²(Q)}
  = J(u_αβγ, f_αβγ) ≤ J(u_e, f_e) = (1/2)∥f_e∥²_{L²(Q)},

so that

∥f_αβγ∥²_{L²(Q)} ≤ ∥f_e∥²_{L²(Q)}.

Passing to the limit in the last estimate we obtain

∥f∥²_{L²(Q)} ≤ ∥f_e∥²_{L²(Q)}.    (2.2.17)

Hence we conclude that (u, f) is a minimum distributed L²-norm solution to the exact distributed controllability problem (2.2.4). Furthermore, if the exact controllability problem (2.2.4) admits a unique minimum distributed L²-norm solution (u_e, f_e), then (u, f) = (u_e, f_e) and (2.2.16) follows from assumption (2.2.14).

Remark  If the wave equation is linear, i.e., Ψ = 0, then assumptions i) and (2.2.15) are redundant.

Remark  Assumptions ii) and iii) hold if f_αβγ and u_αβγ converge pointwise as α → 0, β → ∞, γ → ∞.

Remark  A practical implication of Theorem 2.2.2 is that one can investigate the exact controllability of semilinear wave equations computationally by examining the behavior of a sequence of optimal solutions (recall that exact controllability was proved only for some special classes of semilinear wave equations.) If we have found a sequence of optimal control solutions {(u_{α_n β_n γ_n}, f_{α_n β_n γ_n})} with α_n → 0, β_n → ∞, γ_n → ∞, and this sequence appears to satisfy the convergence assumptions ii) and iii), then we can confidently conclude that the underlying semilinear wave equation is exactly controllable and that the optimal solution (u_{α_n β_n γ_n}, f_{α_n β_n γ_n}) for large n provides a good approximation to the minimum distributed L²-norm exact controller (u_e, f_e).

2.3 Shooting methods for 1D control problems

The basic idea of a shooting method is to convert the solution of a two-point boundary value problem into that of an initial value problem (IVP).

The IVP corresponding to the optimality system (2.1.3) is described by

u_tt − u_xx + Ψ(u) = −ξ   in (0, T) × Ω,
u|_∂Ω = 0,   u(0, x) = w(x),   u_t(0, x) = z(x);
ξ_tt − ξ_xx + [Ψ′(u)] ξ = (α/2) K′(u)   in (0, T) × Ω,    (2.3.18)
ξ|_∂Ω = 0,   ξ(0, x) = ω(x),   ξ_t(0, x) = θ(x),

with unknown initial values ω and θ. The goal is then to choose ω and θ such that the solution (u, ξ) of the IVP (2.3.18) satisfies the terminal conditions

ξ(T, x) = (γ/2) Φ₂′(u_t(T, x))   and   ξ_t(T, x) = −(β/2) Φ₁′(u(T, x)).    (2.3.19)

A shooting method for solving the optimality system (2.1.3) consists of the following main steps:

choose initial guesses ω and θ;
for m = 1, 2, ..., M
    solve for (u, ξ) from the IVP (2.3.18);
    update ω and θ.

A criterion for updating (ω, θ) can be derived from the terminal conditions (2.3.19). A method for solving the nonlinear system (2.3.19) (regarded as a system for the unknowns ω and θ) will yield an updating formula, and here we invoke the well-known Newton's method to do so, as sketched below. Also, a discrete version of the algorithm must be used in actual implementations. For ease of exposition we describe in detail a Newton's-method-based shooting algorithm with finite difference discretizations in one space dimension. Algorithms in higher dimensions and their implementations will be briefly discussed subsequently and form a prominent part of the proposed research.
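To make the structure of these steps concrete, the following minimal sketch (illustrative only, not code from the thesis; the helper names and the toy boundary value problem are assumptions) shows a generic shooting loop: a residual function integrates the IVP forward and returns the mismatch in the terminal conditions, and Newton's method, here with a finite-difference Jacobian, updates the unknown initial data.

```python
import numpy as np

def shoot(residual, s0, tol=1e-10, max_iter=50, eps=1e-7):
    """Generic shooting loop: find s such that residual(s) = 0.

    residual(s) integrates the IVP with unknown initial data s and
    returns the mismatch in the terminal conditions.
    """
    s = np.asarray(s0, dtype=float)
    for _ in range(max_iter):
        F = residual(s)
        if np.linalg.norm(F, np.inf) < tol:
            break
        # finite-difference Jacobian of the terminal mismatch w.r.t. s
        J = np.empty((F.size, s.size))
        for j in range(s.size):
            sp = s.copy()
            sp[j] += eps
            J[:, j] = (residual(sp) - F) / eps
        s = s - np.linalg.solve(J, F)        # Newton update
    return s

# Toy example: u'' = -u, u(0) = 0, with terminal condition u(1) = 1.
# The unknown "initial value" is u'(0); the exact answer is 1/sin(1).
def toy_residual(s):
    n = 1000
    dt = 1.0 / n
    u, v = 0.0, s[0]
    for _ in range(n):                      # explicit time stepping
        u, v = u + dt * v, v - dt * u
    return np.array([u - 1.0])

print(shoot(toy_residual, np.array([1.0])))   # approx 1/sin(1) = 1.188
```

The same loop, with (ω, θ) as the unknown initial data and the discrete terminal conditions (2.3.21) below as the residual, is what Algorithm 1 implements, with the sensitivity equations supplying the exact Jacobian.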

2.3.1 Algorithm for 1D control problems

In one dimension, we discretize the spatial interval [0, X] into 0 = x₀ < x₁ < x₂ < x₃ < ··· < x_{I+1} = X with a uniform spacing h = X/(I + 1), and we divide the time horizon [0, T] into 0 = t₁ < t₂ < t₃ < ··· < t_N = T with a uniform time step length δ = T/(N − 1). We use the central differencing scheme to approximate the initial value problem (2.3.18):

u¹_i = w_i,   u²_i = w_i + δ z_i,   ξ¹_i = ω_i,   ξ²_i = ξ¹_i + δ θ_i,   i = 1, 2, ..., I;

u^{n+1}_i = −u^{n−1}_i + λ u^n_{i−1} + 2(1 − λ) u^n_i + λ u^n_{i+1} − δ² ξ^n_i − δ² Ψ(u^n_i),   i = 1, 2, ..., I,

ξ^{n+1}_i = −ξ^{n−1}_i + λ ξ^n_{i−1} + 2(1 − λ) ξ^n_i + λ ξ^n_{i+1} + δ² (α/2) K′(u^n_i) − δ² [Ψ′(u^n_i)] ξ^n_i,   i = 1, 2, ..., I;    (2.3.20)

where λ = (δ/h)² (we also use the convention that u^n_0 = ξ^n_0 = u^n_{I+1} = ξ^n_{I+1} = 0.) The gist of a discrete shooting method is to regard the discrete terminal conditions

F_{2i−1} ≡ (ξ^N_i − ξ^{N−1}_i)/δ + (β/2) Φ₁′(u^N_i) = 0,
F_{2i} ≡ ξ^N_i − (γ/2) Φ₂′((u^N_i − u^{N−1}_i)/δ) = 0,   i = 1, 2, ..., I,    (2.3.21)

as a system of 2I equations for the 2I unknown adjoint initial values ω₁, θ₁, ω₂, θ₂, ..., ω_I, θ_I. By denoting

q^n_{ij} = q^n_{ij}(ω₁, θ₁, ω₂, θ₂, ..., ω_I, θ_I) = ∂u^n_i/∂ω_j,
ρ^n_{ij} = ρ^n_{ij}(ω₁, θ₁, ω₂, θ₂, ..., ω_I, θ_I) = ∂ξ^n_i/∂ω_j,
r^n_{ij} = r^n_{ij}(ω₁, θ₁, ω₂, θ₂, ..., ω_I, θ_I) = ∂u^n_i/∂θ_j,
τ^n_{ij} = τ^n_{ij}(ω₁, θ₁, ω₂, θ₂, ..., ω_I, θ_I) = ∂ξ^n_i/∂θ_j,

we may write Newton's iteration formula as

(ω₁^new, θ₁^new, ω₂^new, θ₂^new, ..., ω_I^new, θ_I^new)^T = (ω₁, θ₁, ω₂, θ₂, ..., ω_I, θ_I)^T − [F′(ω₁, θ₁, ω₂, θ₂, ..., ω_I, θ_I)]^{−1} F(ω₁, θ₁, ω₂, θ₂, ..., ω_I, θ_I),

where the vector F and the Jacobian matrix J = F′ are defined by

F_{2i−1} = (ξ^N_i − ξ^{N−1}_i)/δ + (β/2) Φ₁′(u^N_i),
F_{2i} = ξ^N_i − (γ/2) Φ₂′((u^N_i − u^{N−1}_i)/δ),
J_{2i−1,2j−1} = (ρ^N_{ij} − ρ^{N−1}_{ij})/δ + (β/2) Φ₁″(u^N_i) q^N_{ij},
J_{2i−1,2j} = (τ^N_{ij} − τ^{N−1}_{ij})/δ + (β/2) Φ₁″(u^N_i) r^N_{ij},
J_{2i,2j−1} = ρ^N_{ij} − (γ/(2δ)) Φ₂″((u^N_i − u^{N−1}_i)/δ)(q^N_{ij} − q^{N−1}_{ij}),
J_{2i,2j} = τ^N_{ij} − (γ/(2δ)) Φ₂″((u^N_i − u^{N−1}_i)/δ)(r^N_{ij} − r^{N−1}_{ij}).

Moreover, by differentiating (2.3.20) with respect to ω_j and θ_j we obtain the equations for determining q_{ij}, r_{ij}, ρ_{ij} and τ_{ij}:

q¹_{ij} = 0,  q²_{ij} = 0,  r¹_{ij} = 0,  r²_{ij} = 0,
ρ¹_{ij} = δ_{ij},  ρ²_{ij} = δ_{ij},  τ¹_{ij} = 0,  τ²_{ij} = δ δ_{ij},   i, j = 1, 2, ..., I;

q^{n+1}_{ij} = −q^{n−1}_{ij} + λ q^n_{i−1,j} + 2(1 − λ) q^n_{ij} + λ q^n_{i+1,j} − δ² ρ^n_{ij} − δ² Ψ′(u^n_i) q^n_{ij},   i, j = 1, 2, ..., I,

r^{n+1}_{ij} = −r^{n−1}_{ij} + λ r^n_{i−1,j} + 2(1 − λ) r^n_{ij} + λ r^n_{i+1,j} − δ² τ^n_{ij} − δ² Ψ′(u^n_i) r^n_{ij},   i, j = 1, 2, ..., I,

ρ^{n+1}_{ij} = −ρ^{n−1}_{ij} + λ ρ^n_{i−1,j} + 2(1 − λ) ρ^n_{ij} + λ ρ^n_{i+1,j} + δ² (α/2) K″(u^n_i) q^n_{ij} − δ² [Ψ′(u^n_i)] ρ^n_{ij} − δ² [Ψ″(u^n_i) q^n_{ij}] ξ^n_i,   i, j = 1, 2, ..., I,    (2.3.22)

τ^{n+1}_{ij} = −τ^{n−1}_{ij} + λ τ^n_{i−1,j} + 2(1 − λ) τ^n_{ij} + λ τ^n_{i+1,j} + δ² (α/2) K″(u^n_i) r^n_{ij} − δ² [Ψ′(u^n_i)] τ^n_{ij} − δ² [Ψ″(u^n_i) r^n_{ij}] ξ^n_i,   i, j = 1, 2, ..., I,

where δ_{ij} is the Kronecker delta. Thus, we have the following Newton's-method-based shooting algorithm.

Algorithm 1 (Newton's-method-based shooting algorithm with Euler discretizations for distributed optimal control problems)

choose initial guesses ω_i and θ_i, i = 1, 2, ..., I;
% set initial conditions for u and ξ
for i = 1, 2, ..., I
    u¹_i = w_i,  u²_i = w_i + δ z_i,  ξ¹_i = ω_i,  ξ²_i = ξ¹_i + δ θ_i;
% set initial conditions for q_{ij}, r_{ij}, ρ_{ij}, τ_{ij}
for j = 1, 2, ..., I
    for i = 1, 2, ..., I
        q¹_{ij} = 0,  q²_{ij} = 0,  r¹_{ij} = 0,  r²_{ij} = 0,
        ρ¹_{ij} = 0,  ρ²_{ij} = 0,  τ¹_{ij} = 0,  τ²_{ij} = 0;
    ρ¹_{jj} = 1,  ρ²_{jj} = 1,  τ²_{jj} = δ;
% Newton iterations
for m = 1, 2, ..., M
    % solve for (u, ξ)
    for n = 2, 3, ..., N − 1
        u^{n+1}_i = −u^{n−1}_i + λ u^n_{i−1} + 2(1 − λ) u^n_i + λ u^n_{i+1} − δ² ξ^n_i − δ² Ψ(u^n_i),
        ξ^{n+1}_i = −ξ^{n−1}_i + λ ξ^n_{i−1} + 2(1 − λ) ξ^n_i + λ ξ^n_{i+1} + δ² (α/2) K′(u^n_i) − δ² [Ψ′(u^n_i)] ξ^n_i;
    % solve for q, r, ρ, τ
    for j = 1, 2, ..., I
        for n = 2, 3, ..., N − 1
            for i = 1, 2, ..., I
                q^{n+1}_{ij} = −q^{n−1}_{ij} + λ q^n_{i−1,j} + 2(1 − λ) q^n_{ij} + λ q^n_{i+1,j} − δ² ρ^n_{ij} − δ² Ψ′(u^n_i) q^n_{ij},
                r^{n+1}_{ij} = −r^{n−1}_{ij} + λ r^n_{i−1,j} + 2(1 − λ) r^n_{ij} + λ r^n_{i+1,j} − δ² τ^n_{ij} − δ² Ψ′(u^n_i) r^n_{ij},
                ρ^{n+1}_{ij} = −ρ^{n−1}_{ij} + λ ρ^n_{i−1,j} + 2(1 − λ) ρ^n_{ij} + λ ρ^n_{i+1,j} + δ² (α/2) K″(u^n_i) q^n_{ij} − δ² [Ψ′(u^n_i)] ρ^n_{ij} − δ² [Ψ″(u^n_i) q^n_{ij}] ξ^n_i,
                τ^{n+1}_{ij} = −τ^{n−1}_{ij} + λ τ^n_{i−1,j} + 2(1 − λ) τ^n_{ij} + λ τ^n_{i+1,j} + δ² (α/2) K″(u^n_i) r^n_{ij} − δ² [Ψ′(u^n_i)] τ^n_{ij} − δ² [Ψ″(u^n_i) r^n_{ij}] ξ^n_i;
    % (we build into the algorithm the conventions u^n_0 = ξ^n_0 = u^n_{I+1} = ξ^n_{I+1} = 0 and
    %  q^n_{0j} = r^n_{0j} = ρ^n_{0j} = τ^n_{0j} = q^n_{I+1,j} = r^n_{I+1,j} = ρ^n_{I+1,j} = τ^n_{I+1,j} = 0.)
    % evaluate F and F′
    for i = 1, 2, ..., I
        F_{2i−1} = (ξ^N_i − ξ^{N−1}_i)/δ + (β/2) Φ₁′(u^N_i),
        F_{2i} = ξ^N_i − (γ/2) Φ₂′((u^N_i − u^{N−1}_i)/δ);
        for j = 1, 2, ..., I
            J_{2i−1,2j−1} = (ρ^N_{ij} − ρ^{N−1}_{ij})/δ + (β/2) Φ₁″(u^N_i) q^N_{ij},
            J_{2i−1,2j} = (τ^N_{ij} − τ^{N−1}_{ij})/δ + (β/2) Φ₁″(u^N_i) r^N_{ij},
            J_{2i,2j−1} = ρ^N_{ij} − (γ/(2δ)) Φ₂″((u^N_i − u^{N−1}_i)/δ)(q^N_{ij} − q^{N−1}_{ij}),
            J_{2i,2j} = τ^N_{ij} − (γ/(2δ)) Φ₂″((u^N_i − u^{N−1}_i)/δ)(r^N_{ij} − r^{N−1}_{ij});
    solve Jc = −F by Gaussian elimination;
    for i = 1, 2, ..., I
        ω_i^new = ω_i + c_{2i−1},   θ_i^new = θ_i + c_{2i};
    if max_i |ω_i^new − ω_i| + max_i |θ_i^new − θ_i| < tol, stop;
    otherwise, reset ω_i = ω_i^new and θ_i = θ_i^new, i = 1, 2, ..., I.
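For the linear case (Ψ = 0) with K(u) = (u − U)², Φ₁(u) = (u − W)² and Φ₂(v) = (v − Z)², the whole procedure fits in a short script. The sketch below is illustrative rather than the thesis code: its data mimic Example 1 of Section 2.3.2, but the grid sizes and the weights α, β, γ are assumptions, and, to stay short, the Jacobian of the terminal residual is formed column-by-column by finite differences instead of by the sensitivity equations (2.3.22); the time marching follows (2.3.20) and the Newton update mirrors Algorithm 1.

```python
import numpy as np

# Minimal sketch of the discrete shooting method for the 1D linear case.
I, N, X, T = 19, 81, 1.0, 1.0                 # interior nodes, time levels (assumed)
h, dt = X / (I + 1), T / (N - 1)
lam = (dt / h) ** 2                            # (dt/h)^2 <= 1 for stability
alpha, beta, gamma = 1.0, 100.0, 100.0         # assumed weights in the cost functional
x = np.linspace(h, X - h, I)                   # interior grid points

U = lambda t: np.sin(2 * np.pi * x) * np.sin(2 * np.pi * t)
W = np.sin(2 * np.pi * x) * np.sin(2 * np.pi * T)        # = 0 when T = 1
Z = 2 * np.pi * np.sin(2 * np.pi * x) * np.cos(2 * np.pi * T)
w = np.zeros(I)                                # u(0, x)
z = 2 * np.pi * np.sin(2 * np.pi * x)          # u_t(0, x)

def lap(v):
    """Discrete Laplacian with homogeneous Dirichlet boundary values."""
    vp = np.concatenate(([0.0], v, [0.0]))
    return vp[:-2] - 2.0 * vp[1:-1] + vp[2:]

def terminal_residual(s):
    """March the coupled IVPs (2.3.20) forward; return the mismatch (2.3.21)."""
    omega, theta = s[:I], s[I:]
    u_prev, u_cur = w.copy(), w + dt * z
    xi_prev, xi_cur = omega.copy(), omega + dt * theta
    for n in range(1, N - 1):
        u_next = -u_prev + 2 * u_cur + lam * lap(u_cur) - dt**2 * xi_cur
        xi_next = (-xi_prev + 2 * xi_cur + lam * lap(xi_cur)
                   + dt**2 * alpha * (u_cur - U(n * dt)))
        u_prev, u_cur = u_cur, u_next
        xi_prev, xi_cur = xi_cur, xi_next
    ut_T = (u_cur - u_prev) / dt
    F1 = (xi_cur - xi_prev) / dt + beta * (u_cur - W)    # xi_t(T) + (beta/2) Phi_1'
    F2 = xi_cur - gamma * (ut_T - Z)                     # xi(T) - (gamma/2) Phi_2'
    return np.concatenate([F1, F2])

# Newton iteration on the unknown adjoint initial data (omega, theta).
s = np.zeros(2 * I)
for it in range(20):
    F = terminal_residual(s)
    if np.linalg.norm(F, np.inf) < 1e-8:
        break
    J = np.empty((2 * I, 2 * I))
    eps = 1e-6
    for j in range(2 * I):                               # finite-difference Jacobian
        sp = s.copy(); sp[j] += eps
        J[:, j] = (terminal_residual(sp) - F) / eps
    s -= np.linalg.solve(J, F)

print("max |omega|, max |theta|:", np.abs(s[:I]).max(), np.abs(s[I:]).max())
print("terminal residual:", np.linalg.norm(terminal_residual(s), np.inf))
```

Because the continuous optimum for these data is f = −ξ = 0, the recovered ω and θ should come out close to zero, up to discretization error; and since the residual is affine in (ω, θ) in the linear case, the Newton loop converges in essentially one step. For semilinear problems the same loop is simply repeated until the terminal residual is below a tolerance.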

2.3.2 1D computational results

We consider examples of the following type: seek the pair (u, f) that minimizes the cost functional

J(u, f) = (α/2) ∫₀^T ∫_Ω |u − U|² dx dt + (β/2) ∫_Ω |u(T, x) − W(x)|² dx + (1/2) ∫₀^T ∫_Ω |f|² dx dt

subject to the wave equation

u_tt − u_xx + Ψ(u) = f   in (0, T) × Ω,
u|_∂Ω = 0,
u(0, x) = g(x),   u_t(0, x) = h(x).    (2.3.23)

Example 1 (linear case)  Ψ(u) = 0, T = 1, Ω = [0, 1]. For the given target functions

W(x) = sin(2πx) sin(2πT),   U(t, x) = sin(2πx) sin(2πt),    (2.3.24)

it can be verified that the exact optimal solution u and a corresponding Lagrange multiplier ξ determined by the optimality system (2.1.3) are

u(t, x) = sin(2πx) sin(2πt),   ξ(t, x) = 0.    (2.3.25)

[Figure 2.1  Optimal solution u and targets W, U at several fixed times: optimal solution u(t, x); target functions W(x), U(t, x); exact optimal solution.]

[Figure 2.2  Optimal solution u and targets W, U at several fixed times: optimal solution u(t, x); target functions W(x), U(t, x); exact optimal solution.]

Example 2 (linear case)  Ψ(u) = 0, T = 1, Ω = [0, 1], with target functions W(x) = 0 and U(t, x) = 0.

[Figure 2.3  Optimal solution u and targets W, U at several fixed times: optimal solution u(t, x); target functions W(x), U(t, x).]

Example 3 (nonlinear case)  Ψ(u) = u³ − u, T = 1, Ω = [0, 1]. The target functions are

W(x) = sin(πx) cos(πT),   U(t, x) = sin(πx) cos(πt).    (2.3.26)

[Figure 2.4  Optimal solution u and targets W, U at several fixed times: optimal solution u(t, x); target functions W(x), U(t, x).]

Example 4 (nonlinear case)  Ψ(u) = e^u, T = 1, Ω = [0, 1]. The target functions are

W(x) = sin(πx) cos(T),   U(t, x) = sin(πx) cos(t).    (2.3.27)

[Figure 2.5  Optimal solution u and targets W, U at several fixed times: optimal solution u(t, x); target functions W(x), U(t, x).]

Example 5 (Sine-Gordon equation)  Ψ(u) = sin u, T = 1, Ω = [0, 1]. The target functions are

W(x) = sin(πx) cos(T),   U(t, x) = sin(πx) cos(t).    (2.3.28)

[Figure 2.6  Optimal solution u and targets W, U at several fixed times: optimal solution u(t, x); target functions W(x), U(t, x).]

2.4 Shooting methods in 2D control problems

The basic idea of a shooting method in 2D is exactly the same as in 1D. In the two-dimensional case, we replace the second derivative u_xx by the Laplacian Δu = u_xx + u_yy. The Lagrange multiplier rule provides the same optimality system of equations as in the previous section, up to this replacement. Thus the IVP corresponding to the optimality system (2.1.3) is described by

u_tt − Δu + Ψ(u) = −ξ   in (0, T) × Ω,
u|_∂Ω = 0,   u(0, x) = w(x),   u_t(0, x) = z(x);
ξ_tt − Δξ + [Ψ′(u)] ξ = (α/2) K′(u)   in (0, T) × Ω,    (2.4.29)
ξ|_∂Ω = 0,   ξ(0, x) = ω(x),   ξ_t(0, x) = θ(x),

with unknown initial values ω and θ. The goal is then to choose ω and θ such that the solution (u, ξ) of the IVP (2.4.29) satisfies the terminal conditions

ξ(T, x) = (γ/2) Φ₂′(u_t(T, x))   and   ξ_t(T, x) = −(β/2) Φ₁′(u(T, x)).    (2.4.30)

2.4.1 Algorithm for 2D control problems

In two dimensions, we discretize the spatial intervals [0, X] and [0, Y] into 0 = x₀ < x₁ < x₂ < x₃ < ··· < x_{I+1} = X and 0 = y₀ < y₁ < y₂ < y₃ < ··· < y_{J+1} = Y with uniform spacings h_x = X/(I + 1) and h_y = Y/(J + 1), respectively, and we divide the time horizon [0, T] into 0 = t₁ < t₂ < t₃ < ··· < t_N = T with a uniform time step length δ = T/(N − 1). We use the central difference scheme to approximate the initial value problem (2.4.29): for i = 1, 2, ..., I, j = 1, 2, ..., J,

u¹_{ij} = w_{ij},   u²_{ij} = w_{ij} + δ z_{ij},   ξ¹_{ij} = ω_{ij},   ξ²_{ij} = ξ¹_{ij} + δ θ_{ij};

u^{n+1}_{ij} = −u^{n−1}_{ij} + λ_x (u^n_{i−1,j} + u^n_{i+1,j}) + 2(1 − λ_x − λ_y) u^n_{ij} + λ_y (u^n_{i,j−1} + u^n_{i,j+1}) − δ² ξ^n_{ij} − δ² Ψ(u^n_{ij}),    (2.4.31)

ξ^{n+1}_{ij} = −ξ^{n−1}_{ij} + λ_x (ξ^n_{i−1,j} + ξ^n_{i+1,j}) + 2(1 − λ_x − λ_y) ξ^n_{ij} + λ_y (ξ^n_{i,j−1} + ξ^n_{i,j+1}) + δ² (α/2) K′(u^n_{ij}) − δ² [Ψ′(u^n_{ij})] ξ^n_{ij},

where λ_x = (δ/h_x)² and λ_y = (δ/h_y)² (we also use the convention that u^n_{ij} = ξ^n_{ij} = 0 if i = 0, i = I + 1, j = 0 or j = J + 1.) The gist of a discrete shooting method is to regard the discrete terminal conditions, for i = 1, 2, ..., I, j = 1, 2, ..., J,

F_{2(ij)−1} ≡ (ξ^N_{ij} − ξ^{N−1}_{ij})/δ + (β/2) Φ₁′(u^N_{ij}) = 0,
F_{2(ij)} ≡ ξ^N_{ij} − (γ/2) Φ₂′((u^N_{ij} − u^{N−1}_{ij})/δ) = 0,    (2.4.32)

where (ij) is a reordering of the interior nodes with respect to X and Y (boundary points excluded), as a system of equations for the unknown adjoint initial values. Let ω_{(ij)} ∈ {ω₁, ω₂, ..., ω_{IJ}} and θ_{(ij)} ∈ {θ₁, θ₂, ..., θ_{IJ}}, where IJ = I · J. By denoting

q^n_{(ij)k} = q^n_{(ij)k}(ω₁, θ₁, ω₂, θ₂, ..., ω_{IJ}, θ_{IJ}) = ∂u^n_{ij}/∂ω_k,
r^n_{(ij)k} = r^n_{(ij)k}(ω₁, θ₁, ω₂, θ₂, ..., ω_{IJ}, θ_{IJ}) = ∂u^n_{ij}/∂θ_k,
ρ^n_{(ij)k} = ρ^n_{(ij)k}(ω₁, θ₁, ω₂, θ₂, ..., ω_{IJ}, θ_{IJ}) = ∂ξ^n_{ij}/∂ω_k,
τ^n_{(ij)k} = τ^n_{(ij)k}(ω₁, θ₁, ω₂, θ₂, ..., ω_{IJ}, θ_{IJ}) = ∂ξ^n_{ij}/∂θ_k,

we may write Newton's iteration formula as

(ω₁^new, θ₁^new, ω₂^new, θ₂^new, ..., ω_{IJ}^new, θ_{IJ}^new)^T = (ω₁, θ₁, ω₂, θ₂, ..., ω_{IJ}, θ_{IJ})^T − [F′(ω₁, θ₁, ω₂, θ₂, ..., ω_{IJ}, θ_{IJ})]^{−1} F(ω₁, θ₁, ω₂, θ₂, ..., ω_{IJ}, θ_{IJ}),

where the vector F and the Jacobian matrix J = F′ are defined by

F_{2(ij)−1} ≡ (ξ^N_{ij} − ξ^{N−1}_{ij})/δ + (β/2) Φ₁′(u^N_{ij}),
F_{2(ij)} ≡ ξ^N_{ij} − (γ/2) Φ₂′((u^N_{ij} − u^{N−1}_{ij})/δ),
J_{2(ij)−1,2k−1} = (ρ^N_{(ij)k} − ρ^{N−1}_{(ij)k})/δ + (β/2) Φ₁″(u^N_{ij}) q^N_{(ij)k},
J_{2(ij)−1,2k} = (τ^N_{(ij)k} − τ^{N−1}_{(ij)k})/δ + (β/2) Φ₁″(u^N_{ij}) r^N_{(ij)k},
J_{2(ij),2k−1} = ρ^N_{(ij)k} − (γ/(2δ)) Φ₂″((u^N_{ij} − u^{N−1}_{ij})/δ)(q^N_{(ij)k} − q^{N−1}_{(ij)k}),
J_{2(ij),2k} = τ^N_{(ij)k} − (γ/(2δ)) Φ₂″((u^N_{ij} − u^{N−1}_{ij})/δ)(r^N_{(ij)k} − r^{N−1}_{(ij)k}).

Moreover, by differentiating (2.4.31) with respect to ω_k and θ_k we obtain the equations for determining q_{(ij)k}, r_{(ij)k}, ρ_{(ij)k} and τ_{(ij)k}:

q¹_{(ij)k} = 0,  q²_{(ij)k} = 0,  r¹_{(ij)k} = 0,  r²_{(ij)k} = 0,
ρ¹_{(ij)k} = δ_{(ij)k},  ρ²_{(ij)k} = δ_{(ij)k},  τ¹_{(ij)k} = 0,  τ²_{(ij)k} = δ δ_{(ij)k},   i = 1, ..., I, j = 1, ..., J;

q^{n+1}_{(ij)k} = −q^{n−1}_{(ij)k} + λ_x (q^n_{(i−1,j)k} + q^n_{(i+1,j)k}) + 2(1 − λ_x − λ_y) q^n_{(ij)k} + λ_y (q^n_{(i,j−1)k} + q^n_{(i,j+1)k}) − δ² ρ^n_{(ij)k} − δ² Ψ′(u^n_{ij}) q^n_{(ij)k},

r^{n+1}_{(ij)k} = −r^{n−1}_{(ij)k} + λ_x (r^n_{(i−1,j)k} + r^n_{(i+1,j)k}) + 2(1 − λ_x − λ_y) r^n_{(ij)k} + λ_y (r^n_{(i,j−1)k} + r^n_{(i,j+1)k}) − δ² τ^n_{(ij)k} − δ² Ψ′(u^n_{ij}) r^n_{(ij)k},    (2.4.33)

ρ^{n+1}_{(ij)k} = −ρ^{n−1}_{(ij)k} + λ_x (ρ^n_{(i−1,j)k} + ρ^n_{(i+1,j)k}) + 2(1 − λ_x − λ_y) ρ^n_{(ij)k} + λ_y (ρ^n_{(i,j−1)k} + ρ^n_{(i,j+1)k}) + δ² (α/2) K″(u^n_{ij}) q^n_{(ij)k} − δ² [Ψ′(u^n_{ij})] ρ^n_{(ij)k} − δ² [Ψ″(u^n_{ij}) q^n_{(ij)k}] ξ^n_{ij},

τ^{n+1}_{(ij)k} = −τ^{n−1}_{(ij)k} + λ_x (τ^n_{(i−1,j)k} + τ^n_{(i+1,j)k}) + 2(1 − λ_x − λ_y) τ^n_{(ij)k} + λ_y (τ^n_{(i,j−1)k} + τ^n_{(i,j+1)k}) + δ² (α/2) K″(u^n_{ij}) r^n_{(ij)k} − δ² [Ψ′(u^n_{ij})] τ^n_{(ij)k} − δ² [Ψ″(u^n_{ij}) r^n_{(ij)k}] ξ^n_{ij},

for i = 1, ..., I, j = 1, ..., J, where δ_{(ij)k} is the Kronecker delta. Thus, we have the following Newton's-method-based shooting algorithm.

Algorithm 2 (Newton's-method-based shooting algorithm with Euler discretizations for distributed optimal control problems in 2D)

choose initial guesses ω_{ij} and θ_{ij}, i = 1, ..., I, j = 1, ..., J;
% set initial conditions for u and ξ
for i = 1, ..., I, j = 1, ..., J
    u¹_{ij} = w_{ij},  u²_{ij} = w_{ij} + δ z_{ij},  ξ¹_{ij} = ω_{ij},  ξ²_{ij} = ξ¹_{ij} + δ θ_{ij};
% set initial conditions for q_{(ij)k}, r_{(ij)k}, ρ_{(ij)k}, τ_{(ij)k}
for k = 1, 2, ..., IJ
    for i = 1, ..., I, j = 1, ..., J
        q¹_{(ij)k} = 0,  q²_{(ij)k} = 0,  r¹_{(ij)k} = 0,  r²_{(ij)k} = 0,
        ρ¹_{(ij)k} = 0,  ρ²_{(ij)k} = 0,  τ¹_{(ij)k} = 0,  τ²_{(ij)k} = 0;
    ρ¹_{kk} = 1,  ρ²_{kk} = 1,  τ²_{kk} = δ;
% Newton iterations
for m = 1, 2, ..., M
    % solve for (u, ξ)
    for n = 2, 3, ..., N − 1
        u^{n+1}_{ij} = −u^{n−1}_{ij} + λ_x (u^n_{i−1,j} + u^n_{i+1,j}) + 2(1 − λ_x − λ_y) u^n_{ij} + λ_y (u^n_{i,j−1} + u^n_{i,j+1}) − δ² ξ^n_{ij} − δ² Ψ(u^n_{ij}),
        ξ^{n+1}_{ij} = −ξ^{n−1}_{ij} + λ_x (ξ^n_{i−1,j} + ξ^n_{i+1,j}) + 2(1 − λ_x − λ_y) ξ^n_{ij} + λ_y (ξ^n_{i,j−1} + ξ^n_{i,j+1}) + δ² (α/2) K′(u^n_{ij}) − δ² [Ψ′(u^n_{ij})] ξ^n_{ij};
    % solve for q, r, ρ, τ
    for k = 1, 2, ..., IJ
        for n = 2, 3, ..., N − 1
            for i = 1, ..., I, j = 1, ..., J
                update q^{n+1}_{(ij)k}, r^{n+1}_{(ij)k}, ρ^{n+1}_{(ij)k}, τ^{n+1}_{(ij)k} by (2.4.33);
    % (we build into the algorithm the convention that u, ξ, q, r, ρ, τ vanish at the
    %  boundary nodes i = 0, i = I + 1, j = 0, j = J + 1.)
    % evaluate F and F′
    for i = 1, ..., I, j = 1, ..., J
        F_{2(ij)−1} = (ξ^N_{ij} − ξ^{N−1}_{ij})/δ + (β/2) Φ₁′(u^N_{ij}),
        F_{2(ij)} = ξ^N_{ij} − (γ/2) Φ₂′((u^N_{ij} − u^{N−1}_{ij})/δ);
        for k = 1, 2, ..., IJ
            J_{2(ij)−1,2k−1} = (ρ^N_{(ij)k} − ρ^{N−1}_{(ij)k})/δ + (β/2) Φ₁″(u^N_{ij}) q^N_{(ij)k},
            J_{2(ij)−1,2k} = (τ^N_{(ij)k} − τ^{N−1}_{(ij)k})/δ + (β/2) Φ₁″(u^N_{ij}) r^N_{(ij)k},
            J_{2(ij),2k−1} = ρ^N_{(ij)k} − (γ/(2δ)) Φ₂″((u^N_{ij} − u^{N−1}_{ij})/δ)(q^N_{(ij)k} − q^{N−1}_{(ij)k}),
            J_{2(ij),2k} = τ^N_{(ij)k} − (γ/(2δ)) Φ₂″((u^N_{ij} − u^{N−1}_{ij})/δ)(r^N_{(ij)k} − r^{N−1}_{(ij)k});
    solve Jc = −F by Gaussian elimination;
    for i = 1, ..., I, j = 1, ..., J
        ω_{ij}^new = ω_{ij} + c_{2(ij)−1},   θ_{ij}^new = θ_{ij} + c_{2(ij)};
    if max_{ij} |ω_{ij}^new − ω_{ij}| + max_{ij} |θ_{ij}^new − θ_{ij}| < tol, stop;
    otherwise, reset ω_{ij} = ω_{ij}^new and θ_{ij} = θ_{ij}^new, i = 1, ..., I, j = 1, ..., J.
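The only structural change from the 1D case is the spatial stencil. A minimal sketch of the interior update used in each time step (illustrative, not the thesis code; it assumes homogeneous Dirichlet values stored in the ghost rows and columns of the arrays):

```python
import numpy as np

def step_2d(u_prev, u_cur, source, lam_x, lam_y, dt):
    """One explicit step of u_tt = u_xx + u_yy + source on a uniform grid.

    u_prev, u_cur, source are (I+2) x (J+2) arrays whose first/last rows and
    columns hold the homogeneous Dirichlet boundary values (zeros).
    """
    u_next = np.zeros_like(u_cur)
    u_next[1:-1, 1:-1] = (
        -u_prev[1:-1, 1:-1]
        + lam_x * (u_cur[:-2, 1:-1] + u_cur[2:, 1:-1])
        + 2.0 * (1.0 - lam_x - lam_y) * u_cur[1:-1, 1:-1]
        + lam_y * (u_cur[1:-1, :-2] + u_cur[1:-1, 2:])
        + dt**2 * source[1:-1, 1:-1]
    )
    return u_next
```

Here `source` stands for −ξⁿ − Ψ(uⁿ) in the state update and for (α/2)K′(uⁿ) − Ψ′(uⁿ)ξⁿ in the adjoint update, matching (2.4.31).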

2.4.2 2D computational results

For the following examples we provide several graphs at fixed time and fixed y-coordinates in order to observe the computational results easily (so-called snapshots). For each fixed time, we present three graphs with y-coordinates 0.25, 0.5, and 0.75, from left to right.

Example 1 (linear case)  Ψ(u) = 0, T = 1, Ω = [0, 1] × [0, 1]. The target functions are

W(x, y) = x(x − 1) y(y − 1) cos 1,   U(t, x, y) = x(x − 1) y(y − 1) cos t.    (2.4.34)

[Figure 2.7  Optimal solution u and targets W, U at several fixed times and y-coordinates: optimal solution u(t, x, y); target functions W(x, y), U(t, x, y).]

[Figure 2.8  Optimal solution u and targets W, U at several fixed times and y-coordinates: optimal solution u(t, x, y); target functions W(x, y), U(t, x, y).]

Example 2 (nonlinear case)  Ψ(u) = u³ − u, T = 1, Ω = [0, 1] × [0, 1]. The target functions are

W(x, y) = cos(π) sin(πx) sin(πy),   U(t, x, y) = cos(πt) sin(πx) sin(πy).    (2.4.35)

[Figure 2.9  Optimal solution u and targets W, U at several fixed times and y-coordinates: optimal solution u(t, x, y); target functions W(x, y), U(t, x, y).]

3 Shooting Methods for Numerical Solutions of Exact Controllability Problems Constrained by Linear and Nonlinear Wave Equations with Local Distributed Controls

Numerical solutions of exact controllability problems for linear and semilinear wave equations with local distributed controls are studied. In Chapter 2 we introduced the optimality system of equations and the corresponding shooting algorithm. Here the nonhomogeneous term in the state equation is multiplied by a characteristic function, which renders the problems local controllability cases. The algorithm for these problems is a simple modification of the algorithm in Chapter 2, and so we omit a separate algorithm section. The convergence properties of the numerical shooting method in the context of exact controllability are illustrated through computational experiments.

3.1 Exact controllability problems for the wave equations

We will study numerical methods for optimal control and exact controllability problems with local distributed controls associated with the linear and nonlinear wave equations. As before, our concern is to investigate the relevance and applicability of high performance computing (HPC) for these problems. As a prototype example of optimal control problems for the wave equations with local distributed controls, we consider the following distributed optimal control problem: choose a control f and a corresponding u such that the pair (f, u) minimizes the cost functional

J(u, f) = (α/2) ∫₀^T ∫_Ω K(u) dx dt + (β/2) ∫_Ω Φ₁(u(T, x)) dx + (γ/2) ∫_Ω Φ₂(u_t(T, x)) dx + (1/2) ∫₀^T ∫_{Ω₁} |f|² dx dt
        = (α/2) ∫₀^T ∫_Ω K(u) dx dt + (β/2) ∫_Ω Φ₁(u(T, x)) dx + (γ/2) ∫_Ω Φ₂(u_t(T, x)) dx + (1/2) ∫₀^T ∫_Ω χ_{Ω₁} |f|² dx dt    (3.1.1)

subject to the wave equation

u_tt − Δu + Ψ(u) = χ_{Ω₁} f   in (0, T) × Ω,
u|_∂Ω = 0,
u(0, x) = w(x),   u_t(0, x) = z(x).    (3.1.2)

Here Ω is a bounded spatial domain in R^d (d = 1, 2, or 3) with boundary ∂Ω, and Ω₁ ⊂ Ω; u is dubbed the state, and f is the distributed control. Also, K, Φ₁, Φ₂ and Ψ are C¹ mappings (for instance, we may choose K(u) = (u − U)², Ψ(u) = 0, Ψ(u) = u³ − u or Ψ(u) = sin u, Φ₁(u) = (u(T, x) − W)², Φ₂(u_t) = (u_t(T, x) − Z)², where U, W, Z are target functions.) Using Lagrange multiplier rules one finds the following optimality system of equations that the optimal solution (f, u) must satisfy:

u_tt − Δu + Ψ(u) = χ_{Ω₁} f   in (0, T) × Ω,
u|_∂Ω = 0,   u(0, x) = w(x),   u_t(0, x) = z(x);

ξ_tt − Δξ + [Ψ′(u)] ξ = (α/2) K′(u)   in Q,
ξ|_∂Ω = 0,   ξ(T, x) = (γ/2) Φ₂′(u_t(T, x)),   ξ_t(T, x) = −(β/2) Φ₁′(u(T, x));

χ_{Ω₁} f + χ_{Ω₁} ξ = 0   in Q.

This system may be simplified as

u_tt − Δu + Ψ(u) = −χ_{Ω₁} ξ   in (0, T) × Ω,
u|_∂Ω = 0,   u(0, x) = w(x),   u_t(0, x) = z(x);
ξ_tt − Δξ + [Ψ′(u)] ξ = (α/2) K′(u)   in (0, T) × Ω,    (3.1.3)
ξ|_∂Ω = 0,   ξ(T, x) = (γ/2) Φ₂′(u_t(T, x)),   ξ_t(T, x) = −(β/2) Φ₁′(u(T, x)).

Such control problems are classical ones in the control theory literature; see, e.g., [5] for the linear case and [6] for the nonlinear case regarding the existence of optimal solutions as well as the existence of a Lagrange multiplier ξ satisfying the optimality system of equations. However, numerical methods for finding discrete (e.g., finite element and/or finite difference) solutions of the optimality system are largely limited to gradient type methods, which are sequential in nature and generally require many iterations for convergence. The optimality system involves boundary conditions at t = 0 and t = T and thus cannot be solved by marching in time. Direct solutions of the discrete optimality system, of course, are bound to be expensive computationally in 2 or 3 spatial dimensions since the problem is (d + 1) dimensional (where d is the number of spatial dimensions.)
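Concretely, the only change relative to the discrete shooting method of Chapter 2 is that the coupling term in the state update is masked by the characteristic function of the control subdomain. A minimal sketch of this modification to the 1D time step (illustrative; the particular subdomain below is an assumption):

```python
import numpy as np

# Grid as in the Chapter 2 sketch; Omega_1 = [0.45, 0.55] is an assumed
# control subdomain used only for illustration.
I, X = 99, 1.0
h = X / (I + 1)
x = np.linspace(h, X - h, I)
chi = ((x >= 0.45) & (x <= 0.55)).astype(float)   # characteristic function of Omega_1

def state_step(u_prev, u_cur, xi_cur, lam, dt, lap):
    # Local distributed control: the adjoint forces the state only on Omega_1,
    # i.e. the source term -xi is replaced by -chi * xi.
    return -u_prev + 2.0 * u_cur + lam * lap(u_cur) - dt**2 * chi * xi_cur
```

The adjoint update and the terminal conditions are unchanged, and the recovered optimal control is f = −ξ on Ω₁ (and zero elsewhere).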

3.2 The solution of the exact local controllability problem as the limit of optimal control solutions

The exact distributed local control problem we consider is to seek a distributed local control f ∈ L²((0, T) × Ω₁), where Ω₁ ⊂ Ω, and a corresponding state u such that the following system of equations holds:

u_tt − Δu + Ψ(u) = χ_{Ω₁} f   in Q ≡ (0, T) × Ω,
u|_{t=0} = w and u_t|_{t=0} = z   in Ω,
u|_{t=T} = W and u_t|_{t=T} = Z   in Ω,
u|_∂Ω = 0   in (0, T).    (3.2.4)

Under suitable assumptions on f and through the use of Lagrange multiplier rules, the corresponding optimal local control problem reads:

minimize (3.1.1) with respect to the control f subject to (3.1.2).    (3.2.5)

In this section we establish the equivalence between the limit of optimal solutions and the minimum distributed L²-norm exact local controller. We will show that if α → 0, β → ∞ and γ → ∞, then the corresponding optimal solution (u_αβγ, f_αβγ) converges weakly to the minimum distributed L²-norm solution of the exact distributed local controllability problem (3.2.4). The same is also true in the discrete case. Let Q₁ = (0, T) × Ω₁.

Theorem 3.2.1  Assume that the exact distributed local controllability problem (3.2.4) admits a unique minimum distributed L²-norm solution (u_e, f_e). Assume that for every (α, β, γ) ∈ R⁺ × R⁺ × R⁺ (where R⁺ is the set of all positive real numbers) there exists a solution (u_αβγ, f_αβγ) to the optimal local control problem (3.2.5). Then

∥f_αβγ∥_{L²(Q₁)} ≤ ∥f_e∥_{L²(Q₁)}   for all (α, β, γ) ∈ R⁺ × R⁺ × R⁺.    (3.2.6)

Assume, in addition, that for a sequence {(α_n, β_n, γ_n)} satisfying α_n → 0, β_n → ∞ and γ_n → ∞,

u_{α_n β_n γ_n} ⇀ u in L²(Q) and Ψ(u_{α_n β_n γ_n}) ⇀ Ψ(u) in L²(0, T; [H²(Ω) ∩ H₀¹(Ω)]*).    (3.2.7)

Then

f_{α_n β_n γ_n} ⇀ f_e in L²(Q₁) and u_{α_n β_n γ_n} ⇀ u_e in L²(Q) as n → ∞.    (3.2.8)

Furthermore, if (3.2.7) holds for every sequence {(α_n, β_n, γ_n)} satisfying α_n → 0, β_n → ∞ and γ_n → ∞, then

f_αβγ ⇀ f_e in L²(Q₁) and u_αβγ ⇀ u_e in L²(Q) as α → 0, β → ∞, γ → ∞.    (3.2.9)

Proof. Since (u_αβγ, f_αβγ) is an optimal solution, we have that

(α/2)∥u_αβγ − U∥²_{L²(Q)} + (β/2)∥u_αβγ(T, ·) − W∥²_{L²(Ω)} + (γ/2)∥∂_t u_αβγ(T, ·) − Z∥²_{H⁻¹(Ω)} + (1/2)∥f_αβγ∥²_{L²(Q₁)}
  = J(u_αβγ, f_αβγ) ≤ J(u_e, f_e) = (1/2)∥f_e∥²_{L²(Q₁)},

so that (3.2.6) holds and

u_αβγ|_{t=T} → W in L²(Ω) and (∂_t u_αβγ)|_{t=T} → Z in H⁻¹(Ω) as α → 0, β → ∞, γ → ∞.    (3.2.10)

Let {(α_n, β_n, γ_n)} be the sequence in (3.2.7). Estimate (3.2.6) implies that a subsequence of {(α_n, β_n, γ_n)}, denoted by the same, satisfies

f_{α_n β_n γ_n} ⇀ f in L²(Q₁) and ∥f∥_{L²(Q₁)} ≤ ∥f_e∥_{L²(Q₁)}.    (3.2.11)

(u_αβγ, f_αβγ) satisfies the initial value problem in the weak form:

∫₀^T ∫_Ω u_αβγ (v_tt − Δv) dx dt + ∫₀^T ∫_Ω [Ψ(u_αβγ) − χ_{Ω₁} f_αβγ] v dx dt + ∫_Ω (v ∂_t u_αβγ)|_{t=T} dx − ∫_Ω v|_{t=0} z dx − ∫_Ω (u_αβγ ∂_t v)|_{t=T} dx + ∫_Ω (w ∂_t v)|_{t=0} dx = 0   for all v ∈ C²([0, T]; H²(Ω) ∩ H₀¹(Ω)).    (3.2.12)

Passing to the limit in (3.2.12) as α → 0, β → ∞, γ → ∞ and using relations (3.2.10) and (3.2.11), we obtain

∫₀^T ∫_Ω u (v_tt − Δv) dx dt + ∫₀^T ∫_Ω [Ψ(u) − χ_{Ω₁} f] v dx dt + ∫_Ω v|_{t=T} Z(x) dx − ∫_Ω v|_{t=0} z dx − ∫_Ω W(x) (∂_t v)|_{t=T} dx + ∫_Ω (w ∂_t v)|_{t=0} dx = 0   for all v ∈ C²([0, T]; H²(Ω) ∩ H₀¹(Ω)).

The last relation and (3.2.11) imply that (u, f) is a minimum L²(Q₁)-norm solution to the exact local control problem (3.2.4). Hence u = u_e and f = f_e, so that (3.2.8) and (3.2.9) follow from (3.2.7) and (3.2.11).

Remark  If the wave equation is linear, i.e., Ψ = 0, then assumption (3.2.7) is redundant and (3.2.9) is guaranteed to hold. Indeed, (3.2.12) implies the boundedness of {∥u_αβγ∥_{L²(Q)}}, which in turn yields (3.2.7). The uniqueness of a solution for the linear wave equation implies that (3.2.7) holds for an arbitrary sequence {(α_n, β_n, γ_n)}.

Theorem 3.2.2  Assume that

i) for every (α, β, γ) ∈ R⁺ × R⁺ × R⁺ there exists a solution (u_αβγ, f_αβγ) to the optimal control problem (3.2.5);

ii) the limit terminal conditions hold:

u_αβγ|_{t=T} → W in L²(Ω) and (∂_t u_αβγ)|_{t=T} → Z in H⁻¹(Ω) as α → 0, β → ∞, γ → ∞;    (3.2.13)

iii) the optimal solution (u_αβγ, f_αβγ) satisfies the weak limit conditions as α → 0, β → ∞, γ → ∞:

f_αβγ ⇀ f in L²(Q₁),   u_αβγ ⇀ u in L²(Q),    (3.2.14)

and

Ψ(u_αβγ) ⇀ Ψ(u) in L²(0, T; [H²(Ω) ∩ H₀¹(Ω)]*)    (3.2.15)

for some f ∈ L²(Q₁) and u ∈ L²(Q). Then (u, f) is a solution to the exact local controllability problem (3.2.4) with f satisfying the minimum L²(Q₁)-norm property. Furthermore, if (3.2.4) admits a unique solution (u_e, f_e), then

f_αβγ ⇀ f_e in L²(Q₁) and u_αβγ ⇀ u_e in L²(Q) as α → 0, β → ∞, γ → ∞.    (3.2.16)

Proof. (u_αβγ, f_αβγ) satisfies (3.2.12). Passing to the limit in that equation as α → 0, β → ∞, γ → ∞ and using relations (3.2.13), (3.2.14) and (3.2.15), we obtain

∫₀^T ∫_Ω u (v_tt − Δv) dx dt + ∫₀^T ∫_Ω [Ψ(u) − χ_{Ω₁} f] v dx dt + ∫_Ω v|_{t=T} Z(x) dx − ∫_Ω v|_{t=0} z dx − ∫_Ω W(x) (∂_t v)|_{t=T} dx + ∫_Ω (w ∂_t v)|_{t=0} dx = 0   for all v ∈ C²([0, T]; H²(Ω) ∩ H₀¹(Ω)).

This implies that (u, f) is a solution to the exact local controllability problem (3.2.4). To prove that f satisfies the minimum L²(Q₁)-norm property, we proceed as follows. Let (u_e, f_e) denote an exact minimum L²(Q₁)-norm solution to the controllability problem (3.2.4). Since (u_αβγ, f_αβγ) is an optimal solution, we have that

(α/2)∥u_αβγ − U∥²_{L²(Q)} + (β/2)∥u_αβγ(T, ·) − W∥²_{L²(Ω)} + (γ/2)∥∂_t u_αβγ(T, ·) − Z∥²_{H⁻¹(Ω)} + (1/2)∥f_αβγ∥²_{L²(Q₁)}
  = J(u_αβγ, f_αβγ) ≤ J(u_e, f_e) = (1/2)∥f_e∥²_{L²(Q₁)},

so that

∥f_αβγ∥²_{L²(Q₁)} ≤ ∥f_e∥²_{L²(Q₁)}.

Passing to the limit in the last estimate we obtain

∥f∥²_{L²(Q₁)} ≤ ∥f_e∥²_{L²(Q₁)}.    (3.2.17)

Hence we conclude that (u, f) is a minimum L²(Q₁)-norm solution to the exact local controllability problem (3.2.4). Furthermore, if the exact controllability problem (3.2.4) admits a unique minimum L²(Q₁)-norm solution (u_e, f_e), then (u, f) = (u_e, f_e) and (3.2.16) follows from assumption (3.2.14).

Remark  If the wave equation is linear, i.e., Ψ = 0, then assumptions i) and (3.2.15) are redundant.

Remark  Assumptions ii) and iii) hold if f_αβγ and u_αβγ converge pointwise as α → 0, β → ∞, γ → ∞.

Remark  A practical implication of Theorem 3.2.2 is that one can investigate the exact controllability of semilinear wave equations computationally by examining the behavior of a sequence of optimal solutions (recall that exact controllability was proved only for some special classes of semilinear wave equations.) If we have found a sequence of optimal control solutions {(u_{α_n β_n γ_n}, f_{α_n β_n γ_n})} with α_n → 0, β_n → ∞, γ_n → ∞, and this sequence appears to satisfy the convergence assumptions ii) and iii), then we can confidently conclude that the underlying semilinear wave equation is exactly controllable and that the optimal solution (u_{α_n β_n γ_n}, f_{α_n β_n γ_n}) for large n provides a good approximation to the minimum L²(Q₁)-norm exact controller (u_e, f_e).

3.3 Computational results

We consider examples of the following type: seek the pair (u, f) that minimizes

J(u, f) = (α/2) ∫₀^T ∫_Ω |u − U|² dx dt + (β/2) ∫_Ω |u(T, x) − W(x)|² dx + (γ/2) ∫_Ω |u_t(T, x) − Z(x)|² dx + (1/2) ∫₀^T ∫_{Ω₁} |f|² dx dt    (3.3.18)

subject to the wave equation

u_tt − u_xx + Ψ(u) = χ_{Ω₁} f   in (0, T) × Ω,
u|_∂Ω = 0,    (3.3.19)
u(0, x) = g(x),   u_t(0, x) = h(x),

where Ω₁ ⊂ Ω.

3.3.1 Exact controllability problems with linear cases

Example 1 (full domain control)  Ψ(u) = 0, T = 1, Ω₁ = Ω = [0, 1]. The given target functions are

W(x) = 0,   Z(x) = 2π sin(2πx),   U(t, x) = sin(2πx) sin(2πt).    (3.3.20)

[Figure 3.1  Optimal solution u and target W: optimal solution u(t, x); target function W(x); α = 0 with increasing values of β = γ.]
[Figure 3.2  Optimal solution u_t and target Z: optimal solution u_t(T, x); target function Z(x); α = 0 with increasing values of β = γ.]

[Figure 3.3  Optimal solution u and target W: optimal solution u(t, x); target function W(x).]
[Figure 3.4  Optimal solution u_t and target Z: optimal solution u_t(T, x); target function Z(x).]

[Figure 3.5  Optimal solution u(t, x).]
[Figure 3.6  Optimal solution u and target W: optimal solution u(t, x); target function W(x).]

[Figure 3.7  Optimal solution u_t and target Z: optimal solution u_t(T, x); target function Z(x).]
[Figure 3.8  Optimal solution u(t, x).]

Example 2 (linear case and local domain control)  Ψ(u) = 0, T = 1, Ω = [0, 1], Ω₁ = [0, 0.1] ∪ [0.9, 1]. We use the same target functions as in (3.3.20).

[Figure 3.9  Optimal solution u and target W: optimal solution u(t, x); target function W(x).]
[Figure 3.10  Optimal solution u_t and target Z: optimal solution u_t(T, x); target function Z(x).]
[Figure 3.11  Optimal solution u and target W: optimal solution u(t, x); target function W(x).]
[Figure 3.12  Optimal solution u_t and target Z: optimal solution u_t(T, x); target function Z(x).]
[Figure 3.13  Optimal solution u(t, x).]

Example 3 (linear case and local domain control)  Ψ(u) = 0, T = 1, Ω = [0, 1], Ω₁ = [0.45, 0.55]. We use the same target functions as in (3.3.20).

[Figure 3.14  Optimal solution u and target W: optimal solution u(t, x); target function W(x).]
[Figure 3.15  Optimal solution u_t and target Z: optimal solution u_t(T, x); target function Z(x).]
[Figure 3.16  Optimal solution u(t, x).]

Example 4 (linear case and local domain control)  Ψ(u) = 0, T = 2, Ω = [0, 1], Ω₁ = [0.45, 0.55]. We use the same target functions as in (3.3.20).

[Figure 3.17  Optimal solution u and target W: optimal solution u(t, x); target function W(x).]
[Figure 3.18  Optimal solution u_t and target Z: optimal solution u_t(T, x); target function Z(x).]
[Figure 3.19  Optimal solution u(t, x).]

Example 5 (linear case and local domain control)  Ψ(u) = 0, T = 1, Ω = [0, 1], Ω₁ = [0.495, 0.505]. We use the same target functions as in (3.3.20).

[Figure 3.20  Optimal solution u and target W: optimal solution u(t, x); target function W(x).]


In particular, if the initial condition is positive, then the entropy solution should satisfy the following positivity-preserving property 1 3 4 5 6 7 8 9 1 11 1 13 14 15 16 17 18 19 1 3 4 IMPLICIT POSITIVITY-PRESERVING HIGH ORDER DISCONTINUOUS GALERKIN METHODS FOR CONSERVATION LAWS TONG QIN AND CHI-WANG SHU Abstract. Positivity-preserving

More information

Chapter 6. Nonlinear Equations. 6.1 The Problem of Nonlinear Root-finding. 6.2 Rate of Convergence

Chapter 6. Nonlinear Equations. 6.1 The Problem of Nonlinear Root-finding. 6.2 Rate of Convergence Chapter 6 Nonlinear Equations 6. The Problem of Nonlinear Root-finding In this module we consider the problem of using numerical techniques to find the roots of nonlinear equations, f () =. Initially we

More information

An Inverse Source Problem for a One-dimensional Wave Equation: An Observer-Based Approach. Thesis by Sharefa Mohammad Asiri

An Inverse Source Problem for a One-dimensional Wave Equation: An Observer-Based Approach. Thesis by Sharefa Mohammad Asiri An Inverse Source Problem for a One-dimensional Wave Equation: An Observer-Based Approach Thesis by Sharefa Mohammad Asiri In Partial Fulfillment of the Requirements For the Degree of Masters of Science

More information

BIHARMONIC WAVE MAPS INTO SPHERES

BIHARMONIC WAVE MAPS INTO SPHERES BIHARMONIC WAVE MAPS INTO SPHERES SEBASTIAN HERR, TOBIAS LAMM, AND ROLAND SCHNAUBELT Abstract. A global weak solution of the biharmonic wave map equation in the energy space for spherical targets is constructed.

More information

FLOW AROUND A SYMMETRIC OBSTACLE

FLOW AROUND A SYMMETRIC OBSTACLE FLOW AROUND A SYMMETRIC OBSTACLE JOEL A. TROPP Abstract. In this article, we apply Schauder s fied point theorem to demonstrate the eistence of a solution to a certain integral equation. This solution

More information

An optimal control problem for a parabolic PDE with control constraints

An optimal control problem for a parabolic PDE with control constraints An optimal control problem for a parabolic PDE with control constraints PhD Summer School on Reduced Basis Methods, Ulm Martin Gubisch University of Konstanz October 7 Martin Gubisch (University of Konstanz)

More information

1.2 Derivation. d p f = d p f(x(p)) = x fd p x (= f x x p ). (1) Second, g x x p + g p = 0. d p f = f x g 1. The expression f x gx

1.2 Derivation. d p f = d p f(x(p)) = x fd p x (= f x x p ). (1) Second, g x x p + g p = 0. d p f = f x g 1. The expression f x gx PDE-constrained optimization and the adjoint method Andrew M. Bradley November 16, 21 PDE-constrained optimization and the adjoint method for solving these and related problems appear in a wide range of

More information

Finite Differences for Differential Equations 28 PART II. Finite Difference Methods for Differential Equations

Finite Differences for Differential Equations 28 PART II. Finite Difference Methods for Differential Equations Finite Differences for Differential Equations 28 PART II Finite Difference Methods for Differential Equations Finite Differences for Differential Equations 29 BOUNDARY VALUE PROBLEMS (I) Solving a TWO

More information

Finite Difference Methods for Boundary Value Problems

Finite Difference Methods for Boundary Value Problems Finite Difference Methods for Boundary Value Problems October 2, 2013 () Finite Differences October 2, 2013 1 / 52 Goals Learn steps to approximate BVPs using the Finite Difference Method Start with two-point

More information

Chapter 9. Derivatives. Josef Leydold Mathematical Methods WS 2018/19 9 Derivatives 1 / 51. f x. (x 0, f (x 0 ))

Chapter 9. Derivatives. Josef Leydold Mathematical Methods WS 2018/19 9 Derivatives 1 / 51. f x. (x 0, f (x 0 )) Chapter 9 Derivatives Josef Leydold Mathematical Methods WS 208/9 9 Derivatives / 5 Difference Quotient Let f : R R be some function. The the ratio f = f ( 0 + ) f ( 0 ) = f ( 0) 0 is called difference

More information

PROBLEMS In each of Problems 1 through 12:

PROBLEMS In each of Problems 1 through 12: 6.5 Impulse Functions 33 which is the formal solution of the given problem. It is also possible to write y in the form 0, t < 5, y = 5 e (t 5/ sin 5 (t 5, t 5. ( The graph of Eq. ( is shown in Figure 6.5.3.

More information

Grid Generation and Applications

Grid Generation and Applications Department of Mathematics Vianey Villamizar Math 511 Grid Generation and Applications Villamizar Vianey Colloquium 2004 BYU Department of Mathematics Transformation of a Simply Connected Domain η Prototypical

More information

Code: 101MAT4 101MT4B. Today s topics Finite-difference method in 2D. Poisson equation Wave equation

Code: 101MAT4 101MT4B. Today s topics Finite-difference method in 2D. Poisson equation Wave equation Code: MAT MTB Today s topics Finite-difference method in D Poisson equation Wave equation Finite-difference method for elliptic PDEs in D Recall that u is the short version of u x + u y Dirichlet BVP:

More information

Research Article The Spectral Method for Solving Sine-Gordon Equation Using a New Orthogonal Polynomial

Research Article The Spectral Method for Solving Sine-Gordon Equation Using a New Orthogonal Polynomial International Scholarly Research Network ISRN Applied Mathematics Volume, Article ID 4673, pages doi:.54//4673 Research Article The Spectral Method for Solving Sine-Gordon Equation Using a New Orthogonal

More information

Power EP. Thomas Minka Microsoft Research Ltd., Cambridge, UK MSR-TR , October 4, Abstract

Power EP. Thomas Minka Microsoft Research Ltd., Cambridge, UK MSR-TR , October 4, Abstract Power EP Thomas Minka Microsoft Research Ltd., Cambridge, UK MSR-TR-2004-149, October 4, 2004 Abstract This note describes power EP, an etension of Epectation Propagation (EP) that makes the computations

More information

Final Exam May 4, 2016

Final Exam May 4, 2016 1 Math 425 / AMCS 525 Dr. DeTurck Final Exam May 4, 2016 You may use your book and notes on this exam. Show your work in the exam book. Work only the problems that correspond to the section that you prepared.

More information

Practice Midterm Solutions

Practice Midterm Solutions Practice Midterm Solutions Math 4B: Ordinary Differential Equations Winter 20 University of California, Santa Barbara TA: Victoria Kala DO NOT LOOK AT THESE SOLUTIONS UNTIL YOU HAVE ATTEMPTED EVERY PROBLEM

More information

M445: Heat equation with sources

M445: Heat equation with sources M5: Heat equation with sources David Gurarie I. On Fourier and Newton s cooling laws The Newton s law claims the temperature rate to be proportional to the di erence: d dt T = (T T ) () The Fourier law

More information

Quiescent Steady State (DC) Analysis The Newton-Raphson Method

Quiescent Steady State (DC) Analysis The Newton-Raphson Method Quiescent Steady State (DC) Analysis The Newton-Raphson Method J. Roychowdhury, University of California at Berkeley Slide 1 Solving the System's DAEs DAEs: many types of solutions useful DC steady state:

More information

Simple Examples on Rectangular Domains

Simple Examples on Rectangular Domains 84 Chapter 5 Simple Examples on Rectangular Domains In this chapter we consider simple elliptic boundary value problems in rectangular domains in R 2 or R 3 ; our prototype example is the Poisson equation

More information

A Discontinuous Galerkin Moving Mesh Method for Hamilton-Jacobi Equations

A Discontinuous Galerkin Moving Mesh Method for Hamilton-Jacobi Equations Int. Conference on Boundary and Interior Layers BAIL 6 G. Lube, G. Rapin Eds c University of Göttingen, Germany, 6 A Discontinuous Galerkin Moving Mesh Method for Hamilton-Jacobi Equations. Introduction

More information

Final: Solutions Math 118A, Fall 2013

Final: Solutions Math 118A, Fall 2013 Final: Solutions Math 118A, Fall 2013 1. [20 pts] For each of the following PDEs for u(x, y), give their order and say if they are nonlinear or linear. If they are linear, say if they are homogeneous or

More information

Rates of Convergence to Self-Similar Solutions of Burgers Equation

Rates of Convergence to Self-Similar Solutions of Burgers Equation Rates of Convergence to Self-Similar Solutions of Burgers Equation by Joel Miller Andrew Bernoff, Advisor Advisor: Committee Member: May 2 Department of Mathematics Abstract Rates of Convergence to Self-Similar

More information

H. L. Atkins* NASA Langley Research Center. Hampton, VA either limiters or added dissipation when applied to

H. L. Atkins* NASA Langley Research Center. Hampton, VA either limiters or added dissipation when applied to Local Analysis of Shock Capturing Using Discontinuous Galerkin Methodology H. L. Atkins* NASA Langley Research Center Hampton, A 68- Abstract The compact form of the discontinuous Galerkin method allows

More information

Calculus of Variation An Introduction To Isoperimetric Problems

Calculus of Variation An Introduction To Isoperimetric Problems Calculus of Variation An Introduction To Isoperimetric Problems Kevin Wang The University of Sydney SSP Working Seminars, MATH2916 May 4, 2013 Contents I Lagrange Multipliers 2 1 Single Constraint Lagrange

More information

Math 7824 Spring 2010 Numerical solution of partial differential equations Classroom notes and homework

Math 7824 Spring 2010 Numerical solution of partial differential equations Classroom notes and homework Math 7824 Spring 2010 Numerical solution of partial differential equations Classroom notes and homework Jan Mandel University of Colorado Denver May 12, 2010 1/20/09: Sec. 1.1, 1.2. Hw 1 due 1/27: problems

More information

Lecture Notes in Mathematics. Arkansas Tech University Department of Mathematics

Lecture Notes in Mathematics. Arkansas Tech University Department of Mathematics Lecture Notes in Mathematics Arkansas Tech University Department of Mathematics Introductory Notes in Ordinary Differential Equations for Physical Sciences and Engineering Marcel B. Finan c All Rights

More information

Chapter 2 Distributed Parameter Systems: Controllability, Observability, and Identification

Chapter 2 Distributed Parameter Systems: Controllability, Observability, and Identification Chapter 2 Distributed Parameter Systems: Controllability, Observability, and Identification 2.1 Mathematical Description We introduce the class of systems to be considered in the framework of this monograph

More information

Computation fundamentals of discrete GMRF representations of continuous domain spatial models

Computation fundamentals of discrete GMRF representations of continuous domain spatial models Computation fundamentals of discrete GMRF representations of continuous domain spatial models Finn Lindgren September 23 2015 v0.2.2 Abstract The fundamental formulas and algorithms for Bayesian spatial

More information

Vibrating Strings and Heat Flow

Vibrating Strings and Heat Flow Vibrating Strings and Heat Flow Consider an infinite vibrating string Assume that the -ais is the equilibrium position of the string and that the tension in the string at rest in equilibrium is τ Let u(,

More information

A Primer on Multidimensional Optimization

A Primer on Multidimensional Optimization A Primer on Multidimensional Optimization Prof. Dr. Florian Rupp German University of Technology in Oman (GUtech) Introduction to Numerical Methods for ENG & CS (Mathematics IV) Spring Term 2016 Eercise

More information

PDE Constrained Optimization selected Proofs

PDE Constrained Optimization selected Proofs PDE Constrained Optimization selected Proofs Jeff Snider jeff@snider.com George Mason University Department of Mathematical Sciences April, 214 Outline 1 Prelim 2 Thms 3.9 3.11 3 Thm 3.12 4 Thm 3.13 5

More information

Proper Orthogonal Decomposition for Optimal Control Problems with Mixed Control-State Constraints

Proper Orthogonal Decomposition for Optimal Control Problems with Mixed Control-State Constraints Proper Orthogonal Decomposition for Optimal Control Problems with Mixed Control-State Constraints Technische Universität Berlin Martin Gubisch, Stefan Volkwein University of Konstanz March, 3 Martin Gubisch,

More information

Numerical computation of an optimal control problem with homogenization in one-dimensional case

Numerical computation of an optimal control problem with homogenization in one-dimensional case Retrospective Theses and Dissertations Iowa State University Capstones, Theses and Dissertations 28 Numerical computation of an optimal control problem with homogenization in one-dimensional case Zhen

More information

Eigenvalues of Trusses and Beams Using the Accurate Element Method

Eigenvalues of Trusses and Beams Using the Accurate Element Method Eigenvalues of russes and Beams Using the Accurate Element Method Maty Blumenfeld Department of Strength of Materials Universitatea Politehnica Bucharest, Romania Paul Cizmas Department of Aerospace Engineering

More information

Research Article On the Numerical Solution of Differential-Algebraic Equations with Hessenberg Index-3

Research Article On the Numerical Solution of Differential-Algebraic Equations with Hessenberg Index-3 Discrete Dynamics in Nature and Society Volume, Article ID 474, pages doi:.55//474 Research Article On the Numerical Solution of Differential-Algebraic Equations with Hessenberg Inde- Melike Karta and

More information

Effects of a Saturating Dissipation in Burgers-Type Equations

Effects of a Saturating Dissipation in Burgers-Type Equations Effects of a Saturating Dissipation in Burgers-Type Equations A. KURGANOV AND P. ROSENAU Tel-Aviv University Abstract We propose and study a new variant of the Burgers equation with dissipation flues that

More information

Chapter 3 Numerical Methods

Chapter 3 Numerical Methods Chapter 3 Numerical Methods Part 3 3.4 Differential Algebraic Systems 3.5 Integration of Differential Equations 1 Outline 3.4 Differential Algebraic Systems 3.4.1 Constrained Dynamics 3.4.2 First and Second

More information

Introduction to Partial Differential Equations

Introduction to Partial Differential Equations Introduction to Partial Differential Equations Philippe B. Laval KSU Current Semester Philippe B. Laval (KSU) Key Concepts Current Semester 1 / 25 Introduction The purpose of this section is to define

More information

Lecture 23: Conditional Gradient Method

Lecture 23: Conditional Gradient Method 10-725/36-725: Conve Optimization Spring 2015 Lecture 23: Conditional Gradient Method Lecturer: Ryan Tibshirani Scribes: Shichao Yang,Diyi Yang,Zhanpeng Fang Note: LaTeX template courtesy of UC Berkeley

More information

SYLLABUS FOR ENTRANCE EXAMINATION NANYANG TECHNOLOGICAL UNIVERSITY FOR INTERNATIONAL STUDENTS A-LEVEL MATHEMATICS

SYLLABUS FOR ENTRANCE EXAMINATION NANYANG TECHNOLOGICAL UNIVERSITY FOR INTERNATIONAL STUDENTS A-LEVEL MATHEMATICS SYLLABUS FOR ENTRANCE EXAMINATION NANYANG TECHNOLOGICAL UNIVERSITY FOR INTERNATIONAL STUDENTS A-LEVEL MATHEMATICS STRUCTURE OF EXAMINATION PAPER. There will be one -hour paper consisting of 4 questions..

More information

Wave breaking in the short-pulse equation

Wave breaking in the short-pulse equation Dynamics of PDE Vol.6 No.4 29-3 29 Wave breaking in the short-pulse equation Yue Liu Dmitry Pelinovsky and Anton Sakovich Communicated by Y. Charles Li received May 27 29. Abstract. Sufficient conditions

More information

Lecture Introduction

Lecture Introduction Lecture 1 1.1 Introduction The theory of Partial Differential Equations (PDEs) is central to mathematics, both pure and applied. The main difference between the theory of PDEs and the theory of Ordinary

More information

Mathematical Model of Forced Van Der Pol s Equation

Mathematical Model of Forced Van Der Pol s Equation Mathematical Model of Forced Van Der Pol s Equation TO Tsz Lok Wallace LEE Tsz Hong Homer December 9, Abstract This work is going to analyze the Forced Van Der Pol s Equation which is used to analyze the

More information

Parallelizing large scale time domain electromagnetic inverse problem

Parallelizing large scale time domain electromagnetic inverse problem Parallelizing large scale time domain electromagnetic inverse problems Eldad Haber with: D. Oldenburg & R. Shekhtman + Emory University, Atlanta, GA + The University of British Columbia, Vancouver, BC,

More information

Lecture 9 Approximations of Laplace s Equation, Finite Element Method. Mathématiques appliquées (MATH0504-1) B. Dewals, C.

Lecture 9 Approximations of Laplace s Equation, Finite Element Method. Mathématiques appliquées (MATH0504-1) B. Dewals, C. Lecture 9 Approximations of Laplace s Equation, Finite Element Method Mathématiques appliquées (MATH54-1) B. Dewals, C. Geuzaine V1.2 23/11/218 1 Learning objectives of this lecture Apply the finite difference

More information

Part IV: Numerical schemes for the phase-filed model

Part IV: Numerical schemes for the phase-filed model Part IV: Numerical schemes for the phase-filed model Jie Shen Department of Mathematics Purdue University IMS, Singapore July 29-3, 29 The complete set of governing equations Find u, p, (φ, ξ) such that

More information

LEGENDRE POLYNOMIALS AND APPLICATIONS. We construct Legendre polynomials and apply them to solve Dirichlet problems in spherical coordinates.

LEGENDRE POLYNOMIALS AND APPLICATIONS. We construct Legendre polynomials and apply them to solve Dirichlet problems in spherical coordinates. LEGENDRE POLYNOMIALS AND APPLICATIONS We construct Legendre polynomials and apply them to solve Dirichlet problems in spherical coordinates.. Legendre equation: series solutions The Legendre equation is

More information

Geometric Modeling Summer Semester 2010 Mathematical Tools (1)

Geometric Modeling Summer Semester 2010 Mathematical Tools (1) Geometric Modeling Summer Semester 2010 Mathematical Tools (1) Recap: Linear Algebra Today... Topics: Mathematical Background Linear algebra Analysis & differential geometry Numerical techniques Geometric

More information

2tdt 1 y = t2 + C y = which implies C = 1 and the solution is y = 1

2tdt 1 y = t2 + C y = which implies C = 1 and the solution is y = 1 Lectures - Week 11 General First Order ODEs & Numerical Methods for IVPs In general, nonlinear problems are much more difficult to solve than linear ones. Unfortunately many phenomena exhibit nonlinear

More information

HIGH-ORDER ACCURATE METHODS BASED ON DIFFERENCE POTENTIALS FOR 2D PARABOLIC INTERFACE MODELS

HIGH-ORDER ACCURATE METHODS BASED ON DIFFERENCE POTENTIALS FOR 2D PARABOLIC INTERFACE MODELS HIGH-ORDER ACCURATE METHODS BASED ON DIFFERENCE POTENTIALS FOR 2D PARABOLIC INTERFACE MODELS JASON ALBRIGHT, YEKATERINA EPSHTEYN, AND QING XIA Abstract. Highly-accurate numerical methods that can efficiently

More information

A posteriori error estimates for the adaptivity technique for the Tikhonov functional and global convergence for a coefficient inverse problem

A posteriori error estimates for the adaptivity technique for the Tikhonov functional and global convergence for a coefficient inverse problem A posteriori error estimates for the adaptivity technique for the Tikhonov functional and global convergence for a coefficient inverse problem Larisa Beilina Michael V. Klibanov December 18, 29 Abstract

More information

Research Article Quasilinearization Technique for Φ-Laplacian Type Equations

Research Article Quasilinearization Technique for Φ-Laplacian Type Equations International Mathematics and Mathematical Sciences Volume 0, Article ID 975760, pages doi:0.55/0/975760 Research Article Quasilinearization Technique for Φ-Laplacian Type Equations Inara Yermachenko and

More information

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation Course otes for EE7C (Spring 018): Conve Optimization and Approimation Instructor: Moritz Hardt Email: hardt+ee7c@berkeley.edu Graduate Instructor: Ma Simchowitz Email: msimchow+ee7c@berkeley.edu October

More information

HIGH-ORDER ACCURATE METHODS BASED ON DIFFERENCE POTENTIALS FOR 2D PARABOLIC INTERFACE MODELS

HIGH-ORDER ACCURATE METHODS BASED ON DIFFERENCE POTENTIALS FOR 2D PARABOLIC INTERFACE MODELS HIGH-ORDER ACCURATE METHODS BASED ON DIFFERENCE POTENTIALS FOR 2D PARABOLIC INTERFACE MODELS JASON ALBRIGHT, YEKATERINA EPSHTEYN, AND QING XIA Abstract. Highly-accurate numerical methods that can efficiently

More information

The Scalar Conservation Law

The Scalar Conservation Law The Scalar Conservation Law t + f() = 0 = conserved qantity, f() =fl d dt Z b a (t, ) d = Z b a t (t, ) d = Z b a f (t, ) d = f (t, a) f (t, b) = [inflow at a] [otflow at b] f((a)) f((b)) a b Alberto Bressan

More information

Termination criteria for inexact fixed point methods

Termination criteria for inexact fixed point methods Termination criteria for inexact fixed point methods Philipp Birken 1 October 1, 2013 1 Institute of Mathematics, University of Kassel, Heinrich-Plett-Str. 40, D-34132 Kassel, Germany Department of Mathematics/Computer

More information

3. Fundamentals of Lyapunov Theory

3. Fundamentals of Lyapunov Theory Applied Nonlinear Control Nguyen an ien -.. Fundamentals of Lyapunov heory he objective of this chapter is to present Lyapunov stability theorem and illustrate its use in the analysis and the design of

More information

Algebraic flux correction and its application to convection-dominated flow. Matthias Möller

Algebraic flux correction and its application to convection-dominated flow. Matthias Möller Algebraic flux correction and its application to convection-dominated flow Matthias Möller matthias.moeller@math.uni-dortmund.de Institute of Applied Mathematics (LS III) University of Dortmund, Germany

More information

The Milne error estimator for stiff problems

The Milne error estimator for stiff problems 13 R. Tshelametse / SAJPAM. Volume 4 (2009) 13-28 The Milne error estimator for stiff problems Ronald Tshelametse Department of Mathematics University of Botswana Private Bag 0022 Gaborone, Botswana. E-mail

More information

Introduction to Alternating Direction Method of Multipliers

Introduction to Alternating Direction Method of Multipliers Introduction to Alternating Direction Method of Multipliers Yale Chang Machine Learning Group Meeting September 29, 2016 Yale Chang (Machine Learning Group Meeting) Introduction to Alternating Direction

More information

Classical Numerical Methods to Solve Optimal Control Problems

Classical Numerical Methods to Solve Optimal Control Problems Lecture 26 Classical Numerical Methods to Solve Optimal Control Problems Dr. Radhakant Padhi Asst. Professor Dept. of Aerospace Engineering Indian Institute of Science - Bangalore Necessary Conditions

More information

Chapter 12: Iterative Methods

Chapter 12: Iterative Methods ES 40: Scientific and Engineering Computation. Uchechukwu Ofoegbu Temple University Chapter : Iterative Methods ES 40: Scientific and Engineering Computation. Gauss-Seidel Method The Gauss-Seidel method

More information

EXACT EQUATIONS AND INTEGRATING FACTORS

EXACT EQUATIONS AND INTEGRATING FACTORS MAP- EXACT EQUATIONS AND INTEGRATING FACTORS First-order Differential Equations for Which We Can Find Eact Solutions Stu the patterns carefully. The first step of any solution is correct identification

More information

Numerical Methods. Root Finding

Numerical Methods. Root Finding Numerical Methods Solving Non Linear 1-Dimensional Equations Root Finding Given a real valued function f of one variable (say ), the idea is to find an such that: f() 0 1 Root Finding Eamples Find real

More information

DYNAMICS OF SERIAL ROBOTIC MANIPULATORS

DYNAMICS OF SERIAL ROBOTIC MANIPULATORS DYNAMICS OF SERIAL ROBOTIC MANIPULATORS NOMENCLATURE AND BASIC DEFINITION We consider here a mechanical system composed of r rigid bodies and denote: M i 6x6 inertia dyads of the ith body. Wi 6 x 6 angular-velocity

More information

Contents: 1. Minimization. 2. The theorem of Lions-Stampacchia for variational inequalities. 3. Γ -Convergence. 4. Duality mapping.

Contents: 1. Minimization. 2. The theorem of Lions-Stampacchia for variational inequalities. 3. Γ -Convergence. 4. Duality mapping. Minimization Contents: 1. Minimization. 2. The theorem of Lions-Stampacchia for variational inequalities. 3. Γ -Convergence. 4. Duality mapping. 1 Minimization A Topological Result. Let S be a topological

More information

Numerical Approximation of Phase Field Models

Numerical Approximation of Phase Field Models Numerical Approximation of Phase Field Models Lecture 2: Allen Cahn and Cahn Hilliard Equations with Smooth Potentials Robert Nürnberg Department of Mathematics Imperial College London TUM Summer School

More information

Multipoint secant and interpolation methods with nonmonotone line search for solving systems of nonlinear equations

Multipoint secant and interpolation methods with nonmonotone line search for solving systems of nonlinear equations Multipoint secant and interpolation methods with nonmonotone line search for solving systems of nonlinear equations Oleg Burdakov a,, Ahmad Kamandi b a Department of Mathematics, Linköping University,

More information

IMPACTS OF SIGMA COORDINATES ON THE EULER AND NAVIER-STOKES EQUATIONS USING CONTINUOUS/DISCONTINUOUS GALERKIN METHODS

IMPACTS OF SIGMA COORDINATES ON THE EULER AND NAVIER-STOKES EQUATIONS USING CONTINUOUS/DISCONTINUOUS GALERKIN METHODS Approved for public release; distribution is unlimited IMPACTS OF SIGMA COORDINATES ON THE EULER AND NAVIER-STOKES EQUATIONS USING CONTINUOUS/DISCONTINUOUS GALERKIN METHODS Sean L. Gibbons Captain, United

More information