Multigrid Method for Elliptic Control Problems


JOHANNES KEPLER UNIVERSITÄT LINZ
Netzwerk für Forschung, Lehre und Praxis

Multigrid Method for Elliptic Control Problems

MASTERARBEIT
(Master's thesis submitted for the academic degree of Master of Science in the programme Industrial Mathematics)

Carried out at the Institut für Numerische Mathematik
Supervisor: O. Univ. Prof. Dipl.-Ing. Dr. Helmut Gfrerer
Submitted by: Muzhinji Kizito
Linz, August 2008

Johannes Kepler Universität, A-4040 Linz, Altenbergerstraße 69
Internet: http://www.jku.at, DVR 0093696

This dissertation is dedicated to my wife Wachenuka, our two girls Nyashadzashe and Tapiwanashe, my parents Mr and Mrs Muzhinji, my brothers and sisters, and Dr and Mrs Unganai, for their support and encouragement. A special dedication goes to my daughter Tapiwanashe: she was born at the beginning of my two-year study, and I will see her for the first time when she is already two years old.

Contents

Acknowledgements
Abstract

1 Introduction

2 ELLIPTIC CONTROL PROBLEMS
  2.1 Distributed Control Problems
      2.1.1 Model Problem
      2.1.2 First Order Optimality Condition
      2.1.3 Lagrange Principle
  2.2 Boundary Control Problem
      2.2.1 Neumann Boundary Control Problem
      2.2.2 Dirichlet Boundary Control Problem
      2.2.3 Existence and Uniqueness of the Functional Minimum

3 DISCRETIZATION
  3.1 Variational Formulation
  3.2 Finite Element Method
      3.2.1 Formulation of the Finite Element Method
      3.2.2 Finite Elements
      3.2.3 Assembling of Matrices
      3.2.4 Approximation Properties
  3.3 Discrete Optimality System
      3.3.1 Integral Equation Characterizing the Optimal Control

4 Solution of the Discretized Optimal Control System
  4.1 Multigrid Algorithm
  4.2 Convergence of the Multigrid Method

5 Numerical Results
  5.1 Test Example 1
  5.2 Example 2
  5.3 Example 3
  5.4 Conclusion

Acknowledgments

This work presented so many new challenges during the inception and implementation phases. Without the support, patience and guidance of the following people, it would have taken years to be completed. I owe my profound gratitude to the following:

1. Professor Helmut Gfrerer, who undertook to act as my supervisor despite his various commitments. Without his support, patience and guidance this work could hardly have been a success.

2. My wife, Wachenuka, and our two daughters, Nyashadzashe and Tapiwanashe, for allowing me to be away from them for two years of study. My parents, Mr and Mrs Muzhinji, who always supported, encouraged and prayed for my studies. Above all, a big thanks to Dr and Mrs Unganai, without whom I could not have made it to my studies.

3. My lecturers both at TU Kaiserslautern (Germany) and Johannes Kepler University Linz (Austria), for providing all the necessary knowledge during the course of my studies.

4. All my friends who were always with me and inspired me in this programme.

Finally, I would like to acknowledge the European Union for financing my studies through the Erasmus Mundus Scholarship.

Muzhinji Kizito
Linz, August 2008

Abstract

In this work we study an elliptic control problem: an optimization problem consisting of a cost functional to be minimized subject to constraints governed by an elliptic differential equation with a Neumann boundary condition. We transform the optimization problem into an optimality system, which is characterized in the form of an integral equation to which the multigrid method is applied to find the optimal control. The major goal is to find the optimal control. We achieve this by computing the distributed control problem using the finite element method with piecewise linear functions; the domain is partitioned by a regular triangulation. The existence and uniqueness of the solutions of the optimal control problem and of the discrete optimal problem are studied and error estimates are obtained. Different examples are considered. The convergence of the multigrid method is analysed, and the numerical results agree with the theoretical claims.

List of Figures

2.1 Model Domain Ω
3.1 Initial Mesh and Refinements of Ω
3.2 Triangulation of Ω
5.1 Example of Refinement Levels
5.2 Example 1: Target and Exact Solution
5.3 Behaviour of the L²-error
5.4 Snapshot of the optimal control at level 4
5.5 Desired/Target State at level 4
5.6 Example 3: Initial Control and Approximate Control

List of Tables

5.1 Refinement Levels and Number of Nodes
5.2 Convergence Results, Error at Level 4
5.3 Convergence Results, Error at Level 5
5.4 Convergence Results at Different Levels
5.5 Convergence Results for Different Weighting Parameters δ
5.6 Example 2: Convergence Results at Level 4
5.7 Example 2: Convergence Results at Level 5
5.8 Example 3: Control Values at Different Levels
5.9 Example 3: Convergence Results at Level 4
5.10 Example 3: Convergence Results at Level 5

List of Notation and Symbols

Symbol        Description
Ω             Solution domain, Ω ⊂ R²
Ω_ℓ           Discrete domain at level ℓ, Ω_ℓ ⊂ Ω
Γ = ∂Ω        Boundary of Ω
H^m(Ω)        { v ∈ L²(Ω) : ∂^α v ∈ L²(Ω) for all α with |α| ≤ m }
y, y_ℓ        Continuous and discrete state solutions
p             Adjoint variable
u, u_ℓ        Continuous and discrete control solutions
y_d           Desired/target state
A, A*         Elliptic differential operator and its adjoint
A_ℓ, N_ℓ      Stiffness and mass matrices at grid level ℓ
K, K_ℓ        Continuous and discrete integral operators
r, p          The restriction and the prolongation operators
MGM           Multigrid Method

Chapter 1

Introduction

An optimal control problem consists of a governing system, a description of the control mechanism, and a criterion defining the cost functional, which models the purpose of the control and describes the cost of its action. In our case the system is governed by elliptic partial differential equations. The formulation of an optimal control problem involves a cost functional to be minimized under the constraint given by the modelling equations. The necessary conditions for such a minimum result in a set of coupled equations called the optimality system. In this and in the next section we describe control- and state-constrained optimal control problems.

The main thrust of this work is to apply the multigrid method to solve the constrained optimization problem governed by partial differential equations. The multigrid method is applied to find the optimal control, which is the minimiser of the cost functional. The multigrid method has been shown to be very efficient and successful in solving elliptic control problems (Hackbusch, Borzi). The first step is to transform the optimization problem into the first order optimality system, then to characterize the first order necessary optimality condition by an integral equation, on which the multigrid method is developed, analyzed and finally implemented numerically. The key features of the multigrid method are smoothing and coarse grid correction, the latter involving the inter-grid transfers and a solution correction step. The main result of the work is the convergence of the multigrid method in calculating the control variable. We make numerical experiments for the elliptic control problem

    min_{(y,u) ∈ Y×U} J(y, u) = (1/2) ‖y − y_d‖²_{L²(Ω)} + (δ/2) ‖u‖²_{L²(Ω)}    (1.1)

subject to

    −Δy + y = f + u   in Ω
    ∂y/∂n = g         on ∂Ω
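The first order conditions for (1.1), derived in Chapter 2, couple the state equation with an adjoint equation −Δp + p = y − y_d, ∂p/∂n = 0, through the relation u = −p/δ. As a hedged preview, here is a minimal 1D finite-difference analogue on (0,1) with homogeneous Neumann data (g = 0); this is only an illustrative sketch, not the 2D finite element discretization the thesis actually uses:

```python
import numpy as np

def neumann_operator(n):
    """Finite-difference matrix for A = -d^2/dx^2 + I on (0,1) with
    homogeneous Neumann conditions, imposed via reflected ghost points."""
    h = 1.0 / (n - 1)
    A = np.diag(np.full(n, 2.0 / h**2 + 1.0))
    A += np.diag(np.full(n - 1, -1.0 / h**2), 1)
    A += np.diag(np.full(n - 1, -1.0 / h**2), -1)
    A[0, 1] = A[-1, -2] = -2.0 / h**2   # ghost-point reflection at both ends
    return A

def solve_optimality_system(f, y_d, delta):
    """Solve the coupled first order system (A is symmetric, so A* = A)
         A y + p/delta = f,   A p - y = -y_d,
    as one block-linear system, and recover the control u = -p/delta."""
    n = f.size
    A = neumann_operator(n)
    I = np.eye(n)
    M = np.block([[A, I / delta], [-I, A]])
    sol = np.linalg.solve(M, np.concatenate([f, -y_d]))
    y, p = sol[:n], sol[n:]
    return y, p, -p / delta
```

For example, with y_d(x) = cos(πx) (which satisfies the Neumann condition) and f = 0, the computed triple (y, p, u) satisfies both discrete equations to within solver accuracy.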

The Lagrangian principle is applied to the elliptic control problem to obtain the optimality system (2.13). By u we denote the control function, belonging to a set of admissible controls U_ad ⊆ U, where U is a real Hilbert space. The state of the system is a function of the control, denoted by y(u) ∈ Y, where Y is a real Hilbert space. The optimal control u, together with the corresponding state variable y and the co-state/adjoint variable p, is the solution of the system

    −Δy + y = f + u     in Ω
    ∂y/∂n = g           on ∂Ω
    −Δp + p = y − y_d   in Ω
    ∂p/∂n = 0           on ∂Ω
    u = −(1/δ) p        in Ω

Here y(u) is the solution of the state elliptic partial differential equation corresponding to the control u, p(u) is the solution of the adjoint elliptic partial differential equation corresponding to the state y(u), and y_d is the desired/target state. The state and adjoint solutions depend on the control function u. The control may enter through the differential equation, as above, or through the boundary condition. The coupled system is decoupled into an integral equation form (Chapters 2 and 3). The decoupling results in a single equation which is no more expensive to solve than the system of elliptic partial differential equations if a fast solver is applied, Hackbusch ([6], [7]). The optimal control is the solution of an equation of the form

    (I − K)u = q    (1.2)

where K is the integral operator (3.34), I is the identity operator, and q is the right-hand side involving y_d, f and g.
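Equation (1.2) is a Fredholm integral equation of the second kind. The thesis solves its discretization with a multigrid method; as a simpler stand-in that shows the structure, the sketch below solves a Nyström (quadrature) discretization by the fixed-point iteration u ← q + Ku, which converges whenever ‖K‖ < 1. The kernel in the usage example is an illustrative assumption, not the operator (3.34):

```python
import numpy as np

def solve_fredholm(K, q, tol=1e-10, max_iter=500):
    """Solve (I - K) u = q by Picard iteration u <- q + K u.
    Converges linearly with rate ||K|| when ||K|| < 1."""
    u = q.copy()
    for _ in range(max_iter):
        u_new = q + K @ u
        if np.linalg.norm(u_new - u) < tol:
            return u_new
        u = u_new
    return u
```

With a smooth Gaussian kernel discretized by the trapezoidal rule, e.g. K[i, j] = 0.3·exp(−(s_i − s_j)²)·w_j, the iteration converges in a few dozen steps; the multigrid method of the thesis accelerates exactly this kind of iteration.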

The idea of solving the integral equation characterisation of the optimality system using the multigrid method was used by Hackbusch ([6] and [7]). The optimality system is discretized using the finite element method. The integral equation replaces the system of elliptic equations, and the multigrid method is applied to it. The solution of the resulting discrete integral equation is presented in Chapter 5. The two-grid method, which is the basis of the multigrid method, is elaborated; the multigrid algorithm is formulated by applying the two-grid method recursively. The convergence of the multigrid method is also analyzed, with the main results.

In Chapter 2 the notions, notation and several examples of control problems are presented. The finite element discretization, which is vital for the assembling of matrices, is reviewed in Chapter 3, together with the characterization of the optimal control by the integral equation, including the discretization of the operator. Chapter 4 contains the description of the multigrid method and its properties. The discussion of the numerical solution is presented in Chapter 5.
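As a hedged illustration of the two ingredients named above, smoothing and coarse grid correction with inter-grid transfers, here is a minimal two-grid cycle for a 1D Poisson model problem (damped Jacobi smoothing, full-weighting restriction, linear-interpolation prolongation). It is a generic sketch, not the thesis' integral-equation setting; replacing the exact coarse solve by a recursive call turns it into a multigrid V-cycle:

```python
import numpy as np

def poisson_matrix(n):
    """1D Dirichlet Poisson matrix on n interior points, h = 1/(n+1)."""
    h = 1.0 / (n + 1)
    return (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

def two_grid(A, b, u, nu=3):
    """One two-grid cycle: nu damped-Jacobi pre-smoothing steps,
    coarse-grid correction, nu post-smoothing steps. Needs n odd."""
    n = b.size
    nc = (n - 1) // 2
    omega, D = 2.0 / 3.0, np.diag(A)
    for _ in range(nu):                          # pre-smoothing
        u = u + omega * (b - A @ u) / D
    r = b - A @ u                                # fine-grid residual
    rc = 0.25 * (r[0:-2:2] + 2.0 * r[1:-1:2] + r[2::2])  # full weighting
    ec = np.linalg.solve(poisson_matrix(nc), rc)  # exact coarse solve
    e = np.zeros(n)                               # linear interpolation
    e[1:-1:2] = ec
    e[0:-2:2] += 0.5 * ec
    e[2::2] += 0.5 * ec
    u = u + e                                     # correction step
    for _ in range(nu):                           # post-smoothing
        u = u + omega * (b - A @ u) / D
    return u
```

Each cycle reduces the error by a factor that is bounded independently of the mesh size, which is the hallmark of two-grid and multigrid convergence.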

Chapter 2

ELLIPTIC CONTROL PROBLEMS

The formulation of optimal control of systems governed by elliptic partial differential equations requires the following ingredients:

- The definition of a control function u that represents the driving influence of the environment on the system.
- The elliptic partial differential equations modelling the controlled system, represented by the state function y(u).
- The cost functional, which models the purpose of the control on the system.

This work is done on an open domain Ω ⊂ R². The domain is Ω = (0,1)² with boundary ∂Ω = Γ, as shown in the figure below.

Figure 2.1: Model Domain Ω ⊂ R² with boundary ∂Ω = Γ.

The control can be defined in the following ways:

Definition 2.1. With reference to the domain in Figure 2.1, define:
1. Distributed control: the control is defined on the whole interior of the domain or on some parts of it.
2. Boundary control: the control is defined on the whole boundary or on parts of it.

Examples of the elliptic differential equations for the distributed and boundary control problems are as follows:

Distributed control: find the state y(u) so that for the given control u ∈ U the following equation is satisfied:

    Ay(u) = f + u   in Ω
    By(u) = g       on ∂Ω    (2.1)

Boundary control: find the state y(u) so that for the given control u ∈ U the following equation is satisfied:

    Ay(u) = f       in Ω
    By(u) = u + g   on ∂Ω    (2.2)

This means that y(u) is the solution of (2.1) or (2.2) for a given control u acting either in the whole domain or on the boundary. A full description of examples of distributed and boundary control is given below, along with some important notation used in the descriptions.

Let A be a differential operator of second order, for example A = (−Δ + I), and let A* be the adjoint of the operator A. Let B and C be boundary operators of first order with smooth coefficients such that Green's formula holds:

    ⟨Ay, p⟩_{L²(Ω)} − ⟨y, A*p⟩_{L²(Ω)} = ⟨y, Cp⟩_{L²(Γ)} − ⟨By, p⟩_{L²(Γ)}    (2.3)

Let U be the linear space of control functions, which are either distributed (defined on Ω) or boundary controls (defined on Γ). If U is defined on Γ we have a boundary control problem, for example the optimal temperature distribution; otherwise we have a distributed control problem, like an optimal heat source distributed over the whole domain. Define the set of admissible controls U_ad ⊆ U. In our case we assume that there are no constraints on the control, so U_ad = U. The spaces of the state, the controls and the adjoint are Sobolev spaces of order k, H^k and H^k_0, the latter being the closure of the smooth functions with compact support in Ω (Chapter 3).

2.1 Distributed Control Problems

This section studies examples of distributed control problems with boundary conditions of either Neumann or Dirichlet type. In this case the control is distributed over the whole domain Ω or over parts of it. The first order necessary conditions will be given explicitly for the distributed control problems.

2.1.1 Model Problem

The goal is to find the control function in the domain such that the objective functional is minimized. For this the optimisation problem is defined as: minimize

    J(y, u) := (1/2) ‖y − y_d‖²_{L²(Ω)} + (δ/2) ‖u‖²_{L²(Ω)}    (2.4)

subject to the constraints

    Ay(u) = f + u   in Ω
    By(u) = g       on ∂Ω    (2.5)

with

f ∈ L²(Ω) and g ∈ H^{−1/2}(Γ) fixed, while u ∈ U_ad = L²(Ω) varies. The goal is to achieve y(u) ≈ y_d for a given function y_d ∈ L²(Ω) with a control that is as small as possible. Equation (2.5) is called the state equation, the variable y(u) is called the state, and y_d is the desired/target state. δ > 0 is the weighting parameter of the cost functional.

Since for a given control u we can find the corresponding state y(u), the control-to-state mapping

    S : U → Y,   Su := y(u)

exists and is continuously differentiable, so that the new cost functional can be defined as

    F(u) = J(y(u), u)    (2.6)
         = J(Su, u)      (2.7)

This means that S is the solution operator. The new cost functional for the optimization problem is now defined as: minimize

    F(u) := (1/2) ‖Su − y_d‖²_{L²(Ω)} + (δ/2) ‖u‖²_{L²(Ω)}    (2.8)

2.1.2 First Order Optimality Condition

Consider the objective functional (2.8). The first order optimality condition is given by the theorem below.

Theorem 2.2. The control u ∈ U_ad is optimal if and only if u ∈ U_ad and u satisfies the variational inequality

    ⟨F′(u), v − u⟩ ≥ 0   for all v ∈ U_ad

Proof: Let u ∈ U_ad be the optimal solution and choose v ∈ U_ad. By the convexity of U_ad we have

    w_t = u + t(v − u) ∈ U_ad   for all t ∈ [0, 1].

Now the optimality of u yields

    F(w_t) − F(u) ≥ 0   for all t ∈ [0, 1].

Then

    ⟨F′(u), v − u⟩_{U*,U} = lim_{t→0} [F(u + t(v − u)) − F(u)] / t ≥ 0.

On the other hand, using the convexity of F, for all v ∈ U_ad we have

    0 ≤ ⟨F′(u), v − u⟩_{U*,U} = lim_{t→0} [F(u + t(v − u)) − F(u)] / t
      ≤ lim_{t→0} [t F(v) + (1 − t) F(u) − F(u)] / t = F(v) − F(u).

Hence u is an optimal control.

The multigrid algorithm is applied to find the optimal control of the optimality system (Chapter 4). To apply this algorithm we need the adjoint/costate variable, which can be found by solving the adjoint elliptic differential equation of the given state partial differential equation. To find the adjoint elliptic partial differential equation, the Lagrangian principle is used.

2.1.3 Lagrange Principle

In this section we apply the Lagrange principle to derive the optimality system for the distributed control problem: minimize

    J(y(u), u) := (1/2) ‖y(u) − y_d‖²_{L²(Ω)} + (δ/2) ‖u‖²_{L²(Ω)}

subject to the constraints

    Ay(u) = f + u   in Ω
    By(u) = g       on ∂Ω    (2.9)

Introducing the Lagrange multiplier p results in the Lagrange function:

    L(y(u), u, p) = J(y(u), u) − ⟨Ay(u) − f − u, p⟩_{L²(Ω)} − ⟨By(u) − g, p⟩_{L²(Γ)}
                  = J(y(u), u) − ⟨Ay(u), p⟩_{L²(Ω)} + ⟨f + u, p⟩_{L²(Ω)} − ⟨By(u), p⟩_{L²(Γ)} + ⟨g, p⟩_{L²(Γ)}

From Green's formula (2.3) we get

    ⟨Ay, p⟩_{L²(Ω)} = ⟨y, A*p⟩_{L²(Ω)} + ⟨y, Cp⟩_{L²(Γ)} − ⟨By, p⟩_{L²(Γ)}    (2.10)

This results in the formulation

    L(y(u), u, p) = J(y(u), u) − ⟨y(u), A*p⟩_{L²(Ω)} − ⟨y(u), Cp⟩_{L²(Γ)} + ⟨By(u), p⟩_{L²(Γ)}
                    + ⟨f + u, p⟩_{L²(Ω)} − ⟨By(u), p⟩_{L²(Γ)} + ⟨g, p⟩_{L²(Γ)}
                  = J(y(u), u) − ⟨y(u), A*p⟩_{L²(Ω)} − ⟨y(u), Cp⟩_{L²(Γ)} + ⟨f + u, p⟩_{L²(Ω)} + ⟨g, p⟩_{L²(Γ)}

This means we are looking for the optimality conditions of the linear optimization problem under the assumptions:

1. For all u ∈ U there exists a unique y = y(u) such that Ay(u) = f + u in Ω, By(u) = g on ∂Ω.
2. U_ad is convex, bounded and closed.

Theorem 2.3. Let (y(u), u) be an optimal solution. Then under the assumptions (1)–(2) there exists a Lagrange multiplier p such that the optimality system holds:

    Ay(u) = f + u                                      (state equation)
    L_y(y(u), u, p) = 0                                (adjoint equation)
    ⟨L_u(y(u), u, p), v − u⟩_{U*,U} ≥ 0  for all v ∈ U_ad

From Theorem 2.2 we can observe that u ∈ U_ad is an optimal control iff

    ⟨F′(u), v − u⟩ ≥ 0   for all v ∈ U_ad

This means that

    ⟨y(u) − y_d, y(v) − y(u)⟩_{L²(Ω)} + δ ⟨u, v − u⟩_{L²(Ω)} ≥ 0   for all v ∈ U_ad

Now let p(u) be chosen such that

    A*p(u) = y(u) − y_d   in Ω
    Cp(u) = 0             on ∂Ω    (2.11)

and characterize the optimal control u by

    u = −δ⁻¹ p(u)    (2.12)

From the relations (2.9), (2.11) and Green's formula (2.3) we get

    ⟨y(u) − y_d, y(v) − y(u)⟩_{L²(Ω)} = ⟨A*p(u), y(v) − y(u)⟩_{L²(Ω)}
        = ⟨p(u), A(y(v) − y(u))⟩_{L²(Ω)} + ⟨p(u), B(y(v) − y(u))⟩_{L²(Γ)} − ⟨Cp(u), y(v) − y(u)⟩_{L²(Γ)}
        = ⟨p(u), v − u⟩_{L²(Ω)}

so the variational inequality becomes

    ⟨p(u) + δu, v − u⟩_{L²(Ω)} ≥ 0   for all v ∈ U_ad

Since U_ad is a linear space, we obtain (2.12). Now, eliminating u in (2.9) by (2.12), we get a coupled system of two elliptic partial differential equations:

    Ay(u) = f − δ⁻¹ p(u)   in Ω
    By(u) = g              on ∂Ω
    A*p(u) = y(u) − y_d    in Ω
    Cp(u) = 0              on ∂Ω    (2.13)

A multigrid method is applied to this coupled system (Chapter 5). The boundary conditions here are of Neumann type; By and Cp can be replaced by y(u)|_Γ and p(u)|_Γ to get the analogous results for the Dirichlet problem. The optimality system can be represented in different ways; the next section deals with the representation of the control on the boundary, that is, boundary control.

2.2 Boundary Control Problem

This section studies examples of boundary control problems with boundary conditions of either Neumann or Dirichlet type. In this case the control is restricted to the whole boundary of the domain Ω or to parts of it. The first order necessary conditions will be given explicitly for the two boundary control problems.

2.2.1 Neumann Boundary Control Problem

This is an example of a control problem involving an elliptic differential equation with Neumann boundary conditions, where the control is defined on ∂Ω. Let y(u) be defined by

    Ay(u) = f       in Ω
    By(u) = u + g   on ∂Ω    (2.14)

for u ∈ U_ad = U = L²(Γ). The cost functional

    J(y(u), u) := (1/2) ‖y(u) − y_d‖²_{L²(Ω)} + (δ/2) ‖u‖²_{L²(Γ)}

has to be minimized, where y_d ∈ L²(Ω) is the target state.

The optimal control is defined on the boundary and is given by

    u = −δ⁻¹ p(u)|_Γ    (2.15)

where p(u) is the solution of the adjoint elliptic differential equation (2.11). The resulting optimality system is

    Ay(u) = f                  in Ω
    By(u) = g − δ⁻¹ p(u)|_Γ    on Γ
    A*p(u) = y(u) − y_d        in Ω
    Cp(u) = 0                  on Γ    (2.16)

The system above describes the behaviour on the boundary. The other way in which the boundary control problem with Neumann boundary condition can be represented is the following. Consider the cost functional which describes the behaviour on the boundary,

    J(y(u), u) := (1/2) ‖y(u) − y_d‖²_{L²(Γ)} + (δ/2) ‖u‖²_{L²(Γ)}

which has to be minimized, where y_d ∈ L²(Γ) is the target state. Let y(u) fulfil the state equation (2.14); then (2.15) yields the optimal control, provided p(u) solves the adjoint partial differential equation

    A*p(u) = 0                 in Ω
    Cp(u)|_Γ = y(u)|_Γ − y_d   on Γ    (2.17)

Then the resulting optimality system becomes

    Ay(u) = f                     in Ω
    By(u)|_Γ = g − δ⁻¹ p(u)|_Γ    on Γ
    A*p(u) = 0                    in Ω
    Cp(u)|_Γ = y(u)|_Γ − y_d      on Γ    (2.18)

which is coupled through the boundary conditions.

2.2.2 Dirichlet Boundary Control Problem

Let the target/desired state be y_d ∈ L²(Ω) and the set of admissible controls be U_ad = U = L²(Γ). Let y(u) be defined by

    Ay(u) = f      in Ω
    y(u) = u + g   on ∂Ω    (2.19)

The cost functional

    J(y(u), u) := (1/2) ‖y(u) − y_d‖²_{L²(Ω)} + (δ/2) ‖u‖²_{L²(Γ)}

has to be minimized.

The optimal control on the boundary is given by

    u = −δ⁻¹ ∂p(u)/∂n |_Γ

where p(u) is the solution of the adjoint elliptic differential equation

    A*p(u) = y(u) − y_d   in Ω
    p(u) = 0              on Γ    (2.20)

Then the resulting optimality system becomes

    Ay(u) = f                         in Ω
    y(u)|_Γ = g − δ⁻¹ ∂p(u)/∂n |_Γ    on Γ
    A*p(u) = y(u) − y_d               in Ω
    p(u) = 0                          on Γ    (2.21)

which is coupled through the boundary condition.

2.2.3 Existence and Uniqueness of the Functional Minimum

In this section the existence and uniqueness of the control function as the minimizer of the cost functional are analyzed. The cost functional J(y(u), u), with J : Y × U → R, is investigated, with y(u) and u related by the operator S : U → Y and the new cost functional defined by (2.6), (2.7), (2.8).

Theorem 2.4 (Existence of the minimiser). Let U be a Hilbert space, U_ad ⊆ U non-empty, bounded, closed and convex, and F : U → R weakly lower semi-continuous and radially unbounded, that is:

1. u_n ⇀ u* implies F(u*) ≤ lim inf_{n→∞} F(u_n)   (weak lower semi-continuity);
2. ‖u‖ → ∞ implies F(u) → ∞   (radial unboundedness).

With the linear and bounded operator S, assume F is bounded from below by a constant C ∈ R with C ≤ F(u) for all u ∈ U. Then the minimization problem min_{u ∈ U_ad} F(u) has a solution u* ∈ U_ad.

Proof: Since F is bounded from below, the infimum F* := inf_{u ∈ U_ad} F(u) exists. This means there is a sequence (u_n) ⊂ U_ad with F* = lim_{n→∞} F(u_n). By the radial unboundedness of F, (u_n) is bounded, and there exists a weakly converging subsequence (u_{n_k}) of (u_n) such that

    u_{n_k} ⇀ u*   as k → ∞.

Since U_ad is a closed and convex subset of the Hilbert space, it is weakly closed, and hence u* ∈ U_ad. Since S is linear and bounded, weak convergence in Y follows:

    Su_{n_k} ⇀ Su*   as k → ∞.

Due to the weak lower semi-continuity of F we obtain

    F(u*) ≤ lim inf_{k→∞} F(u_{n_k}) = F*.

Since F* is the infimum, it follows that F(u*) = F*. Hence the minimum is attained at u*.

Theorem 2.5 (Uniqueness). Let the conditions of Theorem 2.4 hold. If additionally F is strictly convex, then there exists at most one optimal control.
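These statements can be checked numerically once S is discretized to a matrix: for δ > 0 the reduced functional (2.8) is a strictly convex quadratic, so by Theorems 2.2, 2.4 and 2.5 it has exactly one minimizer, and with U_ad = U the variational inequality reduces to F′(u) = 0. A small sketch (the matrix S below is an arbitrary stand-in, not a discretized PDE solution operator):

```python
import numpy as np

def reduced_cost(S, u, y_d, delta):
    """F(u) = 1/2 ||S u - y_d||^2 + (delta/2) ||u||^2, cf. (2.8)."""
    r = S @ u - y_d
    return 0.5 * r @ r + 0.5 * delta * u @ u

def optimal_control(S, y_d, delta):
    """Unique minimizer of F: F'(u) = S^T (S u - y_d) + delta u = 0,
    i.e. the normal equations (S^T S + delta I) u = S^T y_d."""
    n = S.shape[1]
    return np.linalg.solve(S.T @ S + delta * np.eye(n), S.T @ y_d)
```

Any perturbation of the computed u strictly increases F, which is exactly the uniqueness statement of Theorem 2.5 in this discretized setting.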

Chapter 3

DISCRETIZATION

This chapter deals with the discretization of the optimization model problem (the optimality system (2.13) of the distributed control problem) by the finite element method. The ingredients of the finite element discretization are the variational formulation, where the test spaces are introduced, and the existence and uniqueness of the solution. The finite element method described here is based on the references Braess ([3]) and Knabner and Angermann ([11]). The convergence, stability and approximation properties of the solution are also briefly discussed. The discretization of the optimality system and finally the discretization of the operator K of the integral equation characterisation of the optimality system are also demonstrated. In this chapter the Galerkin method is used to discretize the distributed control problem (2.4), (2.5) with Neumann boundary condition. From now on we assume that A = −Δ + I and B = ∂/∂n.

3.1 Variational Formulation

Assuming the existence of a classical solution, the following steps are performed in general:

Step 1: Multiplication of the differential equation by test functions that are chosen compatible with the type of boundary condition, and subsequent integration over the domain Ω.

Step 2: Integration by parts and incorporation of the boundary conditions in order to derive a suitable bilinear form.

Step 3: Verification of the required properties like ellipticity and continuity (Knabner and Angermann ([11])).

The solution of the elliptic differential equation is based on the weak formulation in Sobolev spaces. We demonstrate the variational formulation for the state partial differential equation; that for the adjoint follows immediately.

We choose a test function w ∈ W = H¹, multiply the differential equation by the test function and integrate by parts:

    ∫_Ω (−Δy + y) w dx = ∫_Ω (f + u) w dx

    ∫_Ω ∇y · ∇w dx − ∫_{∂Ω} (∂y/∂n) w ds + ∫_Ω y w dx = ∫_Ω (f + u) w dx

    ∫_Ω ∇y · ∇w dx + ∫_Ω y w dx = ∫_Ω (f + u) w dx + ∫_Γ g w ds

Define the bilinear form

    a : H¹ × H¹ → R,   a(y, w) = ∫_Ω ∇y · ∇w dx + ∫_Ω y w dx    (3.1)

and the linear form

    F : H¹ → R,   F(w) = ∫_Ω (f + u) w dx + ∫_Γ g w ds    (3.2)

The variational formulation: find y ∈ H¹ such that

    a(y, w) = F(w)   for all w ∈ W = H¹    (3.3)

Definition 3.1. Let W be a Hilbert space. A bilinear form a : W × W → R is called:

- symmetric if a(y, w) = a(w, y) for all w, y ∈ W;
- continuous if |a(y, w)| ≤ C ‖y‖_W ‖w‖_W;
- coercive if a(w, w) ≥ γ ‖w‖²_W.

The variational formulations of the state and adjoint elliptic partial differential equations for the distributed optimal control system (2.15), (2.16), for the given control u, read

    a(y(u), w) = ⟨f + u, w⟩_{L²(Ω)} + ⟨g, w⟩_{L²(Γ)}   for all w ∈ W = H¹
    a(w, p(u)) = ⟨w, y(u) − y_d⟩_{L²(Ω)}               for all w ∈ W = H¹

In this work a(y(u), w) = a(w, p(u)), so it is enough to assemble a(y(u), w). The central theorem that ensures the unique solvability of the variational problem is the Lax–Milgram theorem, which also holds for convex sets.

Theorem 3.2. The Lax Migram Theorem Let W be a Hibert space. Let a : W W R be a continuous, coercive biinear form and F : W R a inear functiona. Then the variationa equation a(y, w) = F(w) for a w W (3.4) has a unique soution y W. Moreover, the soution satisfies the estimate y W 1 γ F W (3.5) 3.2 Finite Eement Method In this section we discuss the finite eement discretization of our domain Ω = (0, 1) 2. This can be achieved with the foowing genera steps. 1. Discretize the domain Ω, that is to divide the soution region into finite eements(subdomains). The soution domain is divided into severa simper finite eements, where each eement has a simpe geometry, so appropriate assumed soutions can easiy be written for the eement. In this thesis the sub-domains are the trianges. 2. Estabish the matrix equation for the finite eement which reates the noda vaues of the unknown function to other parameters. In this work we use the Gaerkin method. 3. To find the goba equation system for the whoe soution, a eement equations must be assembed. This invoves the combination oca eement equations for a eements used for discretization. The Neumann boundary condition is incorporated on the right hand side 4. Soving the goba equation system. Since the finite eement goba equation system is typicay sparse, symmetric and positive definite, direct and iterative methods can be used for the soution. In this work the mutigrid method which is an iterative sover is used. The noda vaues of the sought function are produced as a resut of the soution, since approximating functions are determined in terms of noda vaues of as physica fied which is sought. 3.2.1 Formuation of the Finite Eement Method As we have mentioned in (1-4) above, severa approaches can be used to transform the physica formuation of the probem to its finite eement discrete state. 
Since the problem is described by an elliptic differential equation, the most popular method for its finite element formulation is the Galerkin method.

The Galerkin Method

The variational problem of the model problem is given by: find $y \in W$ such that $a(y, w) = F(w)$ for all $w \in W$, where

$W$ is the solution space with $W = H^1$;

$a(\cdot,\cdot)$ is a continuous bilinear form on $W \times W$;

$F(\cdot)$ is a continuous linear form on $W$.

The idea is to replace the infinite dimensional space $W$ by a finite dimensional space $W_h \subset W$, consisting of piecewise polynomials of a fixed degree associated with a subdivision of the computational domain. Let $H_\ell$, for $\ell \in \mathbb{N}_0$, be a sequence of finite dimensional subspaces of $W_h$ defined on each level of refinement,

$H_\ell \subset H_{\ell+1} \subset \dots \subset W_h \subset W = H^1,$

where $H_{\ell+1}$ is the subspace corresponding to $\Omega_{\ell+1}$, the refinement of the mesh $\Omega_\ell$ associated with $H_\ell$, so that $\Omega_\ell \subset \Omega_{\ell+1} \subset \Omega$. An example is illustrated in the figure below, where $\Omega_0$ is the initial mesh ($\ell = 0$) and $\Omega_1$ is a refinement of $\Omega_0$.

Figure 3.1: Left: Initial Mesh $\Omega_0$, Right: One Refinement $\Omega_1$

Now consider the variational problem in the finite dimensional subspace: find $y_\ell \in H_\ell$ such that

$a(y_\ell, w_\ell) = F(w_\ell) \quad \forall w_\ell \in H_\ell.$

Suppose that $\dim H_\ell = N_\ell$. To calculate the solution we choose linearly independent basis functions with small support,

$H_\ell = \operatorname{span}\{\varphi_1, \varphi_2, \dots, \varphi_{N_\ell}\}.$

Expressing the approximate solution $y_\ell$ in terms of the basis functions,

$y_\ell = \sum_{i=1}^{N_\ell} \xi_i \varphi_i, \qquad \xi_i \in \mathbb{R}, \; i = 1, 2, \dots, N_\ell,$

the new variational problem reads

Find $\xi = (\xi_1, \xi_2, \dots, \xi_{N_\ell}) \in \mathbb{R}^{N_\ell}$ such that

$\sum_{i=1}^{N_\ell} a(\varphi_j, \varphi_i)\, \xi_i = F(\varphi_j), \qquad j = 1, \dots, N_\ell.$ (3.6)

This is a linear system of equations for $\xi = (\xi_1, \dots, \xi_{N_\ell})^T$ with matrix $A = (a(\varphi_j, \varphi_i)) \in \mathbb{R}^{N_\ell \times N_\ell}$ and right-hand side $b_j = F(\varphi_j)$, $b \in \mathbb{R}^{N_\ell}$. Since the $\varphi_i$ have small support, $a(\varphi_j, \varphi_i) = 0$ for most $i$ and $j$, so the matrix $A$ is sparse (most of the entries are zeros).

Lemma 3.3. Let $a$ be a coercive bilinear form; then the matrix $A$ is positive definite. A symmetric bilinear form implies symmetry of the matrix $A$.

3.2.2 Finite Elements

In this section the construction of the finite element method is described. Let $H_\ell \subset W_h$ consist of continuous piecewise linear functions. Let $\Omega = (0,1)^2$ be a bounded domain with boundary $\partial\Omega = \Gamma$. The domain $\Omega$ can be covered with a finite number of triangles as shown in the figure below.

Figure 3.2: An example of a triangulation

The subdivision of the domain into triangles is called a triangulation.

Definition 3.4 (Triangulation). Let $\Omega \subset \mathbb{R}^2$. A partition $T_h = \{T_1, T_2, \dots, T_N\}$ of $\Omega$ into triangular elements is called admissible provided the following properties hold:

1. $\bar\Omega = \bigcup_{T \in T_h} T$.

2. If $T_i \cap T_j$ consists of exactly one point, then it is a common vertex of $T_i$ and $T_j$.

3. If, for $i \ne j$, $T_i \cap T_j$ consists of more than one point, then $T_i \cap T_j$ is a common edge of $T_i$ and $T_j$.

Here $h := \max_{1 \le i \le N} \operatorname{diam}(T_i)$ denotes the maximum diameter of all $T \in T_h$. Items 2 and 3 imply that $T_i$ and $T_j$ are adjacent.

With each node we associate the basis function $\varphi$ which is equal to 1 at that node and 0 at all other nodes. For example, at node $x_j$ we have

$\varphi_i(x_j) = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \ne j \end{cases}$

$\varphi$ is assumed to be continuous on $\Omega$ and linear on each of the triangles. Suppose the nodes are labelled $1, \dots, N_\ell$ and let $\varphi_1(x,y), \dots, \varphi_{N_\ell}(x,y)$ be the corresponding basis functions. The functions $\varphi_1, \dots, \varphi_{N_\ell}$ are linearly independent and span an $N_\ell$-dimensional linear subspace $H_\ell$ at level $\ell$. For our model problem the finite element formulation can be restated as: find $\xi = (\xi_1, \dots, \xi_{N_\ell}) \in \mathbb{R}^{N_\ell}$ such that

$\sum_{i=1}^{N_\ell} \int_\Omega \Big( \frac{\partial \varphi_i}{\partial x}\frac{\partial \varphi_j}{\partial x} + \frac{\partial \varphi_i}{\partial y}\frac{\partial \varphi_j}{\partial y} + \varphi_i \varphi_j \Big)\, \xi_i \, dx\,dy = \int_\Omega (f+u)\,\varphi_j \, dx\,dy + \int_{\partial\Omega} g\, \varphi_j \, ds$ (3.7)

for $j = 1, \dots, N_\ell$. Letting $A = (a_{ij})$ and $b = (F_1, \dots, F_{N_\ell})^T$, where

$a_{ij} = a_{ji} = \int_\Omega \Big( \frac{\partial \varphi_i}{\partial x}\frac{\partial \varphi_j}{\partial x} + \frac{\partial \varphi_i}{\partial y}\frac{\partial \varphi_j}{\partial y} + \varphi_i \varphi_j \Big)\, dx\,dy, \qquad b_j = \int_\Omega (f+u)\,\varphi_j \, dx\,dy + \int_{\partial\Omega} g\,\varphi_j\, ds,$

the finite element approximation can be written as a system of linear equations $A_\ell \xi = b_\ell$.

3.2.3 Assembling of Matrices

In subsection 3.2.2 we developed a system of equations; we now calculate each of the terms involved in this system. It is important to note that each $\varphi_i$ defined in the previous subsection has support only on the few elements that share node $i$. Thus, once a regular triangulation $T_h$ has been generated for the domain, we can calculate the stiffness matrix $A_\ell$, the mass matrix $N_\ell$ and the right-hand side $b_\ell$, which combines the contributions of $f$ and $g$, at each level. For the stiffness and mass matrices we have

$A_{ij} = \sum_{T \in T_h} \int_T \nabla\varphi_i \cdot \nabla\varphi_j \, dx, \qquad N_{ij} = \sum_{T \in T_h} \int_T \varphi_i\, \varphi_j \, dx,$

and for the right-hand side we consider the construction of $f$ and $g$:

$f_j = \sum_{T \in T_h} \int_T f\, \varphi_j \, dx, \qquad g_j = \sum_{T \in T_h} \int_E g\, \varphi_j \, ds.$

Definition 3.5. The local stiffness and mass matrices $A^{(T)}, N^{(T)} \in \mathbb{R}^{3\times 3}$ are defined by

$(A^{(T)})_{ij} = \int_T \nabla\varphi_i \cdot \nabla\varphi_j \, dx$ for $i, j = 1, 2, 3$ (3.8)

$(N^{(T)})_{ij} = \int_T \varphi_i\, \varphi_j \, dx$ for $i, j = 1, 2, 3$ (3.9)

For a triangular element $T \in T_h$ with vertices $(x_1, y_1), (x_2, y_2), (x_3, y_3)$, let $\varphi_1, \varphi_2, \varphi_3$ be the corresponding basis functions in $H_\ell$. We denote the area of the triangle by $|T|$ with

$|T| = \frac{1}{2}\left|\det \begin{pmatrix} 1 & 1 & 1 \\ x_1 & x_2 & x_3 \\ y_1 & y_2 & y_3 \end{pmatrix}\right|$ (3.10)

Since

$\begin{pmatrix} \varphi_1(x,y) \\ \varphi_2(x,y) \\ \varphi_3(x,y) \end{pmatrix} = \begin{pmatrix} 1 & 1 & 1 \\ x_1 & x_2 & x_3 \\ y_1 & y_2 & y_3 \end{pmatrix}^{-1} \begin{pmatrix} 1 \\ x \\ y \end{pmatrix}$ (3.11)

it can easily be computed that

$\nabla\varphi_i(x,y) = \frac{1}{2|T|} \begin{pmatrix} y_{i+1} - y_{i+2} \\ x_{i+2} - x_{i+1} \end{pmatrix}$, indices taken modulo 3.

Then

$\int_T \nabla\varphi_i \cdot \nabla\varphi_j \, dx = \frac{1}{4|T|} \begin{pmatrix} y_{i+1} - y_{i+2} \\ x_{i+2} - x_{i+1} \end{pmatrix} \cdot \begin{pmatrix} y_{j+1} - y_{j+2} \\ x_{j+2} - x_{j+1} \end{pmatrix}$

This gives the local stiffness matrix for the term $\int_T \nabla\varphi_i \cdot \nabla\varphi_j \, dx$, which can be expressed as

$A^{(T)} = |T|\, G\, G^T, \qquad G = \begin{pmatrix} 1 & 1 & 1 \\ x_1 & x_2 & x_3 \\ y_1 & y_2 & y_3 \end{pmatrix}^{-1} \begin{pmatrix} 0 & 0 \\ 1 & 0 \\ 0 & 1 \end{pmatrix}$ (3.12)

where $G$ collects the gradients $\nabla\varphi_i$ row-wise. Now for the mass matrix, which comes from the term $\int_T \varphi_i \varphi_j \, dx$: using the quadrature rule we get

$(N^{(T)})_{ij} = \frac{|T|}{12}(1 + \delta_{ij}), \qquad i, j = 1, 2, 3,$

which is

$N^{(T)} = \frac{|T|}{12} \begin{pmatrix} 2 & 1 & 1 \\ 1 & 2 & 1 \\ 1 & 1 & 2 \end{pmatrix}$ (3.13)

After computing the element stiffness and mass matrices we sum over all the elements to obtain the global matrices. As already mentioned, the stiffness and mass matrices for the state and adjoint equations take the same form.

The Right Hand Side

Since we are going to apply the multigrid method to a system of elliptic partial differential equations, we only need to assemble the desired state. The solution procedure will require us to input the initial control into the state equation, solve for the state, input the state into the adjoint equation and finally find the optimal control. To assemble the desired state we use a quadrature rule. The term involving $y_{d,\ell}$ can be written as

$\int_\Omega y_{d,\ell}\, \varphi_j \, dx = \sum_{T \in T_h} \int_T y_{d,\ell}\, \varphi_j \, dx$ (3.14)

We approximate the integral $\int_T y_{d,\ell}\, \varphi_j \, dx$ using the midpoint rule, where $(x_s, y_s)$ is the centroid of each element. In the simplest case the numerical realization of this term involves a one-point quadrature:

$\int_T y_{d,\ell}\, \varphi_j \, dx \approx \frac{|T|}{3}\, y_{d,\ell}(x_s, y_s)$ (3.15)

This is the same as multiplying the element mass matrix by the values of $y_{d,\ell}$ at the nodes:

$(y_{d,\ell})^{(T)} = N^{(T)}\, y_d(x_i, y_i), \qquad i = 1, 2, 3.$ (3.16)
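As a concrete illustration, the local matrices (3.12), (3.13) and their summation into global matrices can be sketched as follows. This is a minimal Python sketch (the thesis implementation is in Matlab); the function names and the two-triangle test mesh are illustrative, not from the thesis, and dense storage is used for brevity where the text's global matrices are sparse.

```python
import numpy as np

def local_matrices(x, y):
    """Local stiffness A_T and mass N_T of a P1 triangle, eqs. (3.12)-(3.13).

    x, y: length-3 arrays of vertex coordinates (x_1..x_3, y_1..y_3).
    """
    # |T| from the determinant in (3.10)
    area = 0.5 * abs((x[1]-x[0])*(y[2]-y[0]) - (x[2]-x[0])*(y[1]-y[0]))
    # rows of G are the constant gradients of the three hat functions,
    # grad(phi_i) = (y_{i+1}-y_{i+2}, x_{i+2}-x_{i+1}) / (2|T|), indices mod 3
    G = np.array([[y[(i+1) % 3] - y[(i+2) % 3],
                   x[(i+2) % 3] - x[(i+1) % 3]] for i in range(3)]) / (2.0*area)
    A_T = area * (G @ G.T)                           # int_T grad(phi_i).grad(phi_j)
    N_T = area/12.0 * (np.ones((3, 3)) + np.eye(3))  # |T|/12 * (1 + delta_ij)
    return area, A_T, N_T

def assemble(p, t):
    """Sum element matrices into global stiffness and mass matrices.

    p: (num_nodes, 2) node coordinates; t: (num_triangles, 3) vertex indices.
    """
    n = p.shape[0]
    A, N = np.zeros((n, n)), np.zeros((n, n))
    for tri in t:
        _, A_T, N_T = local_matrices(p[tri, 0], p[tri, 1])
        A[np.ix_(tri, tri)] += A_T
        N[np.ix_(tri, tri)] += N_T
    return A, N
```

On the reference triangle the routine reproduces the classical P1 matrices; the row sums of $A^{(T)}$ vanish because constant functions lie in the kernel of the gradient, and the entries of $N^{(T)}$ sum to the element area.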

The term involving $f$ can be written as

$\int_\Omega f\, \varphi_j \, dx = \sum_{T \in T_h} \int_T f\, \varphi_j \, dx$ (3.17)

We approximate the integral $\int_T f\, \varphi_j \, dx$ using the midpoint rule, with $(x_s, y_s)$ the centroid of each element:

$\int_T f\, \varphi_j \, dx \approx \int_T f(x_s, y_s)\, \varphi_j \, dx$ (3.18)

In the simplest case the numerical realization involves a one-point quadrature, so the integral can be evaluated as

$\int_T f\, \varphi_j \, dx \approx \frac{|T|}{3}\, f(x_s, y_s)$ (3.19)

Now for the boundary term

$g_j = \sum_{E} \int_E g\, \varphi_j \, ds$ (3.20)

where $E$ is a boundary edge of an element. Let $(x_m, y_m)$ be the midpoint of the edge. Then the integration over the edge yields

$\int_E g\, \varphi_j \, ds \approx g(x_m, y_m)\, |E|$ (3.21)

where $|E|$ is the length of the edge.

3.2.4 Approximation Properties

Having established the construction of the finite elements for the state and adjoint elliptic equations, we now establish approximation properties of our solutions. We have the variational formulations for the state and adjoint elliptic partial differential equations,

$a(y(u), w) = \langle f + u, w \rangle_{L^2(\Omega)} \quad \forall w \in H^1$

$a(w, p(u)) = \langle w, y_d - y(u) \rangle_{L^2(\Omega)} \quad \forall w \in H^1$

We keep in mind that the two equations have the same bilinear form $a(\cdot,\cdot)$, which is $W$-elliptic, continuous and symmetric. Let $H_\ell$, $\ell \in \mathbb{N}_0$, be a sequence of subspaces with

$H_\ell \subset H_{\ell+1} \subset \dots \subset H^s$, $s \ge 1$; in our case $s = 1$. The discrete solutions at each level of discretization are then defined as follows.

Definition 3.6. Find discrete solutions $y_\ell(v), p_\ell(v) \in H_\ell$ such that

$a(y_\ell(v), w_\ell) = \langle f + v, w_\ell \rangle_{L^2(\Omega)} \quad \forall w_\ell \in H_\ell$ (3.22)

$a(w_\ell, p_\ell(v)) = \langle w_\ell, y_{d,\ell} - y_\ell(v) \rangle_{L^2(\Omega)} \quad \forall w_\ell \in H_\ell$ (3.23)

This means that at each level we can compute the discrete solutions for the state and adjoint depending on the discrete control $v$. The subspaces $H_\ell$ are connected by the step size $h_\ell$ and the approximation property. To begin, the approximation property is Céa's lemma in the general subspace.

Lemma 3.7. Let the bilinear form $a : W \times W \to \mathbb{R}$ be continuous and $W$-elliptic. Suppose $y$ and $y_h$ are the solutions of the variational formulation in $W$ and $W_h \subset W$ respectively; then

$\|y - y_h\|_W \le \frac{C}{\gamma} \inf_{w_h \in W_h} \|y - w_h\|_W$ (3.24)

PROOF: By the definition of $y$ and $y_h$,

$a(y, w) = F(w) \quad \forall w \in W$

$a(y_h, w) = F(w) \quad \forall w \in W_h$

Since $W_h \subset W$, subtraction gives

$a(y - y_h, w) = 0 \quad \forall w \in W_h$ (Galerkin orthogonality)

Let $w_h \in W_h$; with $w = w_h - y_h \in W_h \subset W$ we have $a(y - y_h, w_h - y_h) = 0$, and by coercivity and continuity

$\gamma \|y - y_h\|_W^2 \le a(y - y_h, y - y_h) = a(y - y_h, y - w_h) + a(y - y_h, w_h - y_h) \le C \|y - y_h\|_W \|y - w_h\|_W \quad \forall w_h \in W_h$

Hence, dividing by $\|y - y_h\|_W$, the result follows.

Céa's lemma says that the accuracy of the numerical solution depends on the choice of function spaces capable of approximating the solution $y$; the choice of $W_h$ is important. Let $W_h$ be the space of piecewise linear finite elements,

$W_h = \{ y \in C^0(\bar\Omega) : y|_{T_i} \in P_1, \; T_i \in T \}$

The error estimates are primarily based on estimating the approximation error $\inf_{w_h \in W_h} \|y - w_h\|_W$. By Céa's lemma the discretization error is estimated by the approximation error, which in turn is estimated by the interpolation error. Let the continuous solution be regular enough, that is $y \in H^2(\Omega)$. For arbitrary $w \in H^2(\Omega)$ define the interpolation operator

$\Pi_h : H^2(\Omega) \to W_h, \quad w \mapsto \Pi_h w$

Hence it follows that

$\inf_{w_h \in W_h} \|y - w_h\|_{H^1} \le \|y - \Pi_h y\|_{H^1}$ (3.25)

As a consequence, it is enough to deal with the interpolation error $\|y - \Pi_h y\|_{H^1}$ for convergence results. The main ideas are:

1. localise the error on the triangles (elements);

2. transform the triangles to the reference triangle;

3. compute the local interpolation error;

4. transform back to the triangles (elements).

Following ideas 1-4 above, the interpolation error in our case is given by the following theorem.

Theorem 3.8. Let $y \in H^2(\Omega)$ with $\Omega = (0,1)^2$. Then the interpolation error satisfies

$\|y - \Pi_h y\|_{H^1} \le C h \|y\|_{H^2}$

The convergence then follows from Céa's lemma.

Theorem 3.9. Let $T$ be a quasi-uniform triangulation of $\Omega$, and let $y$ and $y_h$ be the solutions of the continuous and discrete variational equations respectively. Then for $y \in H^2(\Omega)$,

$\|y - y_h\|_{H^1} \le C h \|y\|_{H^2}$ (3.26)

where $C$ is independent of the level $\ell$.

This gives convergence in the $H^1$-norm: the finite element error converges linearly in $H^1$, that is $\|y - y_h\|_{H^1} = O(h)$. Similarly, convergence in the $L^2$-norm follows from the Aubin-Nitsche theorem on shape regular triangulations.

Theorem 3.10 (Aubin-Nitsche). Suppose that $\Omega$ is a convex polygon and that $T_h$ is a regular family of meshes on $\Omega$. Then

$\|y - y_h\|_{L^2(\Omega)} \le C h^2 \|y\|_{H^2}$ (3.27)

for some constant $C > 0$.

This implies quadratic convergence of the finite element method in the $L^2$-norm, that is $\|y - y_h\|_{L^2(\Omega)} = O(h^2)$. Hence in our case the following stability estimates are valid for $f = y_d = 0$ and $g = 0$:

$\|y(u)\|_{H^1(\Omega)} \le C \|u\|_{L^2(\Omega)}, \qquad \|p(u)\|_{H^4(\Omega)} \le C \|y(u)\|_{H^1(\Omega)}$

3.3 Discrete Optimality System

For the optimization problem there are two approaches: first optimize, then discretize, or first discretize, then optimize (Hinze [10]). In this work the approach is first optimize, then discretize. The continuous optimality system (2.17) is transformed into the discrete optimality system by a finite element discretization. Consider a sequence of discretizations with different step sizes $h_\ell$ and levels of refinement $\ell$. Fix the coarsest grid size $h_0$ and define

$h_\ell = 2^{-\ell} h_0, \qquad \ell \in \mathbb{N}_0 = \{0, 1, 2, \dots\}$

where $\ell$ is the level number. The discrete optimality system resulting from the Galerkin method is

$\min_{(y_\ell, u_\ell) \in L^2 \times L^2} J(y_\ell, u_\ell) = \frac{1}{2}\|y_\ell - y_{d,\ell}\|_{L^2(\Omega)}^2 + \frac{\delta}{2}\|u_\ell\|_{L^2(\Omega)}^2$ (3.28)

subject to the constraints

$A_\ell y_\ell + N_\ell y_\ell = N_\ell f_\ell + N_\ell u_\ell$ (3.29)

$A_\ell p_\ell + N_\ell p_\ell = N_\ell y_{d,\ell} - N_\ell y_\ell$ (3.30)

$N_\ell u_\ell = \frac{1}{\delta} N_\ell p_\ell$ (3.31)

where $A_\ell$ and $N_\ell$ are the stiffness and mass matrices at level $\ell$ respectively.

Remark 3.11. The state and adjoint elliptic partial differential equations have the same stiffness and mass matrices at each level $\ell$.

The idea is to solve the set of constraints for the values of the state and the control that minimise the objective function. The solution procedure is then defined as follows (Hackbusch [6]):

1. choose the initial control $u^0$;

2. find $y = S u^0$, i.e. solve the elliptic state equation, where $S$ involves $A_\ell$ and $N_\ell$;

3. find $p = S^*(y_{d,\ell} - y)$;

4. find $u = \frac{1}{\delta} p$;

5. here $S^*$ is the adjoint solution operator, with $S^* = S$ by symmetry;

6. set $u^0 = u$ and go to (1), with the multigrid method coming into play.

From (1-6) it follows that we can calculate the discrete state $y_\ell(v_\ell)$ from the discrete control $v_\ell$ and the discrete adjoint $p_\ell(v_\ell)$ from the discrete state $y_\ell(v_\ell)$. Then we can define the optimal discrete control $u_\ell$ for the distributed control problem as

$u_\ell = \delta^{-1} p_\ell(v_\ell)$ (3.32)

The whole set of constraints (3.29-3.31) of our discrete optimal control system can be expressed as a matrix system

$\begin{pmatrix} A_\ell + N_\ell & O & -N_\ell \\ N_\ell & A_\ell + N_\ell & O \\ O & -N_\ell & \delta N_\ell \end{pmatrix} \begin{pmatrix} y_\ell \\ p_\ell \\ u_\ell \end{pmatrix} = \begin{pmatrix} N_\ell f_\ell \\ N_\ell y_{d,\ell} \\ O \end{pmatrix}$ (3.33)

This defines the matrix equation of our solution procedure for the optimal control. The formulation (3.33) can be characterized by an integral equation.
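The solution procedure (1-6) can be sketched as a simple fixed-point loop. This is an illustrative Python sketch, not the thesis's Matlab code: the function name is hypothetical, dense solves stand in for the multigrid solver, and the signs follow the convention used here, $a(w,p) = \langle w, y_d - y\rangle$ with $u = p/\delta$. Note that this plain fixed-point iteration is only a contraction for sufficiently large $\delta$ (the thesis observes divergence for very small $\delta$).

```python
import numpy as np

def optimality_iteration(A, N, f, yd, delta, u0, tol=1e-8, maxit=50):
    """Fixed-point sketch of steps (1-6): state solve, adjoint solve,
    control update u = p/delta.

    A, N: stiffness and mass matrices (dense arrays, for illustration);
    f, yd: load and desired-state nodal vectors; u0: initial control.
    Returns (control, number of iterations).
    """
    S = A + N                                   # system matrix of a(., .)
    u = u0.copy()
    for k in range(maxit):
        y = np.linalg.solve(S, N @ (f + u))     # state:   (A+N) y = N (f+u)
        p = np.linalg.solve(S, N @ (yd - y))    # adjoint: (A+N) p = N (y_d - y)
        u_new = p / delta                       # optimality: u = p / delta
        if np.linalg.norm(u_new - u) < tol:
            return u_new, k + 1
        u = u_new
    return u, maxit
```

On a scalar toy problem ($A = N = 1$, $f = 0$, $y_d = 1$, $\delta = 1$) the fixed point can be computed by hand: $u = \tfrac{1}{2}(1 - u/2)$, i.e. $u = 0.4$, which the loop reproduces.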

3.3.1 Integral Equation Characterizing the Optimal Control

The Operator K

The mapping $u \mapsto y(u) \mapsto p(u) \mapsto \delta^{-1} p(u)$ is affine and defines a linear operator $K$ such that the optimal control of (2.15) has the representation

$\delta^{-1} p(u) = K u + q$ (3.34)

This defines the solution procedure (Hackbusch [6]). The operator $K$ is linear, and $K$ and its powers map into a space with a finer topology; the power of $K$ signifies the number of times the operator $K$ is applied in the solution process. Let $B_0$ and $B_1 \subset B_0$ be two Banach spaces, where $B_1$ is finer than $B_0$; $K^m$ has to satisfy

$\|K^m\|_{B_0 \to B_1} \le C, \qquad m \ge 1 \text{ fixed.}$

In Ref. (5), for the choices of the two Banach spaces $B_0$ and $B_1$ for the distributed control equation (2.15, 2.16): $u \in L^2(\Omega)$ implies that $y(u) \in H^2(\Omega)$ and $p(u) \in H^4(\Omega)$, if $\Gamma$ and the coefficients are smooth. If $m = 1$, then $B_0 = L^2(\Omega)$ and $B_1 = H^4(\Omega)$. More generally,

$K^m : L^2(\Omega) \to H^{4m}(\Omega)$ continuous for all $m \ge 1$

For the other examples:

1. for distributed control with Dirichlet boundary condition, $K^m : L^2(\Omega) \to H^{m}(\Omega)$ continuous;

2. for the Neumann boundary control problem (2.19), $K^m : L^2(\Gamma) \to H^{3m}(\Gamma)$ continuous;

3. for boundary control with Dirichlet boundary condition (2.24), $K^m : L^2(\Gamma) \to H^{m}(\Gamma)$ continuous.

The discrete optimality system (3.28-3.31) is going to be solved using the multigrid method. Let $v_\ell$ be the discrete control, $y_\ell(v_\ell), p_\ell(v_\ell)$ the discrete state and adjoint, and $u_\ell$ the desired control. The discrete optimality system (3.33) simplifies to

$N_\ell u_\ell = -\frac{1}{\delta}\, N_\ell A_\ell^{-1} N_\ell \left[ A_\ell^{-1} N_\ell (u_\ell + f_\ell) - y_{d,\ell} \right]$ (3.35)

Definition 3.12 (Discrete Integral Equation). The discrete optimality system (3.28-3.31) is equivalent to the discrete integral equation

$(I_\ell - K_\ell)\, u_\ell = q_\ell$ (3.36)

with

$K_\ell = -\frac{1}{\delta}\, A_\ell^{-1} N_\ell \left( A_\ell^{-1} N_\ell \right), \qquad q_\ell = -\frac{1}{\delta}\, A_\ell^{-1} N_\ell \left[ A_\ell^{-1} N_\ell f_\ell - y_{d,\ell} \right]$

$K_\ell$ is the discrete counterpart of the operator $K$. Knowledge of the entries of the matrix $K_\ell$ is not necessary except at the coarsest level ($\ell = 0$). A similar approach can be used to derive the analogous discrete integral equation for the boundary control problem. In Hackbusch ([6]), the most important requirement on $K_\ell$ is that

$\|K_\ell^m\|_{B_0^\ell \to B_1^\ell} \le C, \qquad \ell \in \mathbb{N}_0.$

This will be applied in chapter 4, on the convergence analysis of the multigrid method. The solution of the discrete integral equation is clearly the optimal control. To solve the discrete system the multigrid method (chapter 4) is applied.
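The equivalence in Definition 3.12 can be checked numerically: solving $(I_\ell - K_\ell)u_\ell = q_\ell$ must reproduce the control component of the block system (3.33). This is an illustrative Python sketch on random symmetric positive definite matrices (not the thesis's Matlab code); here `A` plays the role of the full system matrix of $a(\cdot,\cdot)$ (stiffness plus mass), and the signs of $K_\ell, q_\ell$ are as reconstructed in (3.36).

```python
import numpy as np

rng = np.random.default_rng(0)
n, delta = 4, 1e-2
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)                 # SPD stand-in for the system matrix
N = np.diag(rng.uniform(1.0, 2.0, n))       # SPD stand-in for the mass matrix
f, yd = rng.standard_normal(n), rng.standard_normal(n)

# integral-equation form (3.36)
AinvN = np.linalg.solve(A, N)
K = -(1.0 / delta) * AinvN @ AinvN
q = -(1.0 / delta) * AinvN @ (AinvN @ f - yd)
u_int = np.linalg.solve(np.eye(n) - K, q)

# block optimality system (3.33), written with A already containing the mass part
O = np.zeros((n, n))
B = np.block([[A, O, -N], [N, A, O], [O, -N, delta * N]])
rhs = np.concatenate([N @ f, N @ yd, np.zeros(n)])
u_blk = np.linalg.solve(B, rhs)[2 * n:]     # control component
```

Since $K$ is symmetric negative definite here, $I - K$ has eigenvalues greater than 1, so the integral equation is uniquely solvable and both routes give the same control.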

Chapter 4

Solution of the Discretized Optimal Control System

In this chapter we develop a multigrid algorithm for solving the discretized optimal control system. The main goal is to find the pair $(y_\ell, u_\ell)$ of discrete state and discrete control variables at the finest level. To calculate this, a multigrid algorithm is developed for the discrete integral equation that characterizes the discrete optimal control. As already highlighted in section 3.3.1, the discrete optimality constraints are reduced to one system of elliptic equations, so the multigrid algorithm only requires the numerical solution of a sequence of single elliptic equations: the optimal control can be obtained by solving one system of two elliptic equations. In this work we illustrate the numerical treatment of a discrete optimal control problem by applying the multigrid method (MGM).

4.1 Multigrid Algorithm

In this section the multigrid algorithm is developed for the distributed control problem with Neumann boundary condition (2.9). In this case we take $m = 1$ for the power of the integral operator $K$. The multigrid algorithm adopted in this work is based on the work of Hackbusch ([6]). The main ingredients of the multigrid iteration are smoothing and coarse grid correction; the coarse grid correction is carried out by a restriction, a coarse grid solve and an interpolation. Let $\ell \in \mathbb{N}_0$ be the refinement level. For $\ell = 0$ the equation $u_\ell = K_\ell u_\ell + q_\ell$, where $u_\ell$ is the desired control, is solved exactly by LU-decomposition of $I_0 - K_0$. At this level the entries of the matrix $K_0$ are known by evaluation of $K_0 v_0 + q_0$ for $q_0 = 0$ and all unit vectors $v_0$.

Smoothing: for $\ell > 0$, that is if the level of refinement is not the coarsest, first the initial control $u_\ell^\nu$ at level $\ell$ (the finest level) is smoothed by

$u_\ell^{\nu + 1/2} = K_\ell u_\ell^\nu + q_\ell$ (4.1)

where $q_\ell$ and $K_\ell$ are defined in (3.35, 3.36). The term $q_\ell$ involves $f, g, y_{d,\ell}$; if we take $f, g = 0$, then $q_\ell$ involves the desired state $y_{d,\ell}$ only. So we need to evaluate $K_\ell u_\ell$ separately. The explicit illustration follows.

1. We need to evaluate $u_\ell^{\nu_1} = K_\ell u_\ell^{\nu_0}$:

choose the initial value $u_\ell^{\nu_0}$;

with this initial control, solve for the state variable using the equation $A_\ell y_\ell = N_\ell u_\ell^{\nu_0}$, where $A_\ell$ and $N_\ell$ are the stiffness and mass matrices at level $\ell$;

with $y_\ell$, solve for the adjoint variable using the equation $A_\ell p_\ell = -N_\ell y_\ell$ (the homogeneous adjoint, consistent with $K_\ell$ in (3.36)) with the same matrices as above;

with $p_\ell$, find the new control using the relation $u_\ell^{\nu_1} = \delta^{-1} p_\ell$.

2. Finally, $u_\ell^{\nu + 1/2} = u_\ell^{\nu_1} + q_\ell$.

3. Repeat the smoothing process (1) with $u_\ell^{\nu+1/2}$ as input, to get $u_\ell^{\nu_2} = K_\ell u_\ell^{\nu + 1/2} + q_\ell$.

4. The result of (2) is a smoother control than the initial one.

Calculating the defect: After the smoothing process we calculate the corresponding defect

$d_\ell = (I_\ell - K_\ell)\, u_\ell^{\nu + 1/2} - q_\ell$ (4.2)

$= u_\ell^{\nu + 1/2} - K_\ell u_\ell^{\nu + 1/2} - q_\ell$ (4.3)

$= K_\ell \left[ u_\ell^{\nu} - u_\ell^{\nu + 1/2} \right]$ (4.4)

$= u_\ell^{\nu + 1/2} - u_\ell^{\nu_2}$ (4.5)

From the two smoothing processes above the defect can thus be expressed as $d_\ell = u_\ell^{\nu + 1/2} - u_\ell^{\nu_2}$.

Restrict the defect: Restriction is an inter-grid transfer process that transfers the defect from the finer grid to the coarser grid. By a suitable restriction $r_{\ell,\ell-1} : B_0^\ell \to B_0^{\ell-1}$ to the coarser grid, we obtain

$d_{\ell-1} = r_{\ell,\ell-1}\, d_\ell \in B_0^{\ell-1}$ (4.6)

Approximate on the coarser grid, that is on level $\ell - 1$,

$w_{\ell-1} = (I_{\ell-1} - K_{\ell-1})^{-1} d_{\ell-1}$ (4.7)

by two iterations of the multigrid method on level $\ell - 1$.

Prolongate the approximation: Prolongation/interpolation is the inter-grid transfer process that transfers the smooth error from the coarser grid to the finer grid; it is a linear mapping. By a suitable prolongation $p_{\ell-1,\ell} : B_0^{\ell-1} \to B_0^\ell$ to the finer grid and the coarse grid correction, we obtain

$u_\ell^{\nu+1} = u_\ell^{\nu + 1/2} - p_{\ell-1,\ell}\, w_{\ell-1}$ (4.8)

The above description gives the two-grid algorithm; applying the two-grid algorithm recursively results in the multigrid algorithm. We define the algorithm MGM$_\ell$ at level $\ell > 0$ by means of the algorithm MGM$_{\ell-1}$ corresponding to the coarser grid.

Multigrid Algorithm

We define the multigrid algorithm at level $\ell$ as MGM$_\ell(u_\ell^{new}, u_\ell^{old}, q_\ell)$, where

$u_\ell^{new}$ is the output of one step of the multigrid algorithm at level $\ell$;

$u_\ell^{old}$ is the input at level $\ell$;

$q_\ell$ is defined implicitly by $f, g, y_{d,\ell}$ at level $\ell$;

$u_\ell^\nu := u_\ell^{old}$, $\quad u_\ell^{\nu+1} =: u_\ell^{new}$.

Algorithm MGM$_\ell(u_\ell^{new}, u_\ell^{old}, q_\ell)$:

If $\ell = 0$ (coarsest grid): $u_0 = (I_0 - K_0)^{-1} q_0 = $ MGM$_0(u_0, q_0)$.

Else ($\ell > 0$), define MGM$_\ell(u_\ell^{new}, u_\ell^{old}, q_\ell)$ by:

1. Smoothing: $\tilde u_\ell = K_\ell u_\ell^{old} + q_\ell$

2. Defect computation and restriction: $d_\ell = \tilde u_\ell - K_\ell \tilde u_\ell - q_\ell$, $\quad d_{\ell-1} = r_{\ell,\ell-1}\, d_\ell$

3. Approximate the solution of $v_{\ell-1} = K_{\ell-1} v_{\ell-1} + d_{\ell-1}$ by

4. applying two iterations of MGM$_{\ell-1}$ at the recursive call: set $v_{\ell-1}^{(0)} = 0$, compute

$v_{\ell-1}^{(1)} = \text{MGM}_{\ell-1}(v_{\ell-1}^{(1)}, v_{\ell-1}^{(0)}, d_{\ell-1})$

$v_{\ell-1}^{(2)} = \text{MGM}_{\ell-1}(v_{\ell-1}^{(2)}, v_{\ell-1}^{(1)}, d_{\ell-1})$

If $\ell = 1$, one call of MGM$_0(v_0^{(2)}, d_0)$ is sufficient.

5. Correction step: define the new iterate by

$u_\ell^{new} := \tilde u_\ell - p_{\ell-1,\ell}\, v_{\ell-1}^{(2)}$

Remark 4.1. On the above multigrid algorithm:

The entries of the matrix $K_\ell$ may be unknown.

We only need the performance of the mapping $v \mapsto K_\ell v$, which involves $v^{old} \mapsto y_\ell(v) \mapsto p_\ell(v) \mapsto \delta^{-1} p_\ell(v) = v^{new}$.

The whole smoothing process involves the mapping $v \mapsto K_\ell v + q_\ell$, where $q_\ell$ involves $f, g, y_{d,\ell}$.

Restriction: $r_{\ell,\ell-1}$ is a restriction operator that takes the fine mesh function $d_\ell$ to the coarse grid function $d_{\ell-1}$. In this work, restriction is chosen by simply taking the fine-grid values at the coarse-grid points (injection), i.e. at the parent nodes of the triangle elements.

Prolongation: $p_{\ell-1,\ell}$ is the prolongation operator taking the coarse grid function $v_{\ell-1}^{(2)}$ to a fine mesh function. It involves the parent (coarse) grid points and their intermediate values obtained by averaging.

In this work we use a multigrid W-cycle which starts at the finest level. The fine level solution is transferred to the next coarser level (restriction). After some relaxation (smoothing) cycles on the coarse level, the solution is restricted to the next coarser level, until the coarsest level is reached. The solution obtained at the coarsest level is then interpolated back to the finer level (prolongation). After some relaxation iterations, the solution from this finer level is interpolated to the next finer level, and the solution is prolongated until the finest level is reached. The whole process is repeated until satisfactory convergence is reached.
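The recursive structure above can be sketched in a few lines. This is an illustrative Python sketch (not the thesis's Matlab code): the operators at each level are passed in as callables, the coarsest-level solve is supplied directly, and the toy test operators below are hypothetical stand-ins for $K_\ell$, $r_{\ell,\ell-1}$, $p_{\ell-1,\ell}$.

```python
import numpy as np

def mgm(level, u, q, K, R, P, coarse_solve):
    """One recursive multigrid step for u = K u + q (Hackbusch's scheme).

    K[l]: apply the level-l operator; R[l]: restrict level l -> l-1;
    P[l]: prolongate level l-1 -> l; coarse_solve: exact solve of
    (I_0 - K_0) u_0 = q_0.  Returns the new iterate u^{nu+1}.
    """
    if level == 0:                                 # coarsest grid: exact solve
        return coarse_solve(q)
    u_tilde = K[level](u) + q                      # smoothing (4.1)
    d = u_tilde - (K[level](u_tilde) + q)          # defect d = (I-K)u~ - q, (4.2)
    d_c = R[level](d)                              # restriction (4.6)
    v = np.zeros_like(d_c)
    for _ in range(2 if level > 1 else 1):         # two coarse iterations
        v = mgm(level - 1, v, d_c, K, R, P, coarse_solve)
    return u_tilde - P[level](v)                   # coarse-grid correction (4.8)
```

On a two-level toy problem with $K_\ell = 0.2\,I$, pairwise-averaging restriction and duplicating prolongation, one step starting from $u = 0$ with a constant $q$ already reproduces the exact solution $u = q/0.8$, since the defect of a constant vector is represented exactly on the coarse grid.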

4.2 Convergence of the Multigrid Method

In this section we look at the convergence analysis of the multigrid method for finding the optimal control, with the aim of establishing convergence rates. From chapter 3 we recall:

the integral characterisation of the continuous optimality system, $u = Ku + q$, and its corresponding discrete system, $u_\ell = K_\ell u_\ell + q_\ell$, which represent the desired control;

$K$ is a linear operator such that $K$ and its powers map into a finer topology: with $B_1 \subset B_0$,

$K^m : B_0 \to B_1, \qquad \|K_\ell^m\|_{B_0^\ell \to B_1^\ell} \le C_1, \qquad \ell \in \mathbb{N}_0$ (4.9)

Let $B_1^\ell \subset B_0^\ell$ be the discrete vector spaces of the discrete control $v_\ell$.

Assumption 4.2. Let $\ell$ and $\ell - 1$ be two levels, with $\ell$ the finer and $\ell - 1$ the coarser.

The discrete spaces $B_0^\ell$ and $B_1^\ell$, with norms $\|\cdot\|_{0,\ell}$ and $\|\cdot\|_{1,\ell}$, and the continuous spaces $B_0$ and $B_1$ are connected by the continuous restriction and prolongation operators $R^\ell : B_i \to B_i^\ell$, $i = 0, 1$, and $P^\ell : B_0^\ell \to B_0$, respectively. The corresponding discrete restriction and prolongation operators are $r_{\ell,\ell-1} : B_i^\ell \to B_i^{\ell-1}$ and $p_{\ell-1,\ell} : B_i^{\ell-1} \to B_i^\ell$ for $i = 0, 1$.

The continuous and discrete integral equation operators are invertible and bounded,

$\|(I - K)^{-1}\|_{B_0 \to B_0} \le C_2, \qquad \|(I_\ell - K_\ell)^{-1}\|_{B_0^\ell \to B_0^\ell} \le C_2.$ (4.10)

This is the stability condition. The continuous and discrete operators are bounded,

$\|K\|_{B_0 \to B_0} \le C_3, \qquad \|K_\ell\|_{B_0^\ell \to B_0^\ell} \le C_3,$ (4.11)

and analogously on $B_1$,

$\|K\|_{B_1 \to B_1} \le C_3, \qquad \|K_\ell\|_{B_1^\ell \to B_1^\ell} \le C_3.$ (4.12)

Analogous to (4.9),

$\|K_\ell^m\|_{B_0^\ell \to B_1^\ell} \le C_1, \qquad \ell \in \mathbb{N}_0.$ (4.13)

The discrete restriction and prolongation operators are bounded,

$\|r_{\ell,\ell-1}\|_{B_0^\ell \to B_0^{\ell-1}} \le C_4, \qquad \|p_{\ell-1,\ell}\|_{B_0^{\ell-1} \to B_0^\ell} \le C_4,$ (4.14)

and also

$\|r_{\ell,\ell-1}\|_{B_1^\ell \to B_1^{\ell-1}} \le C_4, \qquad \|R^\ell\|_{B_1 \to B_1^\ell} \le C_4.$ (4.15)

There exists $P : B_1^\ell \to B_1$ with

$R^\ell P = I, \qquad \|P\|_{B_1^\ell \to B_1} \le C_4.$ (4.16)

The condition

$\|I - p_{\ell-1,\ell}\, r_{\ell,\ell-1}\|_{B_1^\ell \to B_0^\ell} \le C_5\, (n_{\ell-1})^{-\alpha},$ (4.17)

where $n_{\ell-1} \in \mathbb{N}$ is the dimension of the coarser grid and $\alpha > 0$, means that smooth functions can be approximated.

Remark 4.3. From (4.9), (4.10) and (4.12) it can be concluded that the continuous integral equation operator is also invertible and bounded on $B_1$:

$\|(I - K)^{-1}\|_{B_1 \to B_1} \le C_2' := C_1 C_2 + 1 + C_3 + \dots + C_3^{m-1}$ (4.18)

PROOF. Let $q \in B_1$ and $u := (I - K)^{-1} q$. By repeated application of $K$, starting from $u = Ku + q$, we have

$u = K^2 u + Kq + q = \dots = K^m u + K^{m-1} q + \dots + Kq + q$

From (4.9) and (4.10) we get

$\|K^m u\|_1 \le C_1 \|u\|_0 = C_1 \|(I - K)^{-1} q\|_0 \le C_1 C_2 \|q\|_0 \le C_1 C_2 \|q\|_1,$

and for the terms $K^\upsilon q$ with $0 \le \upsilon \le m-1$, (4.12) gives

$\|K^\upsilon q\|_1 \le C_3^\upsilon \|q\|_1.$

Hence the result.

The consistency condition is formulated as

$\|[(I_\ell - K_\ell) R^\ell - R^\ell (I - K)]\, u\|_{0,\ell} = \|(K_\ell R^\ell - R^\ell K)\, u\|_{0,\ell} = \|(I_\ell - K_\ell) R^\ell u - R^\ell q\|_{0,\ell} \le C_6\, n_\ell^{-\beta}\, \|q\|_1$ (4.19)

where $n_\ell \in \mathbb{N}$ is the dimension of the finer grid and $\beta > 0$. Stability (4.10) and consistency (4.19) imply convergence:

$\|u_\ell - R^\ell u\|_{0,\ell} \le C_7\, n_\ell^{-\beta}\, \|q\|_1$ (4.20)

ANALYSIS OF THE MULTIGRID ALGORITHM

A single multigrid step consists of a combination of a smoothing step and a coarse grid correction step. The following relations result from the application of each step.

The exact solution is given by the relation

$u_\ell = K_\ell u_\ell + q_\ell$ (4.21)

Smoothing: is done by applying the relation

$\tilde u_\ell := K_\ell u_\ell^\nu + q_\ell$ (4.22)

Applying the smoothing $m$ times ($m \ge 1$), we have the error

$\tilde u_\ell - u_\ell = K_\ell^m (u_\ell^\nu - u_\ell)$ (4.23)

$= K_\ell^m\, \bar u_\ell^\nu, \qquad \bar u_\ell^\nu := u_\ell^\nu - u_\ell$ (4.24)

Calculating the defect: from the exact solution and the defect relations we get

$d_\ell = (I_\ell - K_\ell)\, \tilde u_\ell - q_\ell$ (4.25)

$u_\ell = \tilde u_\ell - (I_\ell - K_\ell)^{-1} d_\ell$ (4.26)

From this relation we have

$d_\ell = (I_\ell - K_\ell)(\tilde u_\ell - u_\ell)$ (4.27)

Coarse grid correction: produces the relation for the new iterate

$u_\ell^{\nu+1} = \tilde u_\ell - p_{\ell-1,\ell}\, (I_{\ell-1} - K_{\ell-1})^{-1}\, r_{\ell,\ell-1}\, d_\ell$ (4.28)

$= \tilde u_\ell - p_{\ell-1,\ell}\, (I_{\ell-1} - K_{\ell-1})^{-1}\, r_{\ell,\ell-1}\, (I_\ell - K_\ell)(\tilde u_\ell - u_\ell)$ by (4.27) (4.29)

For the error of the new iterate, subtract the exact solution from both sides:

$u_\ell^{\nu+1} - u_\ell = \tilde u_\ell - u_\ell - p_{\ell-1,\ell}(I_{\ell-1} - K_{\ell-1})^{-1} r_{\ell,\ell-1}(I_\ell - K_\ell)(\tilde u_\ell - u_\ell)$

$= K_\ell^m \bar u_\ell^\nu - p_{\ell-1,\ell}(I_{\ell-1} - K_{\ell-1})^{-1} r_{\ell,\ell-1}(I_\ell - K_\ell)\, K_\ell^m \bar u_\ell^\nu$ by (4.23)

$= \left[ I - p_{\ell-1,\ell}(I_{\ell-1} - K_{\ell-1})^{-1} r_{\ell,\ell-1}(I_\ell - K_\ell) \right] K_\ell^m\, \bar u_\ell^\nu$

The above relation for the error can be expressed as

$u_\ell^{\nu+1} - u_\ell = M_\ell\, (u_\ell^\nu - u_\ell)$ (4.30)

where $M_\ell$ is the iteration matrix given by

$M_\ell = \left[ I - p_{\ell-1,\ell}(I_{\ell-1} - K_{\ell-1})^{-1} r_{\ell,\ell-1}(I_\ell - K_\ell) \right] K_\ell^m$ (4.31)

The idea is to establish convergence rates for our multigrid algorithm, which are bounded by the relation

$\frac{\|u_\ell^{\nu+1} - u_\ell\|_{0,\ell}}{\|u_\ell^\nu - u_\ell\|_{0,\ell}} \le \|M_\ell\|_{B_0^\ell \to B_0^\ell}$ (4.32)

The convergence of the iterative process depends on $\|M_\ell\|_{B_0^\ell \to B_0^\ell}$.

Theorem 4.4. Let $\ell \in \mathbb{N}$ and $0 < \sigma \le n_{\ell-1}/n_\ell < 1$. Then, under the conditions (4.9), (4.10), (4.11), (4.12), (4.13), (4.14), (4.15), (4.16), (4.17) and (4.20), it holds that

$\|M_\ell\|_{B_0^\ell \to B_0^\ell} \le C_{10}\, n_\ell^{-\alpha} + C_{11}\, n_\ell^{-\beta}.$

The method converges for sufficiently large $n_\ell$.

PROOF. Let $w \in B_0^\ell$ with norm $\|w\|_{0,\ell} = 1$, and let $v_\ell = K_\ell^m w$ (by smoothing). Applying (4.13) we get

$\|v_\ell\|_{1,\ell} \le C_1$ (4.33)

Since the right-hand side cancels out, set

defect: $d_\ell = v_\ell - K_\ell v_\ell$

restriction: $d_{\ell-1} = r_{\ell,\ell-1}\, d_\ell$

approximation: $v_{\ell-1} = (I_{\ell-1} - K_{\ell-1})^{-1} d_{\ell-1}$

The continuous solution $v \in B_1$ is given by the defect equation $v - Kv = d := P d_\ell$. The new iterate, applying the iteration matrix to the initial value, is given by

$M_\ell w = v_\ell - p_{\ell-1,\ell}\, v_{\ell-1}$ (4.34)

The right-hand side of (4.34) can be divided into three parts:

1. the error after smoothing, $v_\ell - R^\ell v$, where $R^\ell$ is the continuous restriction operator;

2. the prolongation of the error, $p_{\ell-1,\ell}(R^{\ell-1} v - v_{\ell-1})$;

3. the prolongation of the continuous value, $R^\ell v - p_{\ell-1,\ell} R^{\ell-1} v$.

Now we analyse items (1-3). For (1) we show that it is bounded:

$\|v_\ell - R^\ell v\|_{0,\ell} \le C_7\, n_\ell^{-\beta} \|d\|_1$ by (4.20)

$\le C_4 C_7\, n_\ell^{-\beta} \|d_\ell\|_{1,\ell}$ by (4.16)

$\le (1 + C_3) C_4 C_7\, n_\ell^{-\beta} \|v_\ell\|_{1,\ell}$ by (4.12)

$\le C_1 (1 + C_3) C_4 C_7\, n_\ell^{-\beta}$ by (4.33)

For (2) we follow the same process, using $0 < \sigma \le n_{\ell-1}/n_\ell < 1$:

$\|p_{\ell-1,\ell}(R^{\ell-1} v - v_{\ell-1})\|_{0,\ell} \le C_4 \|R^{\ell-1} v - v_{\ell-1}\|_{0,\ell-1}$ by (4.14)

$\le C_4^2 C_7\, (n_{\ell-1})^{-\beta} \|d\|_1$ as above

$\le C_4^3 C_7\, (n_{\ell-1})^{-\beta} \|d_\ell\|_{1,\ell}$ by (4.15)

$\le C_1 (1 + C_3) C_4^3 C_7\, \sigma^{-\beta}\, n_\ell^{-\beta}$ as in (1)

For the third part we use (4.17) and Remark 4.3, together with $r_{\ell,\ell-1} R^\ell = R^{\ell-1}$:

$\|R^\ell v - p_{\ell-1,\ell} R^{\ell-1} v\|_{0,\ell} = \|(I - p_{\ell-1,\ell}\, r_{\ell,\ell-1})\, R^\ell v\|_{0,\ell}$

$\le C_5\, (n_{\ell-1})^{-\alpha} \|R^\ell v\|_{1,\ell}$ by (4.17)

$\le C_4 C_5\, (n_{\ell-1})^{-\alpha} \|v\|_1$ by (4.15)

$\le C_2' C_4 C_5\, (n_{\ell-1})^{-\alpha} \|d\|_1$ by Remark 4.3

$\le C_2' C_4^2 C_5\, (n_{\ell-1})^{-\alpha} \|d_\ell\|_{1,\ell}$ by (4.16)

$\le C_1 C_2' (1 + C_3) C_4^2 C_5\, \sigma^{-\alpha}\, n_\ell^{-\alpha}$ as above

Collecting things together, the result follows with

$C_{10} = C_1 C_2' (1 + C_3) C_4^2 C_5\, \sigma^{-\alpha}, \qquad C_{11} = C_1 (1 + C_3) C_4 C_7 (1 + C_4^2)\, \sigma^{-\beta}.$

Finally, from the above proof we can deduce that, with $C = C_{10} + C_{11}$ and assuming $\beta \le \alpha$,

$\|M_\ell\|_{B_0^\ell \to B_0^\ell} \le C_{10}\, n_\ell^{-\alpha} + C_{11}\, n_\ell^{-\beta}$ (4.35)

$\le C\, n_\ell^{-\beta}$ (4.36)

Conclusion 4.5. The rate of convergence of the multigrid method at level $\ell \in \mathbb{N}_0$ is proportional to $h_\ell^\beta$ for some $\beta > 0$. This means that the estimate

$\|u_\ell^{\nu+1} - u_\ell\|_{B_0^\ell} \le C\, h_\ell^\beta\, \|u_\ell^\nu - u_\ell\|_{B_0^\ell}$ (4.37)

where $u_\ell$ is the discrete exact solution, holds for two consecutive iterates. Since $u_\ell$ is unknown, we use the continuous control $u$ to estimate the convergence rates of the multigrid algorithm; this is achieved by integrating over each triangle element (chapter 5, p. 44). Hackbusch ([5]) concluded that the rate of convergence is proportional to a factor $h_\ell^\beta$ with $\beta > 0$.
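In practice the exponent $\beta$ is estimated from errors measured on successive levels: if $e_\ell \approx C h_\ell^\beta$ with $h_{\ell+1} = h_\ell/2$, then $\beta \approx \log_2(e_\ell / e_{\ell+1})$. A small helper for this (hypothetical, not part of the thesis code) can be sketched as:

```python
import numpy as np

def observed_order(errors, ratio=2.0):
    """Estimate the empirical order beta from errors on successively refined
    grids with h_{l+1} = h_l / ratio, assuming e_l ~ C h_l^beta.

    Returns one estimate per pair of consecutive levels.
    """
    e = np.asarray(errors, dtype=float)
    return np.log(e[:-1] / e[1:]) / np.log(ratio)
```

For an exactly second-order sequence such as $1, 1/4, 1/16$, the helper returns $\beta = 2$ for each pair of levels.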

Chapter 5

Numerical Results

In this chapter we present the results for a distributed optimal control problem with Neumann boundary condition. We pay particular attention to the computational performance of the proposed multigrid scheme as a solver for distributed control problems. In chapter 2 we considered the distributed control problem (2.4, 2.5), transformed it into an optimality system (2.17) and finally obtained the characterization by an integral equation (3.34, 3.35) as applied by Hackbusch ([6]). We consider examples where $f = g = 0$, so the right-hand side $q_\ell$ depends only on the target state $y_d$. The numerical treatment is given to the integral equation which characterises the optimality system.

Implementations were performed on a Windows XP platform with a 1.6 GHz Intel(R) processor, using the Matlab 7 programming language. For all examples we approximate $\Omega = (0,1)^2$ by a triangular mesh and use the Matlab pdetool for mesh generation. The mesh data describes the triangulation and consists of a list of node coordinates p (array of coordinates), a geometry g, a list of triangles t (array of vertex indices into p), edge connections e, and a list of all edges describing the Neumann boundary (which makes no contribution in this work since the Neumann condition is zero). The mesh is stored for each refinement, and the assembled matrices are also stored for each refinement level.

The Mesh

In this work we use a structured mesh and regular refinements. The meshes are generated by the Matlab pdetool. Since the multigrid method requires a hierarchy of grids produced by successive refinements, we need to choose the coarse mesh (which should be as coarse as possible) and the finest mesh, which corresponds to the maximum level of refinement. The figures below show an example of the refinement levels (in the examples below the coarse level has 25 nodes).
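The size of this grid hierarchy follows directly from the regular refinement rule $h_\ell = 2^{-\ell} h_0$: a structured square mesh whose coarsest level has 4 intervals per side has $(4 \cdot 2^\ell + 1)^2$ nodes on level $\ell$. A small helper reproducing the node counts of the table below (illustrative, not part of the thesis code):

```python
def nodes(level, n0=4):
    """Number of grid points on level `level` of a structured square mesh.

    n0: intervals per side on the coarsest level (n0 = 4 gives 25 coarse
    nodes); each regular refinement halves the mesh size h.
    """
    per_side = n0 * 2**level + 1
    return per_side * per_side
```

Evaluating `nodes` for levels 0 to 5 gives 25, 81, 289, 1089, 4225 and 16641 grid points, matching the refinement table.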

Figure 5.1: Levels of refinement

The table below shows the refinement levels and the number of grid points for each level.

Refinement level ($\ell$) | mesh size ($h_\ell$) | nodes (number of grid points)
0 | 1/4   | 25
1 | 1/8   | 81
2 | 1/16  | 289
3 | 1/32  | 1089
4 | 1/64  | 4225
5 | 1/128 | 16641

Table 5.1: Refinement levels and number of nodes

Since we have the Neumann condition, the number of nodes is the same as the number of degrees of freedom. In this work we take the coarse level to have 25 nodes. The following cases are tackled numerically in this chapter:

the exact solutions for $u, p, y$ are known;

testing with a different target state;

testing with different initial controls $u^0$ at the finest level, namely the initial control $u^0 = 0$ and the initial control $u^0 = $ exact solution.

We have established from the literature that the accuracy of the approximation by piecewise linear functions is $O(h^2)$ in the $L^2$-norm. To compute the $L^2$-error, we evaluate both the numerical solution $u_\ell$ and the exact solution $u$ at the centroid $(x_s, y_s)$ of each triangle $T_i$. The $L^2$-error is then calculated according to

$\|u - u_\ell\|_{L^2} = \sqrt{\sum_i \operatorname{area}(T_i)\, \big( u(x_s, y_s) - u_\ell(x_s, y_s) \big)^2}.$

We have also established that the convergence rate of the multigrid algorithm is proportional to $h_\ell^\beta$, $\beta > 0$.

5.1 Test Example 1

In this example we consider a target state depending on the exact solutions of the state, control and adjoint. The exact solutions of the distributed optimality system (2.17) are

$y = \cos(\pi x_1)\cos(\pi x_2)$ (5.1)

$u = (2\pi^2 + 1)\cos(\pi x_1)\cos(\pi x_2)$ (5.2)

$p = \delta(2\pi^2 + 1)\cos(\pi x_1)\cos(\pi x_2)$ (5.3)

We get the corresponding desired state as

$y_d = \big(\delta(2\pi^2 + 1)^2 + 1\big)\cos(\pi x_1)\cos(\pi x_2)$

The graphs of the target state and of the exact solution for the control are given below.

Figure 5.2: Above: Target state. Below: Exact solution for the control at level 4

We consider two cases for the initial control:

example 1: we take the initial control $u^0 = 0$;

example 2: we take the initial control $u^0 = $ exact solution.

The goal is to find the optimal control so that the target state is achieved, by solving the integral equation (3.36) as outlined in chapter 4. In tables (5.2) and (5.3) below we present the numerical results of the multigrid method with $\ell = 4$ and $\ell = 5$ grid levels and the weighting parameter $\delta = 10^{-2}$. On the finest grid $\ell = 5$ it is important to note that in both cases the multigrid method converges very fast, in 3 iterations. This means that in successive iterations the distance between the iterates of the control becomes continuously smaller. The stopping criterion (tolerance) was chosen as $10^{-8}$, that is, $\|u_{k+1} - u_k\|_{L^2(\Omega)} \le 10^{-8}$. The results in the tables also confirm the well-known result that $\|u - u_k\|_{L^2(\Omega)} = O(h^2)$. This fast convergence is also reflected in the error.

iteration $k$ | control $\|u_k\|_{L^2(\Omega)}$ | tolerance $\|u_{k+1} - u_k\|_{L^2(\Omega)}$ | error $\|u - u_k\|_{L^2(\Omega)}$
1 | 10.36169892 | 10.36169892    | 0.008463995
2 | 10.36035543 | 3.446123e-6    | 0.009776728
3 | 10.36035560 | 8.1541817e-14  | 0.009767091

Table 5.2: $\ell = 4$, nodes = 4225, $\delta = 10^{-2}$, $h_\ell = 1/64$, tolerance = $10^{-8}$

iteration $k$ | control $\|u_k\|_{L^2(\Omega)}$ | tolerance $\|u_{k+1} - u_k\|_{L^2(\Omega)}$ | error $\|u - u_k\|_{L^2(\Omega)}$
1 | 10.36062264 | 10.36762264    | 0.002118022
2 | 10.36029109 | 2.1515235e-7   | 0.002118023
3 | 10.36029109 | 3.17562278e-16 | 0.002443111

Table 5.3: $\ell = 5$, nodes = 16641, $\delta = 10^{-2}$, $h_\ell = 1/128$, tolerance = $10^{-8}$

Table (5.4) gives a detailed consideration of the iteration errors for all levels. The table shows that the iteration errors are reduced by a factor of about 1/4 from each level to the next with the fineness of the grids. This means that further refinement reduces the error: as the step size becomes smaller, the iteration error approaches zero. All this confirms what the theory says.
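The centroid-rule error used in these tables can be sketched as follows. This is an illustrative Python sketch (the thesis implementation is in Matlab; the function name and test mesh are hypothetical): both the P1 solution and the exact control are evaluated at the centroid of each triangle, as in the error formula stated with Table 5.1.

```python
import numpy as np

def l2_error(p, t, uh, u_exact):
    """Centroid-rule approximation of || u - u_h ||_{L^2}.

    p: (num_nodes, 2) node coordinates; t: (num_triangles, 3) vertex indices;
    uh: nodal values of the discrete control; u_exact: callable u(x, y).
    """
    err2 = 0.0
    for tri in t:
        x, y = p[tri, 0], p[tri, 1]
        area = 0.5 * abs((x[1]-x[0])*(y[2]-y[0]) - (x[2]-x[0])*(y[1]-y[0]))
        xc, yc = x.mean(), y.mean()       # centroid (x_s, y_s)
        uh_c = uh[tri].mean()             # P1 value at the centroid
        err2 += area * (u_exact(xc, yc) - uh_c)**2
    return np.sqrt(err2)
```

For a function that is itself piecewise linear, such as $u = x_1 + x_2$, the centroid values of the interpolant and the exact function coincide, so the measured error is zero; against a constant discrepancy of 1 on the unit square the routine returns exactly 1.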

 level   iterations   ||u_k||_{L2(Ω)}   ||u - u_k||_{L2(Ω)}
   0         1        8.27853197        2.178158212
   1         5        9.79716843        0.602113822
   2         3        10.22288193       0.154778976
   3         3        10.33267581       0.038988659
   4         3        10.36035560       0.009767709
   5         3        10.36029109       0.002443111

Table 5.4: Convergence results, δ = 1e-2, tolerance = 1e-8

The behaviour of the error from the coarse level to the finest level is represented in the figure below, which gives a visual representation of the error behaviour in Table 5.4 and depicts rapid convergence.

Figure 5.3: Behaviour of the L2-error

The weighting factor of the control also has an impact on the results. The table below shows its effect on the performance of the method: a change in the weighting factor leads to a change in the optimal control and in the error. Since the error increases with the weighting factor, the weighting factor of the cost functional should be chosen reasonably small. If we take
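The level-to-level reduction by roughly a factor of 1/4 can be read off Table 5.4 directly; a small sketch computing the successive ratios (values copied from the table):

```python
# Iteration errors ||u - u_k||_{L2} per level, copied from Table 5.4
errors = [2.178158212, 0.602113822, 0.154778976,
          0.038988659, 0.009767709, 0.002443111]

# Ratio of the error on each level to the error on the previous level
ratios = [fine / coarse for coarse, fine in zip(errors, errors[1:])]
for level, r in enumerate(ratios, start=1):
    print(f"level {level}: error reduced by factor {r:.4f}")
# The ratios approach 1/4, matching the O(h^2) error under h -> h/2 refinement
```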

large values of δ, the error increases. For this work the value δ = 1e-2 produced optimal results. The table shows that for large values of the weighting factor the discretization error grows. It has also been noted in this work that at any grid level the method diverges for values δ ≤ 1e-3. The results in the table demonstrate the effect at the finest level.

 δ        k   ||u_k||_{L2(Ω)}   ||u - u_k||_{L2(Ω)}
 1e-2     3   10.36029109       0.00244311
 5e-2     3   10.36686262       0.00290681
 1e-1     3   10.36679819       0.00306427
 5e-1     2   10.36674483       0.00428668
 7.5e-1   2   10.36674001       0.00490374

Table 5.5: Changes in δ, l = 5, nodes = 16641, h = 1/128

A snapshot of the approximate optimal control at level 4 is given below.

Figure 5.4: Snapshot of the optimal control at l = 4

5.2 Example 2

In this section we consider the same distributed optimality system as in Example 1 and solve it with an initial control different from zero: we take

u_0 = (2π^2 + 1) cos(πx_1) cos(πx_2),

that is, the exact solution as the initial control. We observe the same convergence results, and the method converges fast, in 2 iterations. The figures of Example 1 are the same for this case. We present the results at levels 4 and 5. From the calculations we obtain the following results.

 k   ||u_k||_{L2(Ω)}   ||u_k - u_{k+1}||_{L2(Ω)}   ||u - u_k||_{L2(Ω)}
 1   10.36035509       1.83907065e-5               0.00976765
 2   10.36035560       1.35312409e-13              0.00976709

Table 5.6: l = 4, nodes = 4225, δ = 1e-2, h = 1/64, tolerance = 1e-8

 k   ||u_k||_{L2(Ω)}   ||u_k - u_{k+1}||_{L2(Ω)}   ||u - u_k||_{L2(Ω)}
 1   10.36029106       1.15190037e-6               0.002444314
 2   10.36029109       2.0581872e-15               0.002443111

Table 5.7: l = 5, nodes = 16641, δ = 1e-2, h = 1/128, tolerance = 1e-8

5.3 Example 3

In this section we consider the case where the exact solutions are not known. We choose a target state that does not depend on the exact solutions, namely y_d = x_1 + x_2, and use a zero initial control.
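The stopping rule ||u_{k+1} - u_k||_{L2(Ω)} ≤ 1e-8 used throughout these examples can be sketched as a simple fixed-point loop. Here `multigrid_step` is a hypothetical stand-in for one multigrid cycle applied to the integral equation, mocked by a plain contraction so that the sketch is runnable:

```python
import numpy as np

def multigrid_step(u, u_star, rho=0.01):
    # Hypothetical stand-in: one cycle contracts the error u - u_star by rho
    return u_star + rho * (u - u_star)

def solve(u0, u_star, tol=1e-8, max_iter=50):
    u = u0
    for k in range(1, max_iter + 1):
        u_new = multigrid_step(u, u_star)
        # discrete stand-in for the L2(Omega) norm of u_{k+1} - u_k
        if np.linalg.norm(u_new - u) <= tol:
            return u_new, k
        u = u_new
    return u, max_iter

u_star = np.ones(100)                 # hypothetical "exact" control
u, iters = solve(np.zeros(100), u_star)
print(f"converged in {iters} iterations")
```

A strongly contracting cycle stops after only a handful of iterations, mirroring the 2–3 iterations observed in the tables.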

Figure 5.5: Desired/Target state at l = 4

The behaviour of the control and of the stopping criterion in Tables 5.8, 5.9 and 5.10, from the coarse grid to the finest grid, reflects that the method converges to the optimal control.

 level   iterations   ||u_k||_{L2(Ω)}
   1         4        2.23636568
   2         3        2.24634503
   3         3        2.24879971
   4         3        2.24941154
   5         3        2.24946390

Table 5.8: Control, δ = 1e-2, tolerance = 1e-8

In this case we illustrate the results at levels 4 and 5; they are presented in Tables 5.9 and 5.10.

 k   ||u_k||_{L2(Ω)}   ||u_k - u_{k+1}||_{L2(Ω)}
 1   2.24956732        5.06055475
 2   2.249411145       2.680565e-7
 3   2.249411154       5.1360268e-15

Table 5.9: l = 4, nodes = 4225, δ = 1e-2, h = 1/64, tolerance = 1e-8

The results for the finest level:

 k   ||u_k||_{L2(Ω)}   ||u_k - u_{k+1}||_{L2(Ω)}
 1   2.24960274        5.0607125
 2   2.24995638        1.674958e-8
 3   2.24946390        2.0000481e-17

Table 5.10: l = 5, nodes = 16641, δ = 1e-2, h = 1/128, tolerance = 1e-8

The figure below represents the optimal control for this case. The result was achieved in 5 iterations of the multigrid method.

Figure 5.6: Above: Initial control. Below: Approximate solution for the control at level 4

5.4 Conclusion

In this work we considered the mathematical model of optimal control problems governed by elliptic partial differential equations. We highlighted that the model can be described by a distributed control or a boundary control with Neumann or Dirichlet boundary conditions. In this work, however, we gave the mathematical treatment of the distributed control model governed by an elliptic partial differential equation with zero Neumann boundary data. The same mathematical treatment can be applied to control problems with Dirichlet or non-zero Neumann data, and also to models where the control acts on the boundary (boundary control problems). We described the existence of a minimizer of the cost functional, which is the optimal control. For the distributed control we showed that the optimal control problem can be expressed as an integral equation, to which the multigrid method is applied to compute the optimal control. We established that the multigrid method reduces the error by a factor proportional to the refinement step h between consecutive multigrid iterations. In this work we used the W-cycle of the multigrid method: starting from an initial guess for the control, we smooth, calculate the defect, restrict, solve exactly on the coarse grid, then prolongate and apply the coarse-grid correction. For the numerical implementation we used finite elements with piecewise linear basis functions.
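The W-cycle steps summarised above (smooth, compute the defect, restrict, solve exactly on the coarsest grid, prolongate, coarse-grid correction, with two recursive calls per level) can be sketched on a 1-D model problem. This is an illustrative stand-in, not the 2-D finite element implementation used in this work; the smoother, transfer operators and model matrix are simple textbook choices:

```python
import numpy as np

def laplacian(n):
    # 1-D model matrix for -u'' on n interior points, mesh size h = 1/(n+1)
    h = 1.0 / (n + 1)
    A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    return A / h**2

def smooth(A, u, f, nu=2, omega=2.0 / 3.0):
    # nu sweeps of damped Jacobi
    d = np.diag(A)
    for _ in range(nu):
        u = u + omega * (f - A @ u) / d
    return u

def restrict(r):
    # full weighting: n = 2m+1 fine points -> m coarse points
    return 0.25 * r[0:-2:2] + 0.5 * r[1:-1:2] + 0.25 * r[2::2]

def prolongate(e, n_fine):
    # linear interpolation back to the fine grid
    ef = np.zeros(n_fine)
    ef[1:-1:2] = e
    ef[0:-2:2] += 0.5 * e
    ef[2::2] += 0.5 * e
    return ef

def w_cycle(u, f, level):
    A = laplacian(u.size)
    if level == 0:
        return np.linalg.solve(A, f)       # solve exactly on the coarsest grid
    u = smooth(A, u, f)                    # pre-smoothing
    r = restrict(f - A @ u)                # defect, restricted to the coarse grid
    e = np.zeros(r.size)
    for _ in range(2):                     # two recursive calls: the W-cycle
        e = w_cycle(e, r, level - 1)
    u = u + prolongate(e, u.size)          # coarse-grid correction
    return smooth(A, u, f)                 # post-smoothing

n, levels = 2**6 - 1, 5
f, u = np.ones(n), np.zeros(n)
for _ in range(10):
    u = w_cycle(u, f, levels)
res = np.linalg.norm(f - laplacian(n) @ u) / np.linalg.norm(f)
print(f"relative residual after 10 W-cycles: {res:.1e}")
```

The choice of two recursive coarse-grid calls per level is what distinguishes the W-cycle from the V-cycle (one call); the rest of the structure is the same.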