PDE Constrained Optimization: Selected Proofs
1 PDE Constrained Optimization: Selected Proofs. Jeff Snider, George Mason University, Department of Mathematical Sciences. April 2014.
2 Outline: Preliminaries; Theorems 3.9-3.13; Example 1; Example 2; Necessary Conditions; Necessary Conditions, Example 1; Necessary Conditions, Example 2; Numerical Methods.
3 Preliminary definitions. $X$ denotes a Banach space, and a function $y : [a,b] \to X$ is called vector-valued. We specifically consider $X := L^2(\Omega)$, where $\Omega \subset \mathbb{R}^N$ is defined on the next slide.
Definition. $C([a,b];X)$ is the space of vector-valued functions $y(t)$ continuous at every point $t \in [a,b]$.
Definition. $L^p(a,b;X)$, $1 \le p < \infty$, is the space of measurable vector-valued functions $y : [a,b] \to X$ such that $\|y\|_{L^p(a,b;X)} := \bigl(\int_a^b \|y(t)\|_X^p\,dt\bigr)^{1/p} < \infty$.
Definition. $L^\infty(a,b;X)$ is the space of measurable vector-valued functions $y : [a,b] \to X$ such that $\|y\|_{L^\infty(a,b;X)} := \operatorname{ess\,sup}_{t\in[a,b]} \|y(t)\|_X < \infty$.
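As a concrete illustration of the Bochner norm $\|y\|_{L^2(a,b;X)}$ with $X = L^2(0,1)$, the quadrature sketch below samples a function $y(x,t)$ on a grid, computes $\|y(t)\|_X$ by the trapezoid rule in $x$, and then integrates $\|y(t)\|_X^2$ in $t$. The test function $y(x,t) = \sin(\pi x)\,e^{-t}$ is an illustrative choice, not from the slides.

```python
import numpy as np

def trapezoid(vals, grid, axis=-1):
    """Trapezoid rule along `axis` for samples `vals` on the 1-D `grid`."""
    vals = np.moveaxis(vals, axis, -1)
    return np.sum(0.5 * (vals[..., 1:] + vals[..., :-1]) * np.diff(grid), axis=-1)

def bochner_l2_norm(y_grid, x, t):
    """||y||_{L^2(a,b;L^2)} for samples y_grid[i, j] = y(x[j], t[i])."""
    norms_sq = trapezoid(y_grid**2, x)   # ||y(t_i)||_X^2 for each time sample
    return np.sqrt(trapezoid(norms_sq, t))

x = np.linspace(0.0, 1.0, 401)
t = np.linspace(0.0, 1.0, 401)
y_grid = np.sin(np.pi * x)[None, :] * np.exp(-t)[:, None]
val = bochner_l2_norm(y_grid, x, t)
# Exact value: sqrt( (1/2) * (1 - e^{-2})/2 ) ~ 0.4649
```

The exact norm here factors as $\bigl(\int_0^1 \sin^2(\pi x)\,dx \int_0^1 e^{-2t}\,dt\bigr)^{1/2}$, giving a quick sanity check on the quadrature.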
4 Problem Statement. Problem (3.23):
$y_t - \Delta y + c_0\,y = f$ in $Q = \Omega \times (0,T)$
$\partial_\nu y + \alpha y = g$ on $\Sigma = \Gamma \times (0,T)$   (3.23)
$y(0) = y_0$ in $\Omega$,
where $f \in L^2(Q)$, $g \in L^2(\Sigma)$, and $y_0 \in L^2(\Omega)$.
Assumption 3.8. Let $\Omega \subset \mathbb{R}^N$ be a bounded Lipschitz domain with boundary $\Gamma$, and let $T > 0$ be a fixed final time. Moreover, assume that functions $c_0 \in L^\infty(Q)$ and $\alpha \in L^\infty(\Sigma)$, where $\alpha(x,t) \ge 0$ for almost every $(x,t) \in \Sigma$, are prescribed. Question: does this imply that $\Omega$ is connected?
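To make the state equation concrete, here is a minimal explicit finite-difference sketch of (3.23) in one space dimension with a Robin boundary condition on both ends. The data choices ($f$, $g$, $\alpha$, $c_0$, $y_0$) and the one-sided discretization of the normal derivative are illustrative assumptions, not from the slides.

```python
import numpy as np

# 1-D sketch of (3.23):  y_t - y_xx + c0*y = f  on (0,1) x (0,T),
# with Robin condition  dn(y) + alpha*y = g  at x = 0 and x = 1
# (dn = outward normal derivative), and initial data y(0) = y0.

def solve_heat_robin(y0, T, f, g, alpha, c0, nx=51, nt=2000):
    dx = 1.0 / (nx - 1)
    dt = T / nt
    assert dt <= 0.5 * dx**2, "explicit (forward Euler) stability condition"
    y = y0.copy()
    for _ in range(nt):
        y_new = y.copy()
        # interior points: forward Euler on y_t = y_xx - c0*y + f
        y_new[1:-1] = y[1:-1] + dt * (
            (y[2:] - 2 * y[1:-1] + y[:-2]) / dx**2 - c0 * y[1:-1] + f
        )
        # Robin boundaries via a one-sided difference:
        # (y_bdry - y_inner)/dx + alpha*y_bdry = g  =>  solve for y_bdry
        y_new[0] = (y_new[1] + dx * g) / (1 + alpha * dx)
        y_new[-1] = (y_new[-2] + dx * g) / (1 + alpha * dx)
        y = y_new
    return y

nx = 51
yT = solve_heat_robin(np.zeros(nx), T=0.1, f=1.0, g=0.0, alpha=1.0, c0=0.0, nx=nx)
```

With $f = 1$, $g = 0$, $\alpha = 1$ and zero initial data, the solution heats up monotonically toward the steady state of $-y'' = 1$ with these Robin conditions, whose maximum is $0.625$.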
5 Definitions.
Definition. $W_2^{1,0}(Q) := \{\, y \in L^2(Q) : \partial y/\partial x_i \in L^2(Q),\ i = 1,\dots,N \,\}$.
Definition. $W(0,T) := \{\, y \in L^2(0,T;V) : y' \in L^2(0,T;V') \,\}$.
Definition. $V = H^1(\Omega)$ and $H = L^2(\Omega)$, where we identify $H = H'$. We have shown that $V \subset H \subset V'$ is a chain of dense and continuous embeddings (a Gelfand triple), implying $H^1(\Omega) \subset L^2(\Omega) \subset H^1(\Omega)'$.
6 Theorem 3.9. Suppose Assumption 3.8 holds. Then the parabolic initial-boundary value problem (3.23) has a unique weak solution in $W_2^{1,0}(Q)$. Moreover, there is a constant $c_p > 0$, independent of $f$, $g$, and $y_0$, such that
$\max_{t\in[0,T]} \|y(\cdot,t)\|_{L^2(\Omega)} + \|y\|_{W_2^{1,0}(Q)} \le c_p \bigl( \|f\|_{L^2(Q)} + \|g\|_{L^2(\Sigma)} + \|y_0\|_{L^2(\Omega)} \bigr)$   (3.26)
for all $f \in L^2(Q)$, $g \in L^2(\Sigma)$, and $y_0 \in L^2(\Omega)$.
Theorem 3.10. Every $y \in W(0,T)$ coincides, possibly after modification on a null set, with an element of $C([0,T],H)$. In this sense we have the continuous embedding $W(0,T) \hookrightarrow C([0,T],H)$.
7 Theorem 3.11. For all $y, p \in W(0,T)$ the formula of integration by parts holds:
$\int_0^T \langle y'(t), p(t) \rangle_{V',V}\,dt = \bigl(y(T),p(T)\bigr)_H - \bigl(y(0),p(0)\bigr)_H - \int_0^T \langle p'(t), y(t) \rangle_{V',V}\,dt.$
Proof (3.11). Apply the chain rule:
$\frac{d}{dt}\bigl(y(t),p(t)\bigr)_H = \langle y'(t), p(t)\rangle_{V',V} + \langle p'(t), y(t)\rangle_{V',V}.$
(Much detail is missing from the above equality.) Now integrate both sides:
$\int_0^T \frac{d}{dt}\bigl(y(t),p(t)\bigr)_H\,dt = \int_0^T \langle y'(t),p(t)\rangle_{V',V}\,dt + \int_0^T \langle p'(t),y(t)\rangle_{V',V}\,dt$
$\bigl(y(T),p(T)\bigr)_H - \bigl(y(0),p(0)\bigr)_H = \int_0^T \langle y'(t),p(t)\rangle_{V',V}\,dt + \int_0^T \langle p'(t),y(t)\rangle_{V',V}\,dt.$
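A sketch of the detail omitted in the chain-rule step, assuming the standard density argument (this reconstruction is mine, not spelled out on the slides):

```latex
\text{For } y,p \in C^1([0,T];V) \text{ the product rule holds classically, and}
\quad (y'(t),p(t))_H = \langle y'(t),p(t)\rangle_{V',V}
\text{ since } V \hookrightarrow H = H' \hookrightarrow V'.
\text{ Integrating gives}
\\[4pt]
(y(T),p(T))_H - (y(0),p(0))_H
  = \int_0^T \langle y'(t),p(t)\rangle_{V',V}
           + \langle p'(t),y(t)\rangle_{V',V}\,dt .
\\[4pt]
\text{Because } C^1([0,T];V) \text{ is dense in } W(0,T),
\text{ and both sides are continuous with respect to the } W(0,T)
\text{ norm (using } W(0,T)\hookrightarrow C([0,T],H)
\text{ from Theorem 3.10), the identity extends to all } y,p \in W(0,T).
```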
8 Theorem 3.12. Let $y \in W_2^{1,0}(Q)$ be the weak solution to problem (3.23), which exists according to Theorem 3.9. Then $y$ belongs, possibly after a modification on a set of zero measure, to $W(0,T)$. The proof covers the next several slides. You should recall the inner product $\bigl(y,v\bigr)_{L^2(\Omega)} := \int_\Omega y\,v\,dx$.
9 Proof (3.12). It follows from the problem statement, when looking for a weak solution via (3.25), that for all $v \in W_2^{1,1}(Q)$ with $v(T) = 0$,
$-\iint_Q y\,v_t\,dx\,dt + \iint_Q \nabla y \cdot \nabla v\,dx\,dt + \iint_Q c_0\,y\,v\,dx\,dt + \iint_\Sigma \alpha\,y\,v\,ds\,dt = \int_\Omega y_0\,v(0)\,dx + \iint_Q f\,v\,dx\,dt + \iint_\Sigma g\,v\,ds\,dt. \quad (1)$
In particular we may insert any function of the form $v(x,t) := v(x)\varphi(t)$, where $\varphi \in C_0^\infty(0,T)$ and $v \in V = H^1(\Omega)$. Setting $H = L^2(\Omega)$ and $H^N = H \times \dots \times H$ ($N$ times), we find, first,
$-\iint_Q y\,v_t\,dx\,dt = -\int_0^T \int_\Omega y\,v_t\,dx\,dt = -\int_0^T \bigl(y, v_t\bigr)_{L^2(\Omega)}\,dt = -\int_0^T \bigl(y, v\,\varphi'(t)\bigr)_{L^2(\Omega)}\,dt = -\int_0^T \bigl(y(t)\varphi'(t), v\bigr)_{L^2(\Omega)}\,dt.$
Applying this technique to each term in (1),
10 Proof (3.12) contd.
$-\int_0^T \bigl(y(t)\varphi'(t), v\bigr)_H\,dt = -\int_0^T \bigl(\nabla y(t), \nabla v\bigr)_{H^N}\varphi(t)\,dt - \int_0^T \bigl(c_0(t)\,y(t), v\bigr)_H\varphi(t)\,dt - \int_0^T \bigl(\alpha(t)\,y(t), v\bigr)_{L^2(\Gamma)}\varphi(t)\,dt + \int_0^T \bigl(f(t), v\bigr)_H\varphi(t)\,dt + \int_0^T \bigl(g(t), v\bigr)_{L^2(\Gamma)}\varphi(t)\,dt.$
Now $y \in L^2(Q)$, by the definition of $W_2^{1,0}(Q)$. Hence, by Fubini's theorem, $y(\cdot,t) \in L^2(\Omega)$ for almost every $t \in (0,T)$. Moreover $D_i y \in L^2(Q)$ for $i = 1,\dots,N$, and thus $\nabla y(\cdot,t) \in \bigl(L^2(\Omega)\bigr)^N = H^N$ for a.e. $t \in (0,T)$. Finally, $y(\cdot,t) \in H^1(\Omega)$, and thus $y(\cdot,t) \in L^2(\Gamma)$ for a.e. $t \in (0,T)$. On the set of measure zero in $[0,T]$ where one of the above statements possibly does not hold, we put $y(t) = 0$, which does not change the vector-valued function $y$ in the sense of $L^2$ spaces. Hence we see that, for any fixed $t$, the expressions in the integrals on the right-hand side define linear functionals $F_i(t) : H^1(\Omega) \to \mathbb{R}$:
$F_1(t): v \mapsto \bigl(\nabla y(t), \nabla v\bigr)_{H^N}$
$F_2(t): v \mapsto \bigl(c_0(t)\,y(t), v\bigr)_H$
$F_3(t): v \mapsto \bigl(\alpha(t)\,y(t), v\bigr)_{L^2(\Gamma)}$
$F_4(t): v \mapsto \bigl(f(t), v\bigr)_H$
$F_5(t): v \mapsto \bigl(g(t), v\bigr)_{L^2(\Gamma)}$
11 Proof (3.12) contd. We claim that the functionals $F_i(t)$ are bounded, and thus continuous, on $V$ for every $t$. First we have, for $F_1(t): v \mapsto \bigl(\nabla y(t),\nabla v\bigr)_{H^N} = \int_\Omega \nabla y(t)\cdot\nabla v\,dx$,
$|F_1(t)v| = \Bigl|\int_\Omega \nabla y(t)\cdot\nabla v\,dx\Bigr| \le \int_\Omega |\nabla y(t)|\,|\nabla v|\,dx = \bigl\|\,|\nabla y(t)|\,|\nabla v|\,\bigr\|_{L^1(\Omega)} \le \|\nabla y(t)\|_{L^2(\Omega)}\,\|\nabla v\|_{L^2(\Omega)} \quad (2)$
$\le \|y(t)\|_{H^1(\Omega)}\,\|v\|_{H^1(\Omega)}, \quad \forall v \in H^1(\Omega). \quad (3)$
Line (2) is exactly Hölder's inequality, and the final inequality holds by definition of the $H^1(\Omega)$ norm. Note that the function $t \mapsto \|y(t)\|_{H^1(\Omega)}$ belongs to $L^2(0,T)$, and $\|y(t)\|_{H^1(\Omega)}$ is by construction everywhere finite. Recall, for arbitrary $F \in V'$, the dual norm is defined by $\|F\|_{V'} := \sup_{v\ne 0} \frac{|Fv|}{\|v\|_V}$. Then (3) gives us $\frac{|F_1(t)v|}{\|v\|_{H^1(\Omega)}} \le \|y(t)\|_{H^1(\Omega)}$ for all $v \in H^1(\Omega)$. Thus $F_1(t)$ is bounded: $\|F_1(t)\|_{H^1(\Omega)'} \le \|y(t)\|_{H^1(\Omega)}$.
12 Proof (3.12) contd. Similarly, for $F_3(t): v \mapsto \bigl(\alpha(t)\,y(t), v\bigr)_{L^2(\Gamma)} = \int_\Gamma \alpha(t)\,y(t)\,v\,ds$,
$|F_3(t)v| \le \int_\Gamma |\alpha(t)|\,|y(t)|\,|v|\,ds \le \|\alpha\|_{L^\infty(\Sigma)}\,\|y(t)\|_{L^2(\Gamma)}\,\|v\|_{L^2(\Gamma)} \le c\,\|\alpha\|_{L^\infty(\Sigma)}\,\|y(t)\|_{H^1(\Omega)}\,\|v\|_{H^1(\Omega)}.$
Plenty of detail is omitted from the last step: repeated application of Cauchy-Schwarz and/or Hölder, followed by the trace theorem, which supplies the multiple $c$ when passing from norms on the boundary to norms on the domain. Hence $F_3(t)$ is bounded: $\|F_3(t)\|_{H^1(\Omega)'} \le c\,\|\alpha\|_{L^\infty(\Sigma)}\,\|y(t)\|_{H^1(\Omega)}$.
13 Proof (3.12) contd. These proofs are left out of the text as easy exercises for the reader:
$F_2(t): v \mapsto \bigl(c_0(t)\,y(t), v\bigr)_H = \int_\Omega c_0(t)\,y(t)\,v\,dx$, with $|F_2(t)v| \le \int_\Omega |c_0(t)|\,|y(t)|\,|v|\,dx \le \|c_0\|_{L^\infty(Q)}\,\|y(t)\|_{L^2(\Omega)}\,\|v\|_{L^2(\Omega)}$, so that $\|F_2(t)\|_{H^1(\Omega)'} \le \|c_0\|_{L^\infty(Q)}\,\|y(t)\|_{L^2(\Omega)}$.
$F_4(t): v \mapsto \bigl(f(t), v\bigr)_H = \int_\Omega f(t)\,v\,dx$, with $|F_4(t)v| \le \int_\Omega |f(t)|\,|v|\,dx \le \|f(t)\|_{L^2(\Omega)}\,\|v\|_{L^2(\Omega)}$, hence $\|F_4(t)\|_{H^1(\Omega)'} \le \|f(t)\|_{L^2(\Omega)}$.
$F_5(t): v \mapsto \bigl(g(t), v\bigr)_{L^2(\Gamma)} = \int_\Gamma g(t)\,v\,ds$, with $|F_5(t)v| \le \int_\Gamma |g(t)|\,|v|\,ds \le \|g(t)\|_{L^2(\Gamma)}\,\|v\|_{L^2(\Gamma)} \le c\,\|g(t)\|_{L^2(\Gamma)}\,\|v\|_{H^1(\Omega)}$, so that $\|F_5(t)\|_{H^1(\Omega)'} \le c\,\|g(t)\|_{L^2(\Gamma)}$.
Since each $F_i(t)$ is bounded, we know each $F_i(t) \in H^1(\Omega)'$ for every $t$.
14 Proof (3.12) contd. Since each $\|F_i(t)\|_{V'}$ is bounded by some constant multiple of $\|y(t)\|_{H^1(\Omega)}$, $\|f(t)\|_{L^2(\Omega)}$, or $\|g(t)\|_{L^2(\Gamma)}$, there is some constant $c > 0$ such that
$\sum_{i=1}^5 \|F_i(t)\|_{V'} \le c\,\bigl( \|y(t)\|_{H^1(\Omega)} + \|f(t)\|_{L^2(\Omega)} + \|g(t)\|_{L^2(\Gamma)} \bigr). \quad (3.32)$
Since the expression on the right-hand side belongs to $L^2(0,T)$, so does the expression on the left-hand side, showing that $F_i \in L^2(0,T;V')$ for each $i$. But then the functional $F$ on the right-hand side of the variational formulation, being just the (signed) sum of the $F_i$, also belongs to $L^2(0,T;V')$. Rewriting the variational formulation in terms of $F$, we obtain that for all $v \in V$ we have the chain of equalities
$\Bigl( -\int_0^T y(t)\varphi'(t)\,dt,\ v \Bigr)_{L^2(\Omega)} = -\int_0^T \bigl( y(t)\varphi'(t), v \bigr)_{L^2(\Omega)}\,dt = \int_0^T \bigl\langle F(t)\varphi(t), v \bigr\rangle_{V',V}\,dt = \Bigl\langle \int_0^T F(t)\varphi(t)\,dt,\ v \Bigr\rangle_{V',V},$
and therefore, as an equation in the space $V'$,
$-\int_0^T y(t)\varphi'(t)\,dt = \int_0^T F(t)\varphi(t)\,dt, \quad \forall \varphi \in C_0^\infty(0,T).$
15 Proof (3.12) contd.
$-\int_0^T y(t)\varphi'(t)\,dt = \int_0^T F(t)\varphi(t)\,dt, \quad \forall \varphi \in C_0^\infty(0,T),$
means that $y' = F$ in the sense of vector-valued distributions; hence $y' \in L^2(0,T;V')$. We conclude that, for any $y \in W_2^{1,0}(Q)$ which is a weak solution of (3.23), we have
$y \in W(0,T) := \{\, y \in L^2(0,T;V) : y' \in L^2(0,T;V') \,\}.$
16 Theorem 3.13. The weak solution $y$ to the problem (3.23) satisfies an estimate of the form
$\|y\|_{W(0,T)} \le c_w \bigl( \|f\|_{L^2(Q)} + \|g\|_{L^2(\Sigma)} + \|y_0\|_{L^2(\Omega)} \bigr),$
with some constant $c_w > 0$ that does not depend on $(f, g, y_0)$. In other words, the mapping $(f, g, y_0) \mapsto y$ defines a continuous linear operator from $L^2(Q) \times L^2(\Sigma) \times L^2(\Omega)$ into $W(0,T)$ and, in particular, into $C\bigl([0,T], L^2(\Omega)\bigr)$.
17 Proof (3.13). We estimate the square of the norm $\|y\|_{W(0,T)}$:
$\|y\|_{W(0,T)}^2 = \|y\|_{L^2(0,T;H^1(\Omega))}^2 + \|y'\|_{L^2(0,T;H^1(\Omega)')}^2.$
For the first summand, we obtain from Theorem 3.9 on page 14, with a generic constant $c > 0$, the estimate
$\|y\|_{L^2(0,T;H^1(\Omega))}^2 = \|y\|_{W_2^{1,0}(Q)}^2 \le c\,\bigl( \|f\|_{L^2(Q)}^2 + \|g\|_{L^2(\Sigma)}^2 + \|y_0\|_{L^2(\Omega)}^2 \bigr). \quad (3.33)$
The second summand requires only a little more effort. Indeed, with the functionals $F_i$ defined previously, we have
$\|y'\|_{L^2(0,T;H^1(\Omega)')} = \Bigl\| \sum_{i=1}^5 F_i \Bigr\|_{L^2(0,T;H^1(\Omega)')} \le \sum_{i=1}^5 \|F_i\|_{L^2(0,T;H^1(\Omega)')}.$
Using the above estimates, in particular (3.32), we find, with generic constants $c > 0$, that
18 Proof (3.13) contd.
$\|F_1\|_{L^2(0,T;V')}^2 = \int_0^T \|F_1(t)\|_{V'}^2\,dt \le \int_0^T \|y(t)\|_{H^1(\Omega)}^2\,dt = \|y\|_{W_2^{1,0}(Q)}^2$
$\|F_2\|_{L^2(0,T;V')}^2 = \int_0^T \|F_2(t)\|_{V'}^2\,dt \le \|c_0\|_{L^\infty(Q)}^2 \int_0^T \|y(t)\|_{L^2(\Omega)}^2\,dt \le c\,\|y\|_{W_2^{1,0}(Q)}^2$
$\|F_3\|_{L^2(0,T;V')}^2 = \int_0^T \|F_3(t)\|_{V'}^2\,dt \le c\,\|\alpha\|_{L^\infty(\Sigma)}^2 \int_0^T \|y(t)\|_{H^1(\Omega)}^2\,dt \le c\,\|y\|_{W_2^{1,0}(Q)}^2$
$\|F_4\|_{L^2(0,T;V')}^2 = \int_0^T \|F_4(t)\|_{V'}^2\,dt \le \int_0^T \|f(t)\|_{L^2(\Omega)}^2\,dt = \|f\|_{L^2(Q)}^2$
$\|F_5\|_{L^2(0,T;V')}^2 = \int_0^T \|F_5(t)\|_{V'}^2\,dt \le c \int_0^T \|g(t)\|_{L^2(\Gamma)}^2\,dt = c\,\|g\|_{L^2(\Sigma)}^2$
Since, from (3.33), we have
$\|y\|_{W_2^{1,0}(Q)}^2 \le c\,\bigl( \|f\|_{L^2(Q)} + \|g\|_{L^2(\Sigma)} + \|y_0\|_{L^2(\Omega)} \bigr)^2,$
it follows that
$\|y'\|_{L^2(0,T;H^1(\Omega)')}^2 \le \Bigl( \sum_{i=1}^5 \|F_i\|_{L^2(0,T;H^1(\Omega)')} \Bigr)^2 \le c\,\bigl( \|f\|_{L^2(Q)} + \|g\|_{L^2(\Sigma)} + \|y_0\|_{L^2(\Omega)} \bigr)^2,$
and the assertion is proved.
19 Example: Optimal nonstationary boundary temperature.
$\min J(y,u) := \frac{1}{2}\int_\Omega |y(x,T) - y_\Omega(x)|^2\,dx + \frac{\lambda}{2}\iint_\Sigma |u(x,t)|^2\,ds(x)\,dt,$
subject to
$y_t - \Delta y = 0$ in $Q$
$\partial_\nu y + \alpha y = \beta u$ on $\Sigma$
$y(0) = 0$ in $\Omega$
and $u_a(x,t) \le u(x,t) \le u_b(x,t)$ for a.e. $(x,t) \in \Sigma$.
This is a parabolic boundary control problem with final-value cost functional.
21 Assumptions. Assumption 3.14. Let $\Omega \subset \mathbb{R}^N$ be a domain with Lipschitz boundary $\Gamma$, and let $\lambda \ge 0$ be a fixed constant. Assume that we are given functions $y_\Omega \in L^2(\Omega)$, $y_Q \in L^2(Q)$, $y_\Sigma \in L^2(\Sigma)$, $\alpha, \beta \in L^\infty(E)$, and $u_a, u_b, v_a, v_b \in L^2(E)$ with $u_a(x,t) \le u_b(x,t)$ and $v_a(x,t) \le v_b(x,t)$ for almost every $(x,t) \in E$. Here, depending on the specific problem under study, $E = \Sigma$ or $E = Q$; that is, $E$ is the domain of the control, which depends on the problem.
22 Example contd. Theorems 3.12 and 3.13 guarantee a weak solution $y \in W(0,T)$ for any control $u \in L^2(\Sigma)$, which we represent by $y = G(\beta u)$. We only need information about the solution at time $T$, so we define an observation operator $E_T : y \mapsto y(T)$; this is a continuous linear mapping from $W(0,T)$ into $L^2(\Omega)$, since the embedding $W(0,T) \hookrightarrow C\bigl([0,T], L^2(\Omega)\bigr)$ has these properties. Hence for some constant $c > 0$ the bound
$\|y(T)\|_{L^2(\Omega)} \le \max_{t\in[0,T]} \|y(t)\|_{L^2(\Omega)} =: \|y\|_{C([0,T],L^2(\Omega))} \le c\,\|y\|_{W(0,T)}$
applies. The first inequality holds by nature of the max operator; the second is a consequence of Theorem 3.10. Hence we have $y(T) = E_T\,G(\beta u) =: Su$. In summary, $u \mapsto y \mapsto y(T)$ is a continuous linear mapping $S : u \mapsto y(T)$ from the control space $L^2(\Sigma)$ into the space $L^2(\Omega)$, which contains $y(T)$.
23 Example contd. Using $S$ we can rewrite the problem as a quadratic optimization problem in the Hilbert space $U = L^2(\Sigma)$:
$\min_{u \in U_{ad}} f(u) := \frac{1}{2}\|Su - y_\Omega\|_{L^2(\Omega)}^2 + \frac{\lambda}{2}\|u\|_{L^2(\Sigma)}^2, \quad (3.37)$
where $U_{ad} = \{\, u \in L^2(\Sigma) : u_a(x,t) \le u(x,t) \le u_b(x,t) \text{ for a.e. } (x,t) \in \Sigma \,\}$.
The functional $f$ is continuous in $u$ because $S$ and the norms are continuous. For any $h \in [0,1]$, the triangle inequality together with the convexity of $t \mapsto t^2$ gives us
$\|h z_1 + (1-h) z_2\|_{L^2}^2 \le \bigl( h\|z_1\|_{L^2} + (1-h)\|z_2\|_{L^2} \bigr)^2 \le h\|z_1\|_{L^2}^2 + (1-h)\|z_2\|_{L^2}^2.$
Setting $z_i = Su_i - y_\Omega$ in one case and $z_i = u_i$ in the other and adding, we see that $f$ is convex. The admissible set $U_{ad}$ is a nonempty, closed, bounded, and convex subset of the Hilbert space $L^2(\Sigma)$. Hence we can infer from Theorem 2.14 the following existence result.
Theorem 3.15. Suppose that Assumption 3.14 holds with $E := \Sigma$. Then the optimization problem (3.37), and hence the optimal nonstationary boundary temperature problem (3.1)-(3.3), has at least one optimal control $\bar u \in U_{ad}$. If $\lambda > 0$ then $\bar u$ is unique.
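Spelling out the "adding" step (my expansion of the slide's argument; it uses only the linearity of $S$ and the inequality above):

```latex
f\bigl(h u_1 + (1-h) u_2\bigr)
 = \tfrac12 \bigl\| h (S u_1 - y_\Omega) + (1-h)(S u_2 - y_\Omega) \bigr\|_{L^2(\Omega)}^2
   + \tfrac{\lambda}{2} \bigl\| h u_1 + (1-h) u_2 \bigr\|_{L^2(\Sigma)}^2 \\
 \le h \Bigl( \tfrac12 \|S u_1 - y_\Omega\|_{L^2(\Omega)}^2
              + \tfrac{\lambda}{2}\|u_1\|_{L^2(\Sigma)}^2 \Bigr)
   + (1-h) \Bigl( \tfrac12 \|S u_2 - y_\Omega\|_{L^2(\Omega)}^2
              + \tfrac{\lambda}{2}\|u_2\|_{L^2(\Sigma)}^2 \Bigr)
 = h f(u_1) + (1-h) f(u_2).
```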
24 Example: Optimal nonstationary heat source.
$\min J(y,u) := \frac{1}{2}\iint_\Sigma |y(x,t) - y_\Sigma(x,t)|^2\,ds(x)\,dt + \frac{\lambda}{2}\iint_Q |u(x,t)|^2\,dx\,dt, \quad (3.38)$
subject to
$y_t - \Delta y = \beta u$ in $Q$
$\partial_\nu y = 0$ on $\Sigma$   (3.39)
$y(0) = 0$ in $\Omega$
and $u_a(x,t) \le u(x,t) \le u_b(x,t)$ for a.e. $(x,t) \in Q$.   (3.40)
This can be seen as an inverse problem: an unknown heat source $u$ has to be recovered from measurements of the temperature at the boundary.
26 Example contd. Theorems 3.12 and 3.13 guarantee a weak solution $y \in W(0,T)$ for any $u \in L^2(Q)$. The control-to-state operator is $y = G(\beta u)$. The cost functional only requires the boundary values $y|_\Sigma$. Since the trace operator $y \mapsto y|_\Gamma$ maps $H^1(\Omega)$ continuously into $L^2(\Gamma)$, the mapping $E : y \mapsto y|_\Sigma$ defines a continuous linear operator from $L^2\bigl(0,T;H^1(\Omega)\bigr)$ into $L^2\bigl(0,T;L^2(\Gamma)\bigr)$. Consequently, the mapping $u \mapsto y \mapsto y|_\Sigma$, i.e., the operator $S : u \mapsto y|_\Sigma$, maps the control space $L^2(Q)$ continuously into the space $L^2\bigl(0,T;L^2(\Gamma)\bigr) = L^2\bigl((0,T)\times\Gamma\bigr) = L^2(\Sigma)$ to which $y|_\Sigma$ belongs. Thus $Su = E\,G(\beta u)$.
27 Example contd. Substituting $y = Su$ in the objective, we arrive at the quadratic minimization problem in the Hilbert space $U = L^2(Q)$:
$\min_{u \in U_{ad}} f(u) := \frac{1}{2}\|Su - y_\Sigma\|_{L^2(\Sigma)}^2 + \frac{\lambda}{2}\|u\|_{L^2(Q)}^2. \quad (3.42)$
Theorem 2.14 gives us the existence of an optimal control; hence:
Theorem 3.16. Suppose that Assumption 3.14 holds with $E = Q$. Then the optimal nonstationary heat source problem (3.38)-(3.40) has at least one optimal control $\bar u \in U_{ad}$. If $\lambda > 0$ then $\bar u$ is unique.
28 Necessary Optimality Conditions We will derive the necessary optimality conditions for the prior two examples. First a variational inequality will be derived that involves the state y. Then y will be eliminated by means of the adjoint state to deduce a variational inequality for the control u.
29 Adjoint Problem. Consider the parabolic problem
$-p_t - \Delta p + c_0\,p = a_Q$ in $Q$
$\partial_\nu p + \alpha p = a_\Sigma$ on $\Sigma$   (3.43)
$p(T) = a_\Omega$ in $\Omega$,
with bounded and measurable coefficient functions $c_0$ and $\alpha$, and prescribed functions $a_Q \in L^2(Q)$, $a_\Sigma \in L^2(\Sigma)$, and $a_\Omega \in L^2(\Omega)$. This is the adjoint problem; note that it runs backward in time from the terminal condition $p(T) = a_\Omega$.
30 Lemma 3.17. Using the bilinear form
$a[t; y, v] := \int_\Omega \bigl( \nabla y \cdot \nabla v + c_0(\cdot,t)\,y\,v \bigr)\,dx + \int_\Gamma \alpha(\cdot,t)\,y\,v\,ds,$
Lemma 3.17. The parabolic problem (3.43) has a unique weak solution $p \in W_2^{1,0}(Q)$, which is the solution to the variational problem
$\iint_Q p\,v_t\,dx\,dt + \int_0^T a[t; p, v]\,dt = \int_\Omega a_\Omega\,v(T)\,dx + \iint_Q a_Q\,v\,dx\,dt + \iint_\Sigma a_\Sigma\,v\,ds\,dt,$
$\forall v \in W_2^{1,1}(Q)$ with $v(\cdot,0) = 0$. We have $p \in W(0,T)$, and there is a constant $c_a > 0$, which does not depend on the given functions, such that
$\|p\|_{W(0,T)} \le c_a \bigl( \|a_Q\|_{L^2(Q)} + \|a_\Sigma\|_{L^2(\Sigma)} + \|a_\Omega\|_{L^2(\Omega)} \bigr).$
31 Proof (3.17). Let $\tau \in [0,T]$, and put $\tilde p(\tau) := p(T-\tau)$ and $\tilde v(\tau) := v(T-\tau)$. Then $\tilde p(0) = p(T)$, $\tilde p(T) = p(0)$, $\tilde v(0) = v(T)$, $\tilde v(T) = v(0)$, as well as $\tilde a_Q(\cdot,\tau) := a_Q(\cdot,T-\tau)$, etc., and also
$\iint_Q p\,v_t\,dx\,dt = -\iint_Q \tilde p\,\tilde v_\tau\,dx\,d\tau,$
and so on. Consequently, the asserted variational formulation is equivalent to the definition of the weak solution to the (forward) parabolic initial-boundary value problem
$\tilde p_\tau - \Delta \tilde p + c_0\,\tilde p = \tilde a_Q$
$\partial_\nu \tilde p + \alpha\,\tilde p = \tilde a_\Sigma$
$\tilde p(0) = a_\Omega.$
By Theorem 3.9 there is a unique weak solution $\tilde p$, which by Theorem 3.12 belongs to $W(0,T)$. The assertion now follows by reversing the time transformation.
32 Since $p \in W(0,T)$ we can, in analogy to (3.35), rewrite after integration by parts the variational formulation of the adjoint equation in the following shorter form:
$\int_0^T \bigl\{ -\langle p'(t), v \rangle_{V',V} + a[t; p, v] \bigr\}\,dt = \iint_Q a_Q\,v\,dx\,dt + \iint_\Sigma a_\Sigma\,v\,ds\,dt, \quad \forall v \in L^2(0,T;V), \qquad p(T) = a_\Omega. \quad (3.44)$
As in the elliptic case, for the derivation of the adjoint system we need the following somewhat technical result.
33 Theorem 3.18. Let $y \in W(0,T)$ be the solution to the parabolic problem
$y_t - \Delta y + c_0\,y = b_Q\,v$
$\partial_\nu y + \alpha y = b_\Sigma\,u$
$y(0) = b_\Omega\,w,$
with coefficient functions $c_0, b_Q \in L^\infty(Q)$, $\alpha, b_\Sigma \in L^\infty(\Sigma)$, and $b_\Omega \in L^\infty(\Omega)$, and controls $v \in L^2(Q)$, $u \in L^2(\Sigma)$, and $w \in L^2(\Omega)$. Moreover, let square integrable functions $a_\Omega$, $a_Q$, and $a_\Sigma$ be given, and let $p \in W(0,T)$ be the weak solution to the adjoint problem (3.43). Then we have
$\iint_Q a_Q\,y\,dx\,dt + \iint_\Sigma a_\Sigma\,y\,ds\,dt + \int_\Omega a_\Omega\,y(\cdot,T)\,dx = \iint_Q b_Q\,p\,v\,dx\,dt + \iint_\Sigma b_\Sigma\,p\,u\,ds\,dt + \int_\Omega b_\Omega\,p(\cdot,0)\,w\,dx.$
34 Proof (3.18). The variational formulation for $y$, using the test function $p$, gives
$\int_0^T \bigl\{ \langle y'(t), p \rangle_{V',V} + a[t; y, p] \bigr\}\,dt = \iint_Q b_Q\,p\,v\,dx\,dt + \iint_\Sigma b_\Sigma\,p\,u\,ds\,dt, \quad (3.45)$
with the initial condition $y(0) = b_\Omega w$. Integrating by parts (Theorem 3.11) and using $p(T) = a_\Omega$ and $y(0) = b_\Omega w$, we obtain
$\int_0^T \bigl\{ -\langle p'(t), y \rangle_{V',V} + a[t; y, p] \bigr\}\,dt = -\bigl(y(T), a_\Omega\bigr)_{L^2(\Omega)} + \bigl( b_\Omega w, p(0) \bigr)_{L^2(\Omega)} + \iint_Q b_Q\,p\,v\,dx\,dt + \iint_\Sigma b_\Sigma\,p\,u\,ds\,dt. \quad (3.47)$
Analogously, taking $y$ as test function in the equation (3.44) for $p$, we find that
$\int_0^T \bigl\{ -\langle p'(t), y \rangle_{V',V} + a[t; p, y] \bigr\}\,dt = \iint_Q a_Q\,y\,dx\,dt + \iint_\Sigma a_\Sigma\,y\,ds\,dt. \quad (3.46)$
Since $a[t;y,p] = a[t;p,y]$ by symmetry, the left-hand sides of (3.47) and (3.46) coincide, so the right-hand sides must also be equal, which gives us
$\iint_Q a_Q\,y\,dx\,dt + \iint_\Sigma a_\Sigma\,y\,ds\,dt + \bigl(y(T), a_\Omega\bigr)_{L^2(\Omega)} = \iint_Q b_Q\,p\,v\,dx\,dt + \iint_\Sigma b_\Sigma\,p\,u\,ds\,dt + \bigl( b_\Omega w, p(0) \bigr)_{L^2(\Omega)}.$
35 Necessary Conditions for Example 1. In this section we determine the necessary conditions for (3.1)-(3.3), the parabolic boundary control problem with final-value cost functional:
$\min J(y,u) := \frac{1}{2}\|y(T) - y_\Omega\|_{L^2(\Omega)}^2 + \frac{\lambda}{2}\|u\|_{L^2(\Sigma)}^2,$
subject to
$y_t - \Delta y = 0$
$\partial_\nu y + \alpha y = \beta u$
$y(0) = y_0$
and $u_a \le u \le u_b$.
Here the initial condition may be nonzero, which has been avoided previously. It is a straightforward exercise to show that the previous conclusions also apply to this case. (Next slide.)
36 Problem 3.2. Prove the existence of an optimal control for (3.1)-(3.3) given an inhomogeneous initial condition $y_0$. The problem is
$\min J(y,u) := \frac{1}{2}\int_\Omega |y(x,T) - y_\Omega(x)|^2\,dx + \frac{\lambda}{2}\iint_\Sigma |u(x,t)|^2\,ds(x)\,dt, \quad (3.1)$
subject to
$y_t - \Delta y = 0$ in $Q$
$\partial_\nu y + \alpha y = \beta u$ on $\Sigma$   (3.2)
$y(0) = y_0$ in $\Omega$
and $u_a(x,t) \le u(x,t) \le u_b(x,t)$ for a.e. $(x,t) \in \Sigma$.   (3.3)
37 The arguments are identical to those leading to Theorem 3.15, and the supporting theorems and equations all allow for any $y_0 \in L^2(\Omega)$. Hence the same arguments go through nearly unmodified: Theorem 3.9 gives us that for any control, including the initial condition, there is a unique weak solution $y \in W_2^{1,0}(Q)$, and by Theorem 3.12 it is in the equivalence class of a function in $W(0,T)$. Adapting Theorem 3.13, we represent the unique solution for the zero initial condition $y(0) = 0$ using the continuous linear operator $y = G(\beta u)$, and the unique solution for homogeneous boundary data $u = 0$ with nonzero initial condition $y(0) = y_0$ using a second continuous linear operator, written here as $y = G_0(y_0)$. The superposition principle for linear problems gives us a continuous linear mapping
$y = G(\beta u) + G_0(y_0).$
We need only the value at final time $T$, so the linear continuous observation operator $E_T$ gives us
$y(T) = E_T\bigl( G(\beta u) + G_0(y_0) \bigr),$
which can be written as the linear continuous mapping $S : (u, y_0) \mapsto y(T)$. Thus the objective is
$\min f(u) := \frac{1}{2}\|S(u, y_0) - y_\Omega\|_{L^2(\Omega)}^2 + \frac{\lambda}{2}\|u\|_{L^2(\Sigma)}^2,$
subject to $U_{ad} = \{\, u \in L^2(\Sigma) : u_a(x,t) \le u(x,t) \le u_b(x,t) \text{ for a.e. } (x,t) \in \Sigma \,\}$.
Since $f$ is convex and continuous, and $U_{ad}$ is nonempty, closed, bounded, and convex, Theorem 2.14 gives us the existence of an optimal control.
38 Adjoint problem for Example 1. Each term in the derivative of the objective with respect to $y$, here $\partial_y J(y,u) = y(T) - y_\Omega$, appears in the right-hand side of the adjoint problem. Since $y(T) - y_\Omega \in L^2(\Omega)$ for the optimal $y$, the adjoint system must be
$-p_t - \Delta p = 0$ in $Q$
$\partial_\nu p + \alpha p = 0$ on $\Sigma$   (3.48)
$p(T) = y(T) - y_\Omega$ in $\Omega$.
This conclusion comes from experience with previous problems. The formal Lagrange method reaches the same system, as detailed in Chapter 6. An incomplete attempt to use the formal Lagrange method follows, highlighting some of its difficulties.
39 Lagrangian. We will use $p^{(i)}$ as our Lagrange multipliers; what space they come from is as yet unknown, a difficulty only overcome in Chapter 6 for elliptic problems, with the statement that the theory for parabolic problems is quite similar. However, for this formal approach that detail is unnecessary. The Lagrangian for Example 1 is
$\mathcal{L}(y,u,p) := \frac{1}{2}\|y(T) - y_\Omega\|_{L^2(\Omega)}^2 + \frac{\lambda}{2}\|u\|_{L^2(\Sigma)}^2 - \underbrace{\iint_Q p^{(1)}\,(y_t - \Delta y)\,dx\,dt}_{\text{1st constraint}} - \underbrace{\iint_\Sigma p^{(2)}\,(\partial_\nu y + \alpha y - \beta u)\,ds\,dt}_{\text{2nd constraint}},$
where we have left out the box constraints for now; they are applied later in a variational inequality. Similarly, the initial condition is left out of this formulation, to be used later. Integrating by parts (which can also be inferred from (3.25)), we have
$\mathcal{L}(y,u,p) = \frac{1}{2}\|y(T) - y_\Omega\|_{L^2(\Omega)}^2 + \frac{\lambda}{2}\|u\|_{L^2(\Sigma)}^2 - \iint_Q \bigl[ \nabla y \cdot \nabla p^{(1)} - p^{(1)}_t\,y \bigr]\,dx\,dt - \iint_\Sigma \bigl[ \alpha y\,p^{(2)} - \beta u\,p^{(2)} \bigr]\,ds\,dt + \int_\Omega y_0\,p^{(1)}(0)\,dx - \int_\Omega y(T)\,p^{(1)}(T)\,dx.$
Note that there is a cancellation between two $\partial_\nu y$ terms, one multiplied by $p^{(1)}$ (arising from Green's formula) and the other by $p^{(2)}$. We proceed under the hope that our result will show, as it has previously, that the two $p$ terms are equal (more precisely, that $p^{(2)}$ is the restriction of $p^{(1)}$ to the boundary).
40 Lagrangian, contd. Now that we have the Lagrangian,
$\mathcal{L}(y,u,p) = \frac{1}{2}\|y(T) - y_\Omega\|_{L^2(\Omega)}^2 + \frac{\lambda}{2}\|u\|_{L^2(\Sigma)}^2 - \iint_Q \bigl[ \nabla y \cdot \nabla p^{(1)} - p^{(1)}_t\,y \bigr]\,dx\,dt - \iint_\Sigma \bigl[ \alpha y\,p^{(2)} - \beta u\,p^{(2)} \bigr]\,ds\,dt + \int_\Omega y_0\,p^{(1)}(0)\,dx - \int_\Omega y(T)\,p^{(1)}(T)\,dx,$
we take the derivative with respect to $y$ in a direction $v$. Following the technique of Section 2.1, we use test functions $v \in H^1(Q)$ and, after integration by parts and some waving of hands, by setting the derivative equal to zero we arrive at the weak formulation of the adjoint problem (see Lemma 3.17):
$\iint_Q p\,v_t\,dx\,dt + \iint_Q \nabla p \cdot \nabla v\,dx\,dt + \iint_\Sigma \alpha\,p\,v\,ds\,dt = \int_\Omega p(T)\,v(T)\,dx - \int_\Omega p(0)\,v(0)\,dx.$
In Theorem 3.9 the assumption is made that $v(0) = 0$. The leap to $p(T) = y(T) - y_\Omega$ comes from pp. of the text, where the $p^{(1)} = p^{(2)}$ argument can also be found. Here we end our trip down the rabbit hole of the formal Lagrangian method and continue with the remaining material of this section.
41 Theorem 3.19. Let $\bar u \in U_{ad}$ be a control with associated state $\bar y$, and let $p \in W(0,T)$ be the corresponding adjoint state that solves (3.48). Then $\bar u$ is an optimal control for the optimal nonstationary boundary temperature problem (3.1)-(3.3) if and only if the variational inequality
$\iint_\Sigma \bigl( \beta(x,t)\,p(x,t) + \lambda\,\bar u(x,t) \bigr)\bigl( u(x,t) - \bar u(x,t) \bigr)\,ds(x)\,dt \ge 0 \quad (3.49)$
holds for all $u \in U_{ad}$. Observe that this is where the box constraints come into play in the formal Lagrange method, which shows why they can be omitted from the Lagrangian. $\beta p + \lambda \bar u$ is the $u$ derivative of the Lagrangian at the point $\bar u$. This variational inequality is one way of writing that the constrained problem has no direction of descent from this point.
42 Proof (3.19). Overview of the proof: we use Lemma 2.21 (p. 63) to show equivalence between the optimal solution and the solution of a variational inequality, and then, by the result of Theorem 3.18, transform the inequality into the necessary form. Lemma 2.21 tells us that, for a convex $f$ and a nonempty convex subset $U_{ad}$ of a Banach space, $\bar u$ solves
$\min_{u \in U_{ad}} f(u) := \frac{1}{2}\|Su - y_\Omega\|_{L^2(\Omega)}^2 + \frac{\lambda}{2}\|u\|_{L^2(\Sigma)}^2 \quad (2.43)$
if and only if it solves the variational inequality
$f'(\bar u)(u - \bar u) \ge 0, \quad \forall u \in U_{ad}. \quad (2.44)$
Since $f'(\bar u) = S^*(S\bar u - y_\Omega) + \lambda \bar u$, the variational inequality can be written
$\bigl( S^*(S\bar u - y_\Omega) + \lambda \bar u,\ u - \bar u \bigr)_{L^2(\Sigma)} \ge 0, \quad \forall u \in U_{ad}, \quad (2.45)$
which is equivalently, and more conveniently, written
$\bigl( S\bar u - y_\Omega,\ Su - S\bar u \bigr)_{L^2(\Omega)} + \lambda\,\bigl( \bar u,\ u - \bar u \bigr)_{L^2(\Sigma)} \ge 0, \quad \forall u \in U_{ad}. \quad (2.47)$
43 Proof (3.19) contd. Let $S : L^2(\Sigma) \to L^2(\Omega)$ be the continuous linear operator that, for the homogeneous initial condition $y_0 = 0$, assigns to each control $u$ the final value $y(T)$ of the weak solution $y$ to the state equation. Moreover, let $\hat y = G_0\,y_0$ denote the weak solution corresponding to $y_0$ and $u = 0$. Then it follows from the superposition principle for linear equations that
$y(T) - y_\Omega = Su + (G_0\,y_0)(T) - y_\Omega = Su - z, \quad \text{where } z := y_\Omega - (G_0\,y_0)(T).$
The above control problem then takes the form
$\min_{u \in U_{ad}} f(u) := \frac{1}{2}\|Su - z\|_{L^2(\Omega)}^2 + \frac{\lambda}{2}\|u\|_{L^2(\Sigma)}^2.$
The general variational inequality (2.47) yields, for all $u \in U_{ad}$,
$0 \le \bigl( S\bar u - z,\ S(u - \bar u) \bigr)_{L^2(\Omega)} + \lambda\,\bigl( \bar u,\ u - \bar u \bigr)_{L^2(\Sigma)} = \int_\Omega \bigl( \bar y(T) - y_\Omega \bigr)\bigl( y(T) - \bar y(T) \bigr)\,dx + \lambda \iint_\Sigma \bar u\,(u - \bar u)\,ds\,dt. \quad (3.50)$
Here we have used the identity
$Su - S\bar u = Su + (G_0\,y_0)(T) - (G_0\,y_0)(T) - S\bar u = y(T) - \bar y(T)$
and, once more, that $z = y_\Omega - (G_0\,y_0)(T)$.
44 Proof (3.19) contd. Now put $\tilde y := y - \bar y$ and apply Theorem 3.18 with the specifications $a_\Omega = \bar y(T) - y_\Omega$, $a_Q = 0$, $a_\Sigma = 0$, $b_\Sigma = \beta$, $v = 0$, $w = 0$, $c_0 = 0$, $y := \tilde y$, and $\tilde u := u - \bar u$. Note that $w = 0$ in this situation since, by definition, $(Su)(0) = 0$; the part originating from $y_0$ is incorporated into $z$. It follows that (in the interest of space, the first line is written without differentials)
$\iint_Q a_Q\,y + \iint_\Sigma a_\Sigma\,y + \int_\Omega a_\Omega\,y(\cdot,T) = \iint_Q b_Q\,p\,v + \iint_\Sigma b_\Sigma\,p\,u + \int_\Omega b_\Omega\,p(\cdot,0)\,w$
$\int_\Omega \bigl( \bar y(T) - y_\Omega \bigr)\,\tilde y(\cdot,T)\,dx = \iint_\Sigma \beta\,p\,\tilde u\,ds\,dt$
$\bigl( \bar y(T) - y_\Omega,\ \tilde y(T) \bigr)_{L^2(\Omega)} = \iint_\Sigma \beta\,p\,\tilde u\,ds\,dt.$
Substituting this result into inequality (3.50), we find that
$0 \le \int_\Omega \bigl( \bar y(T) - y_\Omega \bigr)\bigl( y(T) - \bar y(T) \bigr)\,dx + \lambda \iint_\Sigma \bar u\,(u - \bar u)\,ds\,dt = \iint_\Sigma \beta\,p\,(u - \bar u)\,ds\,dt + \lambda \iint_\Sigma \bar u\,(u - \bar u)\,ds\,dt = \iint_\Sigma (\beta\,p + \lambda\,\bar u)(u - \bar u)\,ds\,dt,$
which concludes the proof of the assertion.
45 Next, we employ the method described on page 69 for elliptic problems to derive a number of results concerning the possible form of optimal controls. This is a pointwise view of the control, in contrast to the $L^2$ view in Theorem 3.19.
Theorem 3.20. A control $\bar u \in U_{ad}$ with associated state $\bar y$ is optimal for the problem (3.1)-(3.3) if and only if it satisfies, together with the adjoint state $p$ from (3.48), the following conditions for almost all $(x,t) \in \Sigma$: the weak minimum principle
$\bigl( \beta(x,t)\,p(x,t) + \lambda\,\bar u(x,t) \bigr)\bigl( v - \bar u(x,t) \bigr) \ge 0, \quad \forall v \in \bigl[ u_a(x,t), u_b(x,t) \bigr], \quad (3.51)$
the minimum principle
$\beta(x,t)\,p(x,t)\,\bar u(x,t) + \frac{\lambda}{2}\,\bar u(x,t)^2 = \min_{v \in [u_a(x,t),\,u_b(x,t)]} \Bigl\{ \beta(x,t)\,p(x,t)\,v + \frac{\lambda}{2}\,v^2 \Bigr\}, \quad (3.52)$
and, in the case of $\lambda > 0$, the projection formula
$\bar u(x,t) = P_{[u_a(x,t),\,u_b(x,t)]}\Bigl\{ -\frac{1}{\lambda}\,\beta(x,t)\,p(x,t) \Bigr\}. \quad (3.53)$
For the proof of this assertion one takes, starting from the variational inequality (3.49), the same series of steps that led from Theorem 2.25 to Theorem 2.27 in the elliptic case.
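Numerically, the projection formula (3.53) is just pointwise clipping. The sketch below evaluates it on sampled grid values; the sample arrays for $\beta$ and $p$ are made-up data for illustration.

```python
import numpy as np

# Pointwise evaluation of the projection formula (3.53):
#   u(x,t) = P_[ua(x,t), ub(x,t)] { -(1/lam) * beta(x,t) * p(x,t) },
# applied componentwise to sampled values.  np.clip *is* the projection
# onto the box [ua, ub].

def projected_control(beta, p, lam, ua, ub):
    """Apply P_[ua,ub]{ -beta*p/lam } componentwise."""
    return np.clip(-beta * p / lam, ua, ub)

beta = np.array([1.0, 1.0, 1.0, -2.0])   # illustrative samples of beta(x,t)
p = np.array([0.5, -3.0, 0.25, 1.0])     # illustrative samples of p(x,t)
lam = 1.0
ua, ub = -1.0, 1.0                       # constant box constraints
u = projected_control(beta, p, lam, ua, ub)
# -beta*p/lam = [-0.5, 3.0, -0.25, 2.0]; clipping gives [-0.5, 1.0, -0.25, 1.0]
```

Entries of $-\beta p/\lambda$ inside the box pass through unchanged; entries outside are set to the nearest bound, exactly as in the case analysis of Lemma 2.26.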
46 Proof/Discussion (3.20). Theorem 3.19, like Theorem 2.25, shows that, for a control $\bar u$, associated state $\bar y$, and weak solution $p$ to the adjoint equation, satisfying the variational inequality (3.49) is equivalent to optimality. Rewrite (3.49) in the form
$\iint_\Sigma \bigl( \beta(x,t)\,p(x,t) + \lambda\,\bar u(x,t) \bigr)\,\bar u(x,t)\,ds(x)\,dt \le \iint_\Sigma \bigl( \beta(x,t)\,p(x,t) + \lambda\,\bar u(x,t) \bigr)\,u(x,t)\,ds(x)\,dt, \quad \forall u \in U_{ad}.$
Hence
$\iint_\Sigma \bigl( \beta\,p + \lambda\,\bar u \bigr)\,\bar u\,ds\,dt = \min_{u \in U_{ad}} \iint_\Sigma \bigl( \beta\,p + \lambda\,\bar u \bigr)\,u\,ds\,dt,$
and we can conclude that, assuming the expression inside the parentheses is known, we obtain $\bar u$ as the solution to a linear optimization problem in a function space. This observation forms the basis of the conditional gradient method, discussed in the various sections on numerical methods.
47 Proof/Discussion (3.20) contd. It is intuitively clear... We present a lemma providing insight in the direction of a pointwise form of the solution. It is an exact parallel to Lemma 2.26 from the elliptic case.
Lemma 22 (cf. Lemma 2.26). A necessary and sufficient condition for the variational inequality (3.49) to be satisfied is that, for almost every $(x,t) \in \Sigma$,
$\bar u(x,t) \begin{cases} = u_a(x,t) & \text{if } \beta(x,t)\,p(x,t) + \lambda\,\bar u(x,t) > 0 \\ \in [u_a(x,t), u_b(x,t)] & \text{if } \beta(x,t)\,p(x,t) + \lambda\,\bar u(x,t) = 0 \\ = u_b(x,t) & \text{if } \beta(x,t)\,p(x,t) + \lambda\,\bar u(x,t) < 0. \end{cases} \quad (4)$
An equivalent condition is given by the pointwise variational inequality in $\mathbb{R}$,
$\bigl( \beta(x,t)\,p(x,t) + \lambda\,\bar u(x,t) \bigr)\bigl( v - \bar u(x,t) \bigr) \ge 0, \quad \forall v \in [u_a(x,t), u_b(x,t)], \text{ for a.e. } (x,t) \in \Sigma. \quad (5)$
The proof of Lemma 2.26 goes through for this one, solely with the change of $x$ to $(x,t)$ and $\Omega$ to $\Sigma$. It shows that (3.49) $\Rightarrow$ (4) $\Rightarrow$ (5) $\Rightarrow$ (3.49). The most illuminating portion of that proof is the first implication, adapted on the next slide.
48 Proof (Lemma 22). Let $\bar u$, $u_a$, and $u_b$ be arbitrary but fixed representatives of the corresponding equivalence classes in $L^\infty(\Sigma)$. Suppose (4) is false (but the constraints are obeyed almost everywhere), and consider the measurable sets
$A_+(\bar u) = \{ (x,t) \in \Sigma : \beta(x,t)\,p(x,t) + \lambda\,\bar u(x,t) > 0 \},$
$A_-(\bar u) = \{ (x,t) \in \Sigma : \beta(x,t)\,p(x,t) + \lambda\,\bar u(x,t) < 0 \}.$
By our assumption, there is a set $E_+ \subset A_+(\bar u)$ having positive measure such that $\bar u(x,t) > u_a(x,t)$ for all $(x,t) \in E_+$, or there is a set $E_- \subset A_-(\bar u)$ having positive measure such that $\bar u(x,t) < u_b(x,t)$ for all $(x,t) \in E_-$. In the first case, we define the function $u \in U_{ad}$,
$u(x,t) = \begin{cases} u_a(x,t) & \text{for } (x,t) \in E_+ \\ \bar u(x,t) & \text{for } (x,t) \in \Sigma \setminus E_+. \end{cases}$
Then
$\iint_\Sigma \bigl( \beta(x,t)\,p(x,t) + \lambda\,\bar u(x,t) \bigr)\bigl( u(x,t) - \bar u(x,t) \bigr)\,ds\,dt = \iint_{E_+} \bigl( \beta(x,t)\,p(x,t) + \lambda\,\bar u(x,t) \bigr)\bigl( u_a(x,t) - \bar u(x,t) \bigr)\,ds\,dt < 0,$
since the first factor is positive on $E_+$ and the second is negative. This contradicts (3.49). The other case is handled similarly. We end our incomplete proof of the lemma here.
49 Proof/Discussion (3.20) contd. By a rearrangement of terms, the pointwise variational inequality (5) can be rewritten
$\bigl( \beta(x,t)\,p(x,t) + \lambda\,\bar u(x,t) \bigr)\,\bar u(x,t) \le \bigl( \beta(x,t)\,p(x,t) + \lambda\,\bar u(x,t) \bigr)\,v, \quad \forall v \in [u_a(x,t), u_b(x,t)], \quad (6)$
for almost every $(x,t) \in \Sigma$. Here $v \in \mathbb{R}$; it is not a function. We can now complete our proof of Theorem 3.20.
Proof (Theorem 3.20). The weak minimum principle (3.51) is nothing but a reformulation of (6). The minimum principle is easily verified: a real number $\bar v$ solves, for fixed $(x,t)$, the convex quadratic optimization problem in $\mathbb{R}$,
$\min_{v \in [u_a(x,t),\,u_b(x,t)]} g(v) := \beta(x,t)\,p(x,t)\,v + \frac{\lambda}{2}\,v^2, \quad (7)$
if and only if the variational inequality $g'(\bar v)(v - \bar v) \ge 0$, $\forall v \in [u_a(x,t), u_b(x,t)]$, is satisfied, that is, if
$\bigl( \beta(x,t)\,p(x,t) + \lambda\,\bar v \bigr)(v - \bar v) \ge 0, \quad \forall v \in [u_a(x,t), u_b(x,t)].$
The minimum condition (3.52) follows from taking $\bar v = \bar u(x,t)$. The solution to the quadratic optimization problem (7) in $\mathbb{R}$ is given by the projection formula (3.53).
50 Conclusion of Necessary Conditions for Example 1. In the case $\lambda > 0$, the triple $(\bar u, \bar y, p)$ satisfies the optimality system
$\bar y_t - \Delta \bar y = 0 \qquad\qquad\quad -p_t - \Delta p = 0$
$\partial_\nu \bar y + \alpha \bar y = \beta \bar u \qquad\quad \partial_\nu p + \alpha p = 0$   (3.54)
$\bar y(0) = y_0 \qquad\qquad\qquad p(T) = \bar y(T) - y_\Omega$
$\bar u = P_{[u_a, u_b]}\Bigl\{ -\frac{1}{\lambda}\,\beta\,p \Bigr\}.$
If $\lambda = 0$ then the projection formula has to be replaced by
$\bar u(x,t) \begin{cases} = u_a(x,t) & \text{if } \beta(x,t)\,p(x,t) > 0 \\ \in [u_a(x,t), u_b(x,t)] & \text{if } \beta(x,t)\,p(x,t) = 0 \\ = u_b(x,t) & \text{if } \beta(x,t)\,p(x,t) < 0. \end{cases}$
51 Special Case: $u_a = -\infty$, $u_b = +\infty$. In the case where there are no control constraints, the projection formula yields $\bar u = -\lambda^{-1}\beta\,p$. Hence $\bar u$ can be eliminated from the state equation, and we obtain the following forward-backward system of two parabolic problems for the unknown functions $\bar y$ and $p$:
$\bar y_t - \Delta \bar y = 0 \qquad\qquad\qquad -p_t - \Delta p = 0$
$\partial_\nu \bar y + \alpha \bar y = -\beta^2 \lambda^{-1} p \qquad \partial_\nu p + \alpha p = 0$   (3.55)
$\bar y(0) = y_0 \qquad\qquad\qquad\quad p(T) = \bar y(T) - y_\Omega$
The solution of such systems is not an easy task. (See page 162 for a brief discussion of methods.)
52 Optimal nonstationary heat source. We recall problem (3.38)-(3.40), which in shortened form reads
$\min J(y,u) := \frac{1}{2}\|y - y_\Sigma\|_{L^2(\Sigma)}^2 + \frac{\lambda}{2}\|u\|_{L^2(Q)}^2,$
subject to $u \in U_{ad}$ and to the state system
$y_t - \Delta y = \beta u$
$\partial_\nu y = 0$
$y(0) = 0.$
Invoking the operator $G : L^2(Q) \to W(0,T)$ introduced in (3.36), we can express the solution $y$ to the state system in the form $y = G(\beta u)$. The cost functional involves the observation $y|_\Sigma$. The observation operator $E : y \mapsto y|_\Sigma$ is a continuous linear mapping from $W(0,T)$ into $L^2\bigl(0,T;L^2(\Gamma)\bigr) = L^2(\Sigma)$, which entails that the control-to-observation operator $S : u \mapsto y|_\Sigma$ defined in (3.41) is continuous from $L^2(Q)$ into $L^2(\Sigma)$. Hence the problem is equivalent to the problem $\min_{u \in U_{ad}} f(u)$, where $f$ is the reduced functional introduced in (3.42).
53 As in (3.50), we obtain the following as the necessary optimality condition for $\bar u$:
$\bigl( S\bar u - y_\Sigma,\ Su - S\bar u \bigr)_{L^2(\Sigma)} + \lambda\,\bigl( \bar u,\ u - \bar u \bigr)_{L^2(Q)} \ge 0, \quad \forall u \in U_{ad},$
that is, upon substituting $\bar y = S\bar u$ and $y = Su$,
$\iint_\Sigma (\bar y - y_\Sigma)(y - \bar y)\,ds\,dt + \lambda \iint_Q \bar u\,(u - \bar u)\,dx\,dt \ge 0, \quad \forall u \in U_{ad}. \quad (3.56)$
It is evident how the adjoint state $p$ must be defined, namely as the weak solution to the parabolic problem
$-p_t - \Delta p = 0$ in $Q$
$\partial_\nu p = \bar y - y_\Sigma$ on $\Sigma$
$p(T) = 0$ in $\Omega.$
By virtue of Lemma 3.17, it has a unique weak solution $p$.
54 Theorem 3.21. A control $\bar u \in U_{ad}$ is optimal for the optimal nonstationary heat source problem (3.38)-(3.40) if and only if it satisfies, together with the adjoint state $p$ defined above, the variational inequality
$\iint_Q (\beta\,p + \lambda\,\bar u)(u - \bar u)\,dx\,dt \ge 0, \quad \forall u \in U_{ad}.$
Proof (3.21). This assertion is again a direct consequence of Theorem 3.18, with the specifications $a_\Sigma = \bar y - y_\Sigma$, $a_\Omega = 0$, $a_Q = 0$, $b_Q = \beta$, $b_\Sigma = 0$, and $b_\Omega = 0$. The steps are similar to those in the proof of Theorem 3.19.
As in the previous section, the variational inequality just proved can be transformed into an equivalent pointwise minimum principle for $\bar u$ or, if $\lambda > 0$, into a projection formula. In particular, if $\lambda > 0$,
$\bar u(x,t) = P_{[u_a(x,t),\,u_b(x,t)]}\Bigl\{ -\frac{1}{\lambda}\,\beta(x,t)\,p(x,t) \Bigr\} \quad \text{for a.e. } (x,t) \in Q.$
55 Overview of Numerical Methods. In the following slides we sketch the projected gradient method for the optimal nonstationary boundary temperature problem (the sketch in the text spans 5 pages). Here we briefly mention other published methods which are computationally more efficient:
- Multigrid methods (for the unconstrained case): in 1- and 2-dimensional problems, many methods for solving (3.55) do not require multigrid.
- Primal-dual active set: in each iteration we update the active sets for the upper and lower constraints using $\lambda^{-1} f'(u)$, as described in the text, then solve the unconstrained forward-backward problem for the remaining values.
- Direct solution of the optimality system: substitute $\bar u = P_{[u_a,u_b]}\{-\lambda^{-1}\beta\,p\}$ in the state equation. The system is differentiable except at $u_a$ and $u_b$; good results have been achieved in solving the nonsmooth system.
- Discretize then optimize: fully discretize the parabolic problem and the cost functional, then use existing solvers for finite-dimensional quadratic optimization problems.
56 Projected Gradient Method
As before, let S : u ↦ y(T) for y_0 = 0, and let ŷ be the solution to the homogeneous problem (u = 0) with nonzero initial condition y_0, so that y(x, T) = (Su)(x) + ŷ(x, T). In this way the problem becomes a quadratic Hilbert space optimization problem,

min_{u ∈ U_ad} f(u) := 1/2 ‖Su + ŷ(T) − y_Ω‖²_{L²(Ω)} + λ/2 ‖u‖²_{L²(Σ)}.

We follow the algorithm in iterations, starting with an initial guess u_0, which generates y_0, then p_0, then u_1, y_1, p_1, u_2, y_2, p_2, etc., each in turn. The iterates y_n and p_n are (separately) the solutions of the now familiar state and adjoint equations,

y_t − Δy = 0 in Q        −p_t − Δp = 0 in Q
∂_ν y + α y = β u on Σ     ∂_ν p + α p = 0 on Σ
y(0) = y_0 in Ω          p(T) = y_n(T) − y_Ω in Ω.

The derivative at an iterate u_n is

f′(u_n)v = ∫∫_Σ ( β(x, t) p_n(x, t) + λ u_n(x, t) ) v(x, t) ds dt.

By the Riesz representation theorem, we obtain the usual representation of the reduced gradient, f′(u_n) = β p_n + λ u_n.
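The forward-backward structure above (state forward in time, adjoint backward, gradient from both) survives discretization. As a hedged sketch, the code below uses a generic linear time-stepping scheme y_{k+1} = A y_k + B u_k as a stand-in for a discretized heat equation, and checks the adjoint-based gradient against a finite difference; A, B and all data are made up for illustration.

```python
import numpy as np

def reduced_gradient(A, B, y0, yd, lam, u):
    """Forward sweep for y, backward sweep for p, then grad_k = B'p_{k+1} + lam*u_k,
    for f(u) = 1/2 |y_N - yd|^2 + lam/2 sum_k |u_k|^2."""
    N = len(u)
    y = [y0]
    for k in range(N):                      # state equation, forward in time
        y.append(A @ y[k] + B @ u[k])
    p = [None] * (N + 1)
    p[N] = y[N] - yd                        # terminal condition p(T) = y(T) - y_Omega
    for k in range(N - 1, -1, -1):          # adjoint equation, backward in time
        p[k] = A.T @ p[k + 1]
    grad = np.array([B.T @ p[k + 1] + lam * u[k] for k in range(N)])
    cost = 0.5 * np.sum((y[N] - yd) ** 2) + 0.5 * lam * np.sum(u ** 2)
    return cost, grad

# Finite-difference check of the adjoint gradient on small random data:
rng = np.random.default_rng(0)
A = 0.9 * np.eye(2)
B = rng.standard_normal((2, 1))
y0, yd = np.array([1.0, 0.0]), np.array([0.0, 1.0])
u = rng.standard_normal((5, 1))
cost, grad = reduced_gradient(A, B, y0, yd, 0.1, u)
eps = 1e-6
du = np.zeros_like(u)
du[2, 0] = eps
c_plus, _ = reduced_gradient(A, B, y0, yd, 0.1, u + du)
print(abs((c_plus - cost) / eps - grad[2, 0]))  # small: FD matches adjoint gradient
```

The point of the check is that one backward sweep delivers the whole gradient at the cost of a single extra (adjoint) solve, instead of one perturbed forward solve per control degree of freedom.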
57 Projected Gradient Method contd.
The Algorithm
The algorithm proceeds as follows; suppose that iterates u_1, ..., u_n have been determined.

S1 (New state) Using u_n, solve the state equation for y_n.
S2 (New descent direction) Using y_n, solve the adjoint equation for p_n. Take as descent direction the negative gradient v_n = −f′(u_n) = −(β p_n + λ u_n).
S3 (Step size control) Determine the optimal step size s_n by solving
f( P_{[u_a,u_b]}{ u_n + s_n v_n } ) = min_{s>0} f( P_{[u_a,u_b]}{ u_n + s v_n } ).
S4 Put u_{n+1} := P_{[u_a,u_b]}{ u_n + s_n v_n }, n := n + 1, GO TO S1.

The method is completely analogous to that for the elliptic case. The projection step is necessary because u_n + s_n v_n may not be admissible. Although the method converges only slowly, it is easy to implement and thus very suitable for numerical tests. If we need to solve the problem many times for different data, reducing the problem first can save considerable calculation; an example of this is given later in the text.
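Steps S1-S4 can be sketched on a model finite-dimensional problem min ½‖Mu − b‖² + λ/2‖u‖² with box constraints, where multiplying by M and Mᵀ plays the role of the state and adjoint solves. The exact 1-D search in S3 is replaced here by a simple backtracking search (a simplification, not the text's method); M, b and the bounds are invented test data.

```python
import numpy as np

def projected_gradient(M, b, lam, ua, ub, u0, tol=1e-8, max_iter=500):
    """Projected gradient loop S1-S4 for min 1/2||Mu-b||^2 + lam/2||u||^2
    subject to ua <= u <= ub (componentwise)."""
    def P(u):                                   # projection onto the box
        return np.clip(u, ua, ub)

    def f(u):
        return 0.5 * np.sum((M @ u - b) ** 2) + 0.5 * lam * np.sum(u ** 2)

    u = P(u0)
    for _ in range(max_iter):
        grad = M.T @ (M @ u - b) + lam * u      # S1+S2: reduced gradient
        v = -grad                               # S2: descent direction
        s = 1.0                                 # S3: backtracking line search
        while f(P(u + s * v)) > f(u) - 1e-4 * s * np.dot(grad, grad) and s > 1e-12:
            s *= 0.5
        u_new = P(u + s * v)                    # S4: projected update
        if np.linalg.norm(u_new - u) < tol:
            return u_new
        u = u_new
    return u

M = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([4.0, -3.0])
u = projected_gradient(M, b, lam=0.5, ua=-1.0, ub=1.0, u0=np.zeros(2))
print(u)  # both components hit the bounds: [1, -1]
```

Note that the iteration can stop at a point where the (negative) gradient points out of the admissible box: the projection then leaves the iterate unchanged, which is exactly the discrete analogue of the variational inequality characterizing the optimum.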
58 Bibliography
Optimal Control of Partial Differential Equations: Theory, Methods and Applications. Fredi Tröltzsch. Graduate Studies in Mathematics, Volume 112. American Mathematical Society.