OPTIMALITY CONDITIONS AND POD A-POSTERIORI ERROR ESTIMATES FOR A SEMILINEAR PARABOLIC OPTIMAL CONTROL


O. LASS, S. TRENZ, AND S. VOLKWEIN

Abstract. In the present paper the authors consider an optimal control problem for a parametrized nonlinear parabolic differential equation, which is motivated by lithium-ion battery models. A standard finite element (FE) discretization leads to a large-scale nonlinear optimization problem, so that its numerical solution is very costly. Therefore, a reduced-order modelling based on proper orthogonal decomposition (POD) is applied, so that the number of degrees of freedom is reduced significantly and a fast numerical simulation of the model is possible. To control the error, an a-posteriori error estimator is realized. Numerical experiments show the efficiency of the approach.

1. Introduction

We consider an optimal control problem which is governed by a semilinear parabolic partial differential equation (PDE) and bilateral control constraints. The PDE occurs in lithium-ion battery models (see [8, 28]) as an equation for the concentration of lithium ions. This equation describes the mass transport in the positive electrode of a battery. Notice that the modeling and optimization of lithium-ion batteries has received an increasing amount of attention in the recent past. The discretization of the nonlinear optimal control problem using, e.g., finite element techniques leads to very large discretized systems that are expensive to solve. The goal is to develop a reduced-order model for the nonlinear system of PDEs that is cheap to evaluate. In this paper we apply the method of proper orthogonal decomposition (POD); see, e.g., [13, 17, 25]. POD is used to generate a basis of a subspace that expresses the characteristics of the expected solution. This is in contrast to more general approximation methods, such as the finite element method, which do not correlate with the dynamics of the underlying system. We refer the reader to [3], where the authors apply POD reduced-order modeling to the lithium-ion battery model presented in [8]. When a reduced-order model is used in the optimization process, an error is introduced. Therefore, an a-posteriori error estimator has to be developed in order to quantify the quality of the obtained solution. We here use recent results from

Date: November 26. Mathematics Subject Classification: 49K20, 65K10, 65K30. Key words and phrases: optimal control, semilinear parabolic differential equations, a-posteriori analysis, proper orthogonal decomposition, projected Newton method. The first author gratefully acknowledges support by the German Science Fund project "Numerical and Analytical Methods for Elliptic-Parabolic Systems Appearing in the Modeling of Lithium-Ion Batteries" (Excellence Initiative). The second author gratefully acknowledges support by the German Science Fund project "A-Posteriori-POD Error Estimates for Nonlinear Optimal Control Problems governed by Partial Differential Equations".

[6, 15, 20]. Further, it is important to understand that the reduced-order model obtained by POD is only a local approximation of the nonlinear system. Hence, it is necessary to guarantee that the approximation is good throughout the optimization process. For this we make use of a simplified error indicator known from reduced-basis strategies [9]. Using this error indicator, an adaptive reduced-order model strategy is proposed to solve the optimization problem in an efficient way. However, to obtain the state data underlying the POD reduced-order model, it is necessary to solve the full state system once, and consequently the POD approximations depend on the parameters chosen for this solve. To be more precise, the choice of an initial control turned out to be essential. When using an arbitrary control, the obtained accuracy was not at all satisfying even when using a huge number of basis functions, whereas an optimal POD basis computed from the FE optimally controlled state led to far better results [12]. To overcome this problem, different techniques for improving the POD basis have been proposed; see [2, 18]. We follow a residual-based approach, which leads to an adaptive algorithm that does not require any offline computations.

The paper is organized in the following manner: In Section 2 we formulate the nonlinear, nonconvex optimal control problem. First- and second-order optimality conditions are studied in Section 3. The a-posteriori error analysis is reviewed in Section 4. Then, Section 5 is devoted to the POD method and the reduced-order modeling for the optimal control problem. In Section 6 we present the numerical solution method and the POD basis update regarding the a-posteriori error and the reduced-order residuals. Finally, numerical experiments are presented in Section 7.

2. The optimal control problem

In this section we formulate the optimal control problem, study the semilinear parabolic state equation, prove the existence of optimal solutions and introduce a so-called reduced optimal control problem.

2.1. The problem formulation. Suppose that $\Omega \subset \mathbb{R}^d$, $d \in \{1,2,3\}$, is a bounded and open set with a Lipschitz-continuous boundary $\Gamma = \partial\Omega$. For $T > 0$ we set $Q = (0,T) \times \Omega$ and $\Sigma = (0,T) \times \Gamma$. We set $V = H^1(\Omega)$ and $H = L^2(\Omega)$. For the definition of Sobolev spaces we refer, e.g., to [1, 7]. The space $L^2(0,T;V)$ stands for the space of equivalence classes of measurable abstract functions $\varphi : [0,T] \to V$ which are square integrable, i.e., $\int_0^T \|\varphi(t)\|_V^2\,dt < \infty$. When $t$ is fixed, the expression $\varphi(t)$ stands for the function $\varphi(t,\cdot)$ considered as a function in $\Omega$ only. Recall that
$$W(0,T) = \big\{\varphi \in L^2(0,T;V) \mid \varphi_t \in L^2(0,T;V')\big\}$$
is a Hilbert space supplied with its common inner product; see [5]. We define the bounded, closed and convex parameter set
$$D_{ad} = \{\mu \in \mathbb{R}^m \mid \mu_a \le \mu \le \mu_b\},$$
where $\mu_a = (\mu_a^1,\ldots,\mu_a^m)$, $\mu_b = (\mu_b^1,\ldots,\mu_b^m) \in \mathbb{R}^m$ satisfy $\mu_a^i \le \mu_b^i$ and "$\le$" is interpreted componentwise.

Let us consider the following optimal control problem:
$$\min J(y,\mu) = \frac{1}{2}\int_0^T\int_\Omega \alpha_Q\,|y - y_Q|^2\,dx\,dt + \frac{1}{2}\int_\Omega \alpha_\Omega\,|y(T) - y_\Omega|^2\,dx + \frac{\lambda}{2}\sum_{i=1}^m \kappa_i\,|\mu_i - \mu_i^\circ|^2 \tag{2.1a}$$
subject to the semilinear parabolic differential equation
$$y_t - \Delta y + \sinh\Big(y\sum_{i=1}^m \mu_i b_i\Big) = f \quad\text{in } Q, \tag{2.1b}$$
$$\frac{\partial y}{\partial n} = 0 \quad\text{on } \Sigma, \tag{2.1c}$$
$$y(0) = y_0 \quad\text{in } \Omega \tag{2.1d}$$
and the inequality constraint
$$\mu \in D_{ad}. \tag{2.1e}$$
Throughout we will utilize the following hypotheses.

Assumption 1. (a) The set of admissible parameters is given by $D_{ad} = \{\mu \in \mathbb{R}^m \mid \mu_a \le \mu \le \mu_b\}$ with $0 \le \mu_a \le \mu_b$. The data of the state equation satisfy $b_1,\ldots,b_m \in L^\infty(\Omega)$ with $b_i \ge 0$ almost everywhere (a.e.) in $\Omega$, $f \in L^r(Q)$ with $r > d/2 + 1$ and $y_0 \in L^\infty(\Omega)$.
(b) For the cost functional we assume that the weighting functions $\alpha_Q \in L^\infty(Q)$, $\alpha_\Omega \in L^\infty(\Omega)$ are non-negative, that the desired states satisfy $y_Q \in L^\infty(Q)$, $y_\Omega \in L^\infty(\Omega)$, that the regularization parameters $\kappa_1,\ldots,\kappa_m,\lambda$ are nonnegative scalars and that the nominal parameter $\mu^\circ = (\mu_1^\circ,\ldots,\mu_m^\circ)$ belongs to $\mathbb{R}^m$.

2.2. The semilinear parabolic equation. Let us introduce the nonlinear function $d : \Omega \times \mathbb{R} \times D_{ad} \to \mathbb{R}$ by
$$d(x,y,\mu) = \sinh\Big(y\sum_{i=1}^m \mu_i b_i(x)\Big), \quad (x,y,\mu) \in \Omega \times \mathbb{R} \times D_{ad}. \tag{2.2}$$
For fixed $\mu \in \mathbb{R}^m$ we consider the Nemytskii operator $\Phi_\mu : L^\infty(Q) \to L^\infty(Q)$ given by $(\Phi_\mu(y))(t,x) = d(x,y(t,x),\mu)$ for almost all (f.a.a.) $(t,x) \in [0,T] \times \Omega$. It follows by the same arguments as in [26, Lemma 4.12] that the mapping $\Phi_\mu$ is twice continuously Fréchet-differentiable from $L^\infty(Q)$ to $L^\infty(Q)$ and we have
$$\big(\Phi_\mu'(y)y_\delta\big)(t,x) = d_y(x,y(t,x),\mu)\,y_\delta(t,x) = \Big(\sum_{j=1}^m \mu_j b_j(x)\Big)\cosh\Big(y(t,x)\sum_{i=1}^m \mu_i b_i(x)\Big)\,y_\delta(t,x),$$
$$\big(\Phi_\mu''(y)(y_\delta,\tilde y_\delta)\big)(t,x) = d_{yy}(x,y(t,x),\mu)\,y_\delta(t,x)\,\tilde y_\delta(t,x) = \Big(\sum_{j=1}^m \mu_j b_j(x)\Big)^2\sinh\Big(y(t,x)\sum_{i=1}^m \mu_i b_i(x)\Big)\,y_\delta(t,x)\,\tilde y_\delta(t,x)$$
for all $y, y_\delta, \tilde y_\delta \in L^\infty(Q)$ and f.a.a. $(t,x) \in [0,T] \times \Omega$.

Definition 2.1. A function $y \in W(0,T) \cap L^\infty(Q)$ is called a weak solution to (2.1b)-(2.1d) provided $y(0) = y_0$ holds and
$$\langle y_t(t),\varphi\rangle_{V',V} + \int_\Omega \nabla y(t)\cdot\nabla\varphi + d(\cdot,y(t),\mu)\,\varphi\,dx = \int_\Omega f(t)\,\varphi\,dx \tag{2.3}$$
is satisfied for all $\varphi \in V$ and f.a.a. $t \in [0,T]$, where $d$ is given by (2.2).

We define the state space $Y = W(0,T) \cap L^\infty(Q)$ endowed with the norm $\|y\|_Y = \|y\|_{W(0,T)} + \|y\|_{L^\infty(Q)}$. The next result follows from Theorem 5.5 in [26, p. 213] and the subsequent remark. We also refer to [4] and [24] for a detailed proof.

Theorem 2.2. Let Assumption 1(a) hold. Then, for any $\mu \in D_{ad}$ there exists a unique weak solution $y = y(\mu)$ to (2.1b)-(2.1d). Moreover, there exists a constant $C > 0$, independent of $\mu$, $f$, $y_0$ but dependent on $\mu_a$ and $\mu_b$, such that
$$\|y\|_Y \le C\big(\|f\|_{L^r(Q)} + \|y_0\|_{L^\infty(\Omega)}\big). \tag{2.4}$$

Remark 2.3. (1) The proof of Theorem 2.2 relies essentially on properties of our nonlinearity (2.2). Since the $b_i$'s are essentially bounded in $\Omega$, the mapping $d(\cdot,y,\mu) : \Omega \to \mathbb{R}$ is measurable and essentially bounded in $\Omega$ for any fixed $(y,\mu) \in \mathbb{R} \times D_{ad}$. Moreover, $d(x,0,\mu) = 0$ holds, and the mapping $y \mapsto d(x,y,\mu)$ is strictly monotonically increasing and at least twice continuously differentiable (i.e., $d_y$ and $d_{yy}$ are locally Lipschitz-continuous) f.a.a. $x \in \Omega$ and for all $\mu \in \mathbb{R}^m$.
(2) From $\mu \ge \mu_a \ge 0$ and $b_i \ge 0$ a.e. in $\Omega$ for $1 \le i \le m$ we infer that
$$d_y(x,y,\mu) = \Big(\sum_{j=1}^m \mu_j b_j(x)\Big)\cosh\Big(y\sum_{i=1}^m \mu_i b_i(x)\Big) \ge 0, \quad d_{yy}(x,y,\mu) = \Big(\sum_{j=1}^m \mu_j b_j(x)\Big)^2\sinh\Big(y\sum_{i=1}^m \mu_i b_i(x)\Big)$$
f.a.a. $x \in \Omega$ and for all $(y,\mu) \in \mathbb{R} \times D_{ad}$. Further, we have
$$d_y(x,0,\mu) = \sum_{i=1}^m \mu_i b_i(x) \le m\,\max_{1\le i\le m}\big\{\mu_b^i\,\|b_i\|_{L^\infty(\Omega)}\big\} =: K \quad\text{and}\quad d_{yy}(x,0,\mu) = 0$$
f.a.a. $x \in \Omega$ and for all $\mu \in D_{ad}$.
(3) Since $y_0 \in L^\infty(\Omega)$ holds, we only get that the weak solution $y$ to (2.1b)-(2.1d) belongs to $C((0,T] \times \bar\Omega)$. If $y_0 \in C(\bar\Omega)$ is fulfilled, we even have $y \in C(\bar Q)$; see [4, 24].

Motivated by Theorem 2.2 we introduce the solution operator $G : D_{ad} \to Y$, where $y = G(\mu)$ is the weak solution to (2.1b)-(2.1d) for given $\mu \in D_{ad}$. The next result is proved in the Appendix.

Proposition 2.4. Assume that Assumption 1(a) holds. Then, the solution operator $G$ is globally Lipschitz-continuous.

2.3. Existence of optimal controls. Our optimal control problem is
$$\textbf{(P)}\qquad \min J(y,\mu) \quad\text{subject to}\quad (y,\mu) \in F(P),$$
where we define the feasible set by
$$F(P) = \{(y,\mu) \in X_{ad} \mid y \text{ is a weak solution to (2.1b)-(2.1d) for } \mu\}$$
with $X_{ad} = Y \times D_{ad}$. Next we turn to the existence of optimal solutions to (P).

Theorem 2.5. Let Assumption 1 hold. Then, (P) has at least one global optimal solution $\bar x = (\bar y,\bar\mu)$.

Proof. Note that the cost functional is nonnegative. Moreover, $D_{ad} \ne \emptyset$ holds. Let $\tilde\mu \in D_{ad}$ be chosen arbitrarily and $\tilde y = y(\tilde\mu)$ the corresponding weak solution to (2.1b)-(2.1d). Then we have
$$\inf\{J(y,\mu) \mid (y,\mu) \in F(P)\} \le J(\tilde y,\tilde\mu) < \infty.$$
Let $\{(y^n,\mu^n)\}_{n\in\mathbb{N}}$ denote a minimizing sequence in $F(P)$ for the cost functional $J$, i.e., $\lim_{n\to\infty} J(y^n,\mu^n) = \inf\{J(y,\mu) \mid (y,\mu) \in F(P)\}$. Since $D_{ad}$ is bounded, the sequence $\{\mu^n\}_{n\in\mathbb{N}}$ is bounded in $\mathbb{R}^m$. Moreover, $D_{ad}$ is closed. Hence, there is a subsequence, again denoted by $\{\mu^n\}_{n\in\mathbb{N}}$, and an element $\bar\mu \in D_{ad}$ so that
$$\mu^n \to \bar\mu \quad\text{in } \mathbb{R}^m \text{ as } n \to \infty. \tag{2.5}$$
Due to (2.5) we have $\sum_{i=1}^m \mu_i^n b_i \to \sum_{i=1}^m \bar\mu_i b_i$ in $L^\infty(\Omega)$ as $n \to \infty$. Thus, there exists a constant $C_1 > 0$ satisfying
$$\Big\|\sum_{i=1}^m \mu_i^n b_i\Big\|_{L^\infty(\Omega)} \le C_1. \tag{2.6}$$
From (2.4) and (2.5) we infer the existence of a constant $C_2 > 0$ independent of $n$ satisfying
$$\|y^n\|_{L^\infty(Q)} \le C_2\big(\|f\|_{L^r(Q)} + \|y_0\|_{L^\infty(\Omega)}\big). \tag{2.7}$$
Using (2.6) and (2.7) we derive that there is a constant $C_3 > 0$ such that
$$\Big\|\cosh\Big(s\,y^n\sum_{i=1}^m \mu_i^n b_i\Big)\Big\|_{L^\infty(Q)} \le C_3 \quad\text{for every } s \in [0,1]. \tag{2.8}$$
We define the sequence $\{z^n\}_{n\in\mathbb{N}}$ by $z^n(t,x) = \sinh\big(y^n(t,x)\sum_{i=1}^m \mu_i^n b_i(x)\big)$ f.a.a. $(t,x) \in Q$. Applying the mean value theorem we have
$$z^n(t,x) = \sinh\Big(y^n(t,x)\sum_{i=1}^m \mu_i^n b_i(x)\Big) - \sinh(0) = \cosh\Big(s_n\,y^n(t,x)\sum_{i=1}^m \mu_i^n b_i(x)\Big)\,y^n(t,x)\sum_{i=1}^m \mu_i^n b_i(x)$$

f.a.a. $(t,x) \in Q$ with $s_n \in (0,1)$. Thus, from (2.8), (2.4) and (2.7) it follows that
$$|z^n(t,x)| \le C_1 C_2 C_3\big(\|f\|_{L^r(Q)} + \|y_0\|_{L^\infty(\Omega)}\big) \quad\text{f.a.a. } (t,x) \in Q.$$
Consequently, we can assume that $\{z^n\}_{n\in\mathbb{N}}$ contains a subsequence, again denoted by $\{z^n\}_{n\in\mathbb{N}}$, which converges weakly to an element $z \in L^r(Q)$. Now we can use the same arguments as in the proof of Theorem 5.7 in [26] to derive the existence of an element $\bar y \in L^\infty(Q)$ so that
$$y^n \to \bar y \quad\text{in } L^\infty(Q) \text{ as } n \to \infty. \tag{2.9}$$
From (2.5) and (2.9) we infer that
$$\sinh\Big(y^n\sum_{i=1}^m \mu_i^n b_i\Big) \to \sinh\Big(\bar y\sum_{i=1}^m \bar\mu_i b_i\Big) \quad\text{in } L^\infty(Q) \text{ as } n \to \infty.$$
Next we consider (2.1b)-(2.1d) in the form
$$y_t^n - \Delta y^n = -z^n + f \text{ in } Q, \qquad \frac{\partial y^n}{\partial n} = 0 \text{ on } \Sigma, \qquad y^n(0) = y_0 \text{ in } \Omega.$$
Now we can proceed as in the proof of Theorem 5.7 in [26] to prove the claim. Recall that in our case the control variable $\mu$ belongs to a finite-dimensional set. $\square$

2.4. The reduced control problem. Using the parameter-to-state operator $G : D_{ad} \to Y$ we define the reduced cost functional as
$$\hat J(\mu) = J(G(\mu),\mu) \quad\text{for } \mu \in D_{ad}.$$
Then, (P) is equivalent to the reduced problem
$$(\hat{\textbf{P}})\qquad \min \hat J(\mu) \quad\text{s.t.}\quad \mu \in D_{ad}.$$
It follows from Theorem 2.5 that $(\hat P)$ possesses at least a global solution $\bar\mu \in D_{ad}$. Moreover, the pair $\bar x = (G(\bar\mu),\bar\mu)$ is a local solution to (P).

3. First- and second-order optimality conditions

To characterize a local optimal solution to (P) or $(\hat P)$ we derive optimality conditions. Our approach is based on the formal Lagrange technique as it is described in [26]. Let us introduce the Lagrange function $L$ associated with (P), considering the weak formulation (2.3):
$$L(y,\mu,p) = \frac{1}{2}\int_0^T\int_\Omega \alpha_Q\,|y-y_Q|^2\,dx\,dt + \frac{1}{2}\int_\Omega \alpha_\Omega\,|y(T)-y_\Omega|^2\,dx + \frac{\lambda}{2}\sum_{i=1}^m \kappa_i\,|\mu_i-\mu_i^\circ|^2 + \int_0^T \langle y_t(t),p(t)\rangle_{V',V}\,dt + \int_0^T\int_\Omega \nabla y(t)\cdot\nabla p(t) + \big(d(\cdot,y(t),\mu) - f(t)\big)\,p(t)\,dx\,dt$$
for $(y,\mu,p) \in Y \times \mathbb{R}^m \times L^2(0,T;V)$, where the nonlinearity $d$ has been introduced in (2.2). Notice that the initial condition and the inequality constraints for $\mu$ are not eliminated by introducing corresponding Lagrange multipliers.

3.1. First-order necessary optimality conditions. Let Assumption 1 be satisfied. Suppose that $\bar\mu \in D_{ad}$ is a local optimal solution to $(\hat P)$ and $\bar y = G(\bar\mu)$ the associated optimal state. To derive first-order optimality conditions we have to differentiate the reduced cost functional $\hat J$ with respect to the parameter $\mu$. Hence, we must compute the derivative of the mapping $G$. Proposition 2.4 implies the following proposition. For a proof we refer the reader to the Appendix.

Proposition 3.1. Let Assumption 1(a) hold. Then, the operator $G$ is Fréchet-differentiable. Let $\tilde y = G(\mu)$ and $y = G'(\mu)\mu^\delta$ for $\mu \in D_{ad}$ and $\mu^\delta \in \mathbb{R}^m$. Then, $y \in Y$ is the weak solution to the linear parabolic problem
$$y_t - \Delta y + d_y(\cdot,\tilde y,\mu)\,y = -d_\mu(\cdot,\tilde y,\mu)\,\mu^\delta \text{ in } Q, \qquad \frac{\partial y}{\partial n} = 0 \text{ on } \Sigma, \qquad y(0) = 0 \text{ in } \Omega, \tag{3.1}$$
where $d_y$ is given in Remark 2.3 and the row vector $d_\mu(\cdot,\tilde y,\mu)$ has the components
$$d_{\mu_i}(x,\tilde y(t,x),\mu) = \tilde y(t,x)\cosh\Big(\tilde y(t,x)\sum_{j=1}^m \mu_j b_j(x)\Big)\,b_i(x)$$
for $1 \le i \le m$ and f.a.a. $(t,x) \in Q$. Furthermore,
$$\|y\|_Y \le C\,\|\mu^\delta\|_2 \tag{3.2}$$
with a constant $C > 0$ depending on $T$, $\Omega$, $\tilde y$, $\mu$, $m$ and the $b_i$'s. In (3.2) we denote by $\|\cdot\|_2$ the Euclidean norm.

Remark 3.2. It follows from (3.2) that the linear operator $G'(\mu)$ is bounded.

Next we consider the following first-order conditions; see, e.g., [11, 26]. For that purpose we study
$$L_y(\bar y,\bar\mu,\bar p)y = 0 \quad\text{for all } y \in Y \text{ with } y(0) = 0, \tag{3.3a}$$
$$L_\mu(\bar y,\bar\mu,\bar p)(\mu - \bar\mu) \ge 0 \quad\text{for all } \mu \in D_{ad}. \tag{3.3b}$$
The Fréchet derivative $L_y(\bar y,\bar\mu,\bar p)$ in a direction $y \in Y$ with $y(0) = 0$ is given by
$$L_y(\bar y,\bar\mu,\bar p)y = \int_0^T\int_\Omega \alpha_Q(\bar y - y_Q)\,y\,dx\,dt + \int_\Omega \alpha_\Omega\big(\bar y(T) - y_\Omega\big)\,y(T)\,dx + \int_0^T \langle y_t,\bar p\rangle_{V',V} + \int_\Omega \nabla y\cdot\nabla\bar p + d_y(\cdot,\bar y,\bar\mu)\,y\,\bar p\,dx\,dt. \tag{3.4}$$
From (3.3a) and (3.4) we infer the adjoint (or dual) equation for $\bar p$, here written in its strong form:
$$-\bar p_t - \Delta\bar p + d_y(\cdot,\bar y,\bar\mu)\,\bar p = \alpha_Q(y_Q - \bar y) \text{ in } Q, \qquad \frac{\partial\bar p}{\partial n} = 0 \text{ on } \Sigma, \qquad \bar p(T) = \alpha_\Omega\big(y_\Omega - \bar y(T)\big) \text{ in } \Omega. \tag{3.5}$$

Proposition 3.3. Let Assumption 1 be satisfied. Suppose that $\bar\mu \in D_{ad}$ is a local optimal solution to $(\hat P)$ with associated optimal state $\bar y = G(\bar\mu)$. Then, there exists

a unique Lagrange multiplier $\bar p \in Y$ satisfying $\bar p(T) = \alpha_\Omega(y_\Omega - \bar y(T))$ in $H$ and
$$-\langle\bar p_t(t),\varphi\rangle_{V',V} + \int_\Omega \nabla\bar p(t)\cdot\nabla\varphi + d_y(\cdot,\bar y(t),\bar\mu)\,\bar p(t)\,\varphi\,dx = \int_\Omega \alpha_Q(t)\big(y_Q(t) - \bar y(t)\big)\,\varphi\,dx$$
for all $\varphi \in V$ and f.a.a. $t \in [0,T]$, where $d_y$ is given in Remark 2.3. Moreover, there exists a constant $\hat C > 0$ with
$$\|\bar p\|_Y \le \hat C\big(\|\alpha_Q(\bar y - y_Q)\|_{L^\infty(Q)} + \|\alpha_\Omega(\bar y(T) - y_\Omega)\|_{L^\infty(\Omega)}\big). \tag{3.6}$$

Proof. For $\bar\mu \in D_{ad}$ and $\bar y = G(\bar\mu)$ the function
$$d_y(x,\bar y(t,x),\bar\mu) = \Big(\sum_{j=1}^m \bar\mu_j b_j(x)\Big)\cosh\Big(\bar y(t,x)\sum_{i=1}^m \bar\mu_i b_i(x)\Big), \quad (t,x) \in Q,$$
is essentially bounded and nonnegative (see Remark 2.3). Thus, the proof follows from Theorem 5.5 in [26, p. 213]. $\square$

The Fréchet derivative $L_\mu(\bar y,\bar\mu,\bar p)$ in a direction $\mu \in \mathbb{R}^m$ has the form
$$L_\mu(\bar y,\bar\mu,\bar p)\mu = \sum_{i=1}^m\Big(\lambda\kappa_i(\bar\mu_i - \mu_i^\circ) + \int_0^T\int_\Omega d_{\mu_i}(\cdot,\bar y,\bar\mu)\,\bar p\,dx\,dt\Big)\mu_i. \tag{3.7}$$
Combining (3.3b) and (3.7) we derive the variational inequality
$$\sum_{i=1}^m\Big(\lambda\kappa_i(\bar\mu_i - \mu_i^\circ) + \int_0^T\int_\Omega d_{\mu_i}(\cdot,\bar y,\bar\mu)\,\bar p\,dx\,dt\Big)(\mu_i - \bar\mu_i) \ge 0 \tag{3.8}$$
for all $\mu = (\mu_1,\ldots,\mu_m) \in D_{ad}$. Summarizing, we infer by standard arguments the following result; see, e.g., [11, 26] for more details.

Theorem 3.4. Let Assumption 1 hold. Suppose that $\bar\mu \in D_{ad}$ is a local optimal solution to $(\hat P)$ with associated optimal state $\bar y = G(\bar\mu)$. Let $\bar p$ denote the associated Lagrange multiplier introduced in Proposition 3.3. Then, first-order necessary optimality conditions for $(\hat P)$ are given by the variational inequality (3.8).

Remark 3.5. Since $L_y(\bar y,\bar\mu,\bar p) : Y \to \mathbb{R}$ is linear and bounded, we write $L_y(\bar y,\bar\mu,\bar p)y = \langle L_y(\bar y,\bar\mu,\bar p),y\rangle_{Y',Y}$ for $y \in Y$. Analogously, $L_\mu(\bar y,\bar\mu,\bar p) : \mathbb{R}^m \to \mathbb{R}$ is linear and bounded. Therefore, the derivative $L_\mu(\bar y,\bar\mu,\bar p)$ can be interpreted as a row vector with the components
$$L_{\mu_i}(\bar y,\bar\mu,\bar p) = \lambda\kappa_i(\bar\mu_i - \mu_i^\circ) + \int_0^T\int_\Omega d_{\mu_i}(\cdot,\bar y,\bar\mu)\,\bar p\,dx\,dt.$$
Let us define the column vector $\nabla_\mu L(\bar y,\bar\mu,\bar p) = L_\mu(\bar y,\bar\mu,\bar p)^\top$ as the gradient of $L$ with respect to $\mu$.

We can characterize the gradient of the reduced cost functional; see Section 2.4. It follows by standard arguments [11] that the derivative $\hat J'(\mu)$ of the reduced cost functional $\hat J$ at a given $\mu \in D_{ad}$ is given by the row vector $\hat J'(\mu) = (\hat J_{\mu_1}(\mu),\ldots,\hat J_{\mu_m}(\mu))$ with the components
$$\hat J_{\mu_i}(\mu) = \lambda\kappa_i(\mu_i - \mu_i^\circ) + \int_0^T\int_\Omega d_{\mu_i}(\cdot,y,\mu)\,p\,dx\,dt, \quad i = 1,\ldots,m,$$
where $y = G(\mu)$ holds and $p$ is the weak solution to
$$-p_t - \Delta p + d_y(y,\mu)\,p = \alpha_Q(y_Q - y) \text{ in } Q, \qquad \frac{\partial p}{\partial n} = 0 \text{ on } \Sigma, \qquad p(T) = \alpha_\Omega\big(y_\Omega - y(T)\big) \text{ in } \Omega. \tag{3.9}$$
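For illustration, the gradient components can be evaluated from discrete state and adjoint trajectories as in the following sketch. The nodal arrays Y and P, the shape-function matrix B and the quadrature weights w_x, w_t are hypothetical names for quantities supplied by a discretization; this is only a minimal numerical transcription of the formula above, not the authors' implementation.

```python
import numpy as np

def reduced_gradient(mu, mu_nom, kappa, lam, Y, P, B, w_x, w_t):
    """Evaluate  J'_i(mu) = lam*kappa_i*(mu_i - mu_nom_i)
                          + int_Q y*cosh(y*sum_j mu_j b_j)*b_i*p dx dt.

    Y, P : (n_t, n_x) arrays of state and adjoint values on the grid
    B    : (m, n_x) array, B[i] = b_i evaluated at the grid points
    w_x, w_t : spatial and temporal quadrature weights (assumed given)
    """
    mb = mu @ B                         # sum_j mu_j b_j(x), shape (n_x,)
    d_mu = Y * np.cosh(Y * mb)          # y * cosh(y * mb), broadcast in time
    grad = np.empty(len(mu))
    for i in range(len(mu)):
        integrand = d_mu * B[i] * P     # d_{mu_i}(x, y, mu) * p
        grad[i] = lam * kappa[i] * (mu[i] - mu_nom[i]) \
                  + w_t @ (integrand @ w_x)
    return grad
```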

In the sequel we denote by the column vector $\nabla\hat J(\mu) = \hat J'(\mu)^\top$ the gradient of $\hat J$ at $\mu$.

3.2. Second-order sufficient optimality conditions. In this section we turn to second-order optimality conditions. For that purpose we make use of the following result, which is proved in the Appendix.

Proposition 3.6. If Assumption 1(a) holds, the mapping $G$ is twice continuously Fréchet-differentiable on $D_{ad}$. In particular, for $\mu \in D_{ad}$ the function
$$z = G''(\mu)(\mu^1,\mu^2), \quad \mu^1,\mu^2 \in \mathbb{R}^m,$$
satisfies the linear parabolic problem
$$z_t - \Delta z + d_y(\cdot,y,\mu)\,z = -d_{yy}(\cdot,y,\mu)\,y^1 y^2 \text{ in } Q, \qquad \frac{\partial z}{\partial n} = 0 \text{ on } \Sigma, \qquad z(0) = 0 \text{ in } \Omega,$$
where $d_y$, $d_{yy}$ are given in Remark 2.3 and the directions $y^i$, $i \in \{1,2\}$, are the weak solutions to the linear parabolic problems
$$y_t^i - \Delta y^i + d_y(\cdot,y,\mu)\,y^i = -d_\mu(y,\mu)\,\mu^i \text{ in } Q, \qquad \frac{\partial y^i}{\partial n} = 0 \text{ on } \Sigma, \qquad y^i(0) = 0 \text{ in } \Omega.$$

Let $\bar\mu \in D_{ad}$ be a local optimal solution to $(\hat P)$. By $\bar y, \bar p \in Y$ we denote the associated state and adjoint variable, respectively. For a given $\tau > 0$ the set of active constraints at $\bar\mu$ is defined as
$$A_\tau(\bar\mu) = \Big\{i \in \{1,\ldots,m\} : \Big|\lambda\kappa_i(\bar\mu_i - \mu_i^\circ) + \int_0^T\int_\Omega d_{\mu_i}(\cdot,\bar y,\bar\mu)\,\bar p\,dx\,dt\Big| > \tau\Big\}.$$
Further, the critical cone $C_\tau(\bar\mu)$ is the set of all $\mu \in \mathbb{R}^m$ satisfying, for $i \in \{1,\ldots,m\}$,
$$\mu_i = 0 \text{ if } i \in A_\tau(\bar\mu), \qquad \mu_i \ge 0 \text{ if } \bar\mu_i = \mu_a^i \text{ and } i \notin A_\tau(\bar\mu), \qquad \mu_i \le 0 \text{ if } \bar\mu_i = \mu_b^i \text{ and } i \notin A_\tau(\bar\mu).$$
Notice that the Lagrangian is twice continuously Fréchet-differentiable. By $L_{xx}$ we denote the second derivative with respect to the pair $x = (y,\mu)$. Then, a second-order sufficient optimality condition is formulated in the next theorem.

Theorem 3.7. Let Assumption 1 hold. Suppose that $\bar\mu \in D_{ad}$ is a local solution to $(\hat P)$ and $\bar y = G(\bar\mu)$ denotes the associated optimal state. Let $\bar p \in Y$ be the corresponding Lagrange multiplier introduced in Proposition 3.3. Assume that there exist scalars $\tau > 0$ and $\delta > 0$ satisfying the second-order condition
$$L_{xx}(\bar y,\bar\mu,\bar p)(x,x) \ge \delta\,\|\mu\|_2^2 \tag{3.10}$$
for all $x = (y,\mu) \in W(0,T) \times C_\tau(\bar\mu)$ with
$$y_t - \Delta y + d_y(\cdot,\bar y,\bar\mu)\,y = -d_\mu(\cdot,\bar y,\bar\mu)\,\mu \text{ in } Q, \qquad \frac{\partial y}{\partial n} = 0 \text{ on } \Sigma, \qquad y(0) = 0 \text{ in } \Omega.$$
Then, there are scalars $\varepsilon > 0$, $\sigma > 0$ such that
$$\hat J(\mu) \ge \hat J(\bar\mu) + \sigma\,\|\mu - \bar\mu\|_2^2 \quad\text{for all } \mu \in D_{ad} \text{ with } \|\mu - \bar\mu\|_2 \le \varepsilon.$$
Hence, $\bar\mu$ is a strict local minimizer.

For a proof of Theorem 3.7 we refer the reader to [26, Theorem 5.17]. To give sufficient conditions for (3.10) we derive the second derivative of the Lagrangian:
$$L_{yy}(\bar y,\bar\mu,\bar p)(y,\tilde y) = \int_0^T\int_\Omega \big(\alpha_Q + d_{yy}(\cdot,\bar y,\bar\mu)\,\bar p\big)\,y\tilde y\,dx\,dt + \int_\Omega \alpha_\Omega\,y(T)\tilde y(T)\,dx,$$
$$L_{\mu\mu}(\bar y,\bar\mu,\bar p)(\mu,\tilde\mu) = \mu^\top\Big(\lambda\,\mathrm{diag}(\kappa_1,\ldots,\kappa_m) + \int_0^T\int_\Omega d_{\mu\mu}(\cdot,\bar y,\bar\mu)\,\bar p\,dx\,dt\Big)\tilde\mu,$$
$$L_{y\mu}(\bar y,\bar\mu,\bar p)(y,\mu) = \int_0^T\int_\Omega \big(d_{y\mu}(\cdot,\bar y,\bar\mu)\,y\,\mu\big)\,\bar p\,dx\,dt = L_{\mu y}(\bar y,\bar\mu,\bar p)(\mu,y),$$
where the symmetric Hessian matrix $d_{\mu\mu}(\cdot,\bar y,\bar\mu)$ is an $m \times m$ matrix given by
$$\big(d_{\mu\mu}(\cdot,\bar y,\bar\mu)\big)_{ij} = \sinh\Big(\bar y\sum_{l=1}^m \bar\mu_l b_l\Big)\,\bar y^2\,b_i b_j, \quad 1 \le i,j \le m,$$
and the mixed derivative reads
$$d_{y\mu}(\cdot,\bar y,\bar\mu)\,y\,\mu = \sum_{i=1}^m d_{y\mu_i}(\cdot,\bar y,\bar\mu)\,y\,\mu_i = \sum_{i=1}^m\Big(\bar y\sum_{l=1}^m \bar\mu_l b_l\,\sinh\Big(\bar y\sum_{j=1}^m \bar\mu_j b_j\Big) + \cosh\Big(\bar y\sum_{j=1}^m \bar\mu_j b_j\Big)\Big)\,b_i\,y\,\mu_i.$$
We set $\eta = \lambda\,\min_{1\le i\le m}\kappa_i$ and suppose that $\eta > 0$ holds. Using $(\bar y,\bar\mu) \in L^\infty(Q) \times D_{ad}$ we define
$$C_1 = |Q|\,\|d_{yy}(\cdot,\bar y,\bar\mu)\|_{L^\infty(Q)} < \infty, \quad C_2 = 2\,|Q|\Big(\sum_{i=1}^m \|d_{y\mu_i}(\cdot,\bar y,\bar\mu)\|_{L^\infty(Q)}^2\Big)^{1/2} < \infty, \quad C_3 = |Q|\,\max_{\bar Q}\,\|d_{\mu\mu}(\cdot,\bar y,\bar\mu)\|_2 < \infty,$$

where $|Q|$ stands for the finite Lebesgue measure of $Q$ and $\|\cdot\|_2$ denotes the spectral norm for symmetric matrices. Recall that $\alpha_Q$ and $\alpha_\Omega$ are nonnegative. Moreover,
$$\int_0^T\int_\Omega \varphi\psi\,dx\,dt \le \|\varphi\|_{L^2(Q)}\|\psi\|_{L^2(Q)} \le |Q|\,\|\varphi\|_{L^\infty(Q)}\|\psi\|_{L^\infty(Q)} \le |Q|\,\|\varphi\|_Y\|\psi\|_Y \quad\text{for } \varphi,\psi \in Y.$$
Thus, applying (3.2) and (3.6) we infer that
$$L_{xx}(\bar y,\bar\mu,\bar p)(x,x) = L_{yy}(\bar y,\bar\mu,\bar p)(y,y) + 2L_{y\mu}(\bar y,\bar\mu,\bar p)(y,\mu) + L_{\mu\mu}(\bar y,\bar\mu,\bar p)(\mu,\mu)$$
$$\ge \eta\,\|\mu\|_2^2 - \Big|\int_0^T\int_\Omega \big(d_{yy}(\cdot,\bar y,\bar\mu)\,y^2 + 2\,d_{y\mu}(\cdot,\bar y,\bar\mu)\,y\,\mu\big)\,\bar p\,dx\,dt\Big| - \Big|\int_0^T\int_\Omega \mu^\top d_{\mu\mu}(\cdot,\bar y,\bar\mu)\,\mu\,\bar p\,dx\,dt\Big|$$
$$\ge \eta\,\|\mu\|_2^2 - C_1\,\|y\|_Y^2\,\|\bar p\|_Y - C_2\,\|\mu\|_2\,\|y\|_Y\,\|\bar p\|_Y - C_3\,\|\mu\|_2^2\,\|\bar p\|_Y$$
$$\ge \Big(\eta - C_4\big(\|\alpha_Q(\bar y - y_Q)\|_{L^\infty(Q)} + \|\alpha_\Omega(\bar y(T) - y_\Omega)\|_{L^\infty(\Omega)}\big)\Big)\|\mu\|_2^2$$
for all $(y,\mu) \in W(0,T)\times\mathbb{R}^m$ satisfying (3.1), where we put $C_4 = (C_1 C^2 + C_2 C + C_3)\,\hat C$ with $C$ from (3.2) and $\hat C$ from (3.6). Thus, (3.10) holds provided
$$\|\alpha_Q(\bar y - y_Q)\|_{L^\infty(Q)} + \|\alpha_\Omega(\bar y(T) - y_\Omega)\|_{L^\infty(\Omega)} < \frac{\eta}{2C_4}.$$
Summarizing, we have shown the following result.

Proposition 3.8. Let Assumption 1 hold and let the regularization parameters $\lambda,\kappa_1,\ldots,\kappa_m$ be positive. Suppose that $\bar\mu \in D_{ad}$ is a local solution to $(\hat P)$ and $\bar y = G(\bar\mu)$ denotes the associated optimal state. Let $\bar p \in Y$ be the corresponding Lagrange multiplier introduced in Proposition 3.3. If the residuum $\|\alpha_Q(\bar y - y_Q)\|_{L^\infty(Q)} + \|\alpha_\Omega(\bar y(T) - y_\Omega)\|_{L^\infty(\Omega)}$ is sufficiently small, then (3.10) is satisfied, i.e., $\bar\mu$ is a strict local minimizer for $(\hat P)$.

3.3. Representation of the Hessian. Next we derive an expression for the Hessian $\nabla^2\hat J(\mu) \in \mathbb{R}^{m\times m}$ for an arbitrary $\mu \in D_{ad}$; see [11], for instance. We have the identity
$$\hat J(\mu) = J(G(\mu),\mu) = L(G(\mu),\mu,p) \quad\text{for } p \in L^2(0,T;V).$$
Differentiating $\hat J$ in a direction $\mu^1 \in \mathbb{R}^m$ yields
$$\langle\nabla\hat J(\mu),\mu^1\rangle_{\mathbb{R}^m} = \hat J'(\mu)\mu^1 = L_y(G(\mu),\mu,p)\,G'(\mu)\mu^1 + L_\mu(G(\mu),\mu,p)\,\mu^1 = \langle L_y(G(\mu),\mu,p),G'(\mu)\mu^1\rangle_{Y',Y} + \langle\nabla_\mu L(G(\mu),\mu,p),\mu^1\rangle_{\mathbb{R}^m}.$$
For $\mu^1,\mu^2 \in \mathbb{R}^m$ we find
$$\langle\nabla^2\hat J(\mu)\mu^1,\mu^2\rangle_{\mathbb{R}^m} = \langle L_y(G(\mu),\mu,p),G''(\mu)(\mu^1,\mu^2)\rangle_{Y',Y} + \langle L_{yy}(G(\mu),\mu,p)G'(\mu)\mu^1,G'(\mu)\mu^2\rangle_{Y',Y} + \langle L_{y\mu}(G(\mu),\mu,p)\mu^1,G'(\mu)\mu^2\rangle_{Y',Y} + \langle L_{\mu y}(G(\mu),\mu,p)G'(\mu)\mu^1,\mu^2\rangle_{\mathbb{R}^m} + \langle L_{\mu\mu}(G(\mu),\mu,p)\mu^1,\mu^2\rangle_{\mathbb{R}^m}.$$
Now we choose for $p \in L^2(0,T;V)$ the solution $p(\mu)$ to the adjoint equation (3.5), i.e., for $p = p(\mu)$ we have $L_y(G(\mu),\mu,p(\mu)) = 0$ in $Y'$.

Hence, the term containing $G''(\mu)$ drops out and by rearranging the dual pairings we obtain
$$\langle\nabla^2\hat J(\mu)\mu^1,\mu^2\rangle_{\mathbb{R}^m} = \langle\big(G'(\mu)^\ast L_{yy}(G(\mu),\mu,p(\mu))G'(\mu) + G'(\mu)^\ast L_{y\mu}(G(\mu),\mu,p(\mu))\big)\mu^1,\mu^2\rangle_{\mathbb{R}^m} + \langle\big(L_{\mu y}(G(\mu),\mu,p(\mu))G'(\mu) + L_{\mu\mu}(G(\mu),\mu,p(\mu))\big)\mu^1,\mu^2\rangle_{\mathbb{R}^m},$$
where $G'(\mu)^\ast : Y' \to \mathbb{R}^m$ denotes the dual operator of $G'(\mu)$ satisfying
$$\langle G'(\mu)^\ast r,\mu\rangle_{\mathbb{R}^m} = \langle r,G'(\mu)\mu\rangle_{Y',Y} \quad\text{for all } r \in Y' \text{ and } \mu \in \mathbb{R}^m.$$
Consequently, the second derivative of the reduced cost functional $\hat J$ reads as follows:
$$\nabla^2\hat J(\mu) = G'(\mu)^\ast L_{yy}(y(\mu),\mu,p(\mu))\,G'(\mu) + G'(\mu)^\ast L_{y\mu}(y(\mu),\mu,p(\mu)) + L_{\mu y}(y(\mu),\mu,p(\mu))\,G'(\mu) + L_{\mu\mu}(y(\mu),\mu,p(\mu)). \tag{3.11}$$
This can be formulated in the following way:
$$\nabla^2\hat J(\mu) = T(\mu)^\ast\,L_{xx}(G(\mu),\mu,p(\mu))\,T(\mu), \quad x = (y,\mu), \tag{3.12}$$
with the operator
$$T(\mu) = \begin{pmatrix} G'(\mu) \\ I \end{pmatrix} \in \mathcal{L}(\mathbb{R}^m,\,Y\times\mathbb{R}^m),$$
the dual operator
$$T(\mu)^\ast = \begin{pmatrix} G'(\mu)^\ast & I \end{pmatrix} \in \mathcal{L}(Y'\times\mathbb{R}^m,\,\mathbb{R}^m)$$
and the second derivative
$$L_{xx} = \begin{pmatrix} L_{yy} & L_{y\mu} \\ L_{\mu y} & L_{\mu\mu} \end{pmatrix} \in \mathcal{L}(Y\times\mathbb{R}^m,\,Y'\times\mathbb{R}^m).$$
Here, $\mathcal{L}(\mathbb{R}^m,Y\times\mathbb{R}^m)$ denotes the Banach space of all linear and bounded operators from $\mathbb{R}^m$ to $Y\times\mathbb{R}^m$ endowed with the usual operator norm, and the mapping $I \in \mathcal{L}(\mathbb{R}^m)$ is the identity in $\mathbb{R}^m$. Throughout the paper we utilize the notation
$$\nabla^2\hat J(\mu)(\mu^1,\mu^2) = \langle\nabla^2\hat J(\mu)\mu^1,\mu^2\rangle_{\mathbb{R}^m} = (\mu^2)^\top\nabla^2\hat J(\mu)\,\mu^1, \quad \mu^1,\mu^2 \in \mathbb{R}^m,$$
for the Hessian of the reduced cost functional.

Remark 3.9. By this approach we do not use the Hessian representation (3.12) to set up the Hessian matrix explicitly. In fact, we compute just the action of the operator $\nabla^2\hat J(\hat\mu)$ at $\hat\mu \in D_{ad}$ on a direction $\mu \in \mathbb{R}^m$ by applying the operator components of (3.12) consecutively. Therefore, for given $\mu \in \mathbb{R}^m$, we first have to solve the linearized state equation
$$y_t - \Delta y + \Big(\sum_{i=1}^m \hat\mu_i b_i\Big)\cosh\Big(\hat y\sum_{j=1}^m \hat\mu_j b_j\Big)\,y = -\hat y\cosh\Big(\hat y\sum_{j=1}^m \hat\mu_j b_j\Big)\sum_{i=1}^m \mu_i b_i \text{ in } Q, \qquad \frac{\partial y}{\partial n} = 0 \text{ on } \Sigma, \qquad y(0) = 0 \text{ in } \Omega \tag{3.13}$$
with $\hat y = y(\hat\mu) = G(\hat\mu)$. Let $\hat p$ be the weak solution to (3.9) for $y = \hat y$ and $\mu = \hat\mu$. We set $\hat x = (\hat y,\hat\mu)$. In the next step, we compute
$$L_{xx}(\hat x,\hat p)(y,\mu) = \begin{pmatrix} L_{yy}(\hat x,\hat p)y + L_{y\mu}(\hat x,\hat p)\mu \\ L_{\mu y}(\hat x,\hat p)y + L_{\mu\mu}(\hat x,\hat p)\mu \end{pmatrix} = \begin{pmatrix} h_1 \\ h_2 \end{pmatrix} \in Y'\times\mathbb{R}^m$$
and apply the operator $T(\hat\mu)^\ast$ to the result, i.e.,
$$T(\hat\mu)^\ast\begin{pmatrix} h_1 \\ h_2 \end{pmatrix} = G'(\hat\mu)^\ast h_1 + h_2 = h_3 + h_4.$$
While for the second component $h_4$ of the result we obtain immediately $h_4 = h_2$, since $I$ is the identity in $\mathbb{R}^m$, the situation for the first component $h_3$ is a bit more involved: we introduce the temporary variable $\tilde h_3 = (\tilde h_3^1,\tilde h_3^2) \in L^2(0,T;V)\times H$ and solve the adjoint equation
$$-(\tilde h_3^1)_t - \Delta\tilde h_3^1 + \Big(\sum_{i=1}^m \hat\mu_i b_i\Big)\cosh\Big(\hat y\sum_{i=1}^m \hat\mu_i b_i\Big)\,\tilde h_3^1 = \alpha_Q\,y + \Big(\sum_{i=1}^m \hat\mu_i b_i\Big)^2\sinh\Big(\hat y\sum_{i=1}^m \hat\mu_i b_i\Big)\,y\,\hat p + \Big(\hat y\sum_{i=1}^m \hat\mu_i b_i\,\sinh\Big(\hat y\sum_{i=1}^m \hat\mu_i b_i\Big) + \cosh\Big(\hat y\sum_{i=1}^m \hat\mu_i b_i\Big)\Big)\Big(\sum_{i=1}^m \mu_i b_i\Big)\hat p \text{ in } Q,$$
$$\frac{\partial\tilde h_3^1}{\partial n} = 0 \text{ on } \Sigma, \qquad \tilde h_3^1(T) = \alpha_\Omega\,y(T) \text{ in } \Omega,$$
and set $\tilde h_3^2 = \tilde h_3^1(0)$. Then we can compute the entries of $h_3$ in the following way:
$$h_{3,i} = -\int_0^T\int_\Omega \hat y(t)\cosh\Big(\hat y(t)\sum_{j=1}^m \hat\mu_j b_j\Big)\,b_i\,\tilde h_3^1(t)\,dx\,dt \quad\text{for } i = 1,\ldots,m.$$
The entries of $h_4$ are given as
$$h_{4,j} = \int_0^T\int_\Omega \Big(y\,\hat y\,b_j\sum_{i=1}^m \hat\mu_i b_i\,\sinh\Big(\hat y\sum_{i=1}^m \hat\mu_i b_i\Big) + y\,b_j\cosh\Big(\hat y\sum_{i=1}^m \hat\mu_i b_i\Big)\Big)\hat p\,dx\,dt + \lambda\kappa_j\mu_j + \int_0^T\int_\Omega \hat y^2\,b_j\Big(\sum_{i=1}^m \mu_i b_i\Big)\sinh\Big(\hat y\sum_{i=1}^m \hat\mu_i b_i\Big)\,\hat p\,dx\,dt$$
for $j = 1,\ldots,m$. Finally, as a result we obtain $\nabla^2\hat J(\hat\mu)\mu = h_3 + h_4$.
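Since only Hessian-vector products are needed, the Newton system of the optimization method can be solved matrix-free. The following sketch wraps a hypothetical routine hess_vec — assumed to carry out the three steps of Remark 3.9 (linearized state solve (3.13), evaluation of L_xx and application of T(µ̂)*) — in a SciPy LinearOperator and applies conjugate gradients; it illustrates the idea rather than reproducing the authors' Matlab code.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def newton_cg_direction(hess_vec, mu_hat, grad):
    """Solve  (grad^2 J) d = -grad J  matrix-free, where
    hess_vec(mu_hat, v) returns the Hessian action of Remark 3.9
    on a direction v (a user-supplied placeholder routine)."""
    m = grad.size
    H = LinearOperator((m, m), matvec=lambda v: hess_vec(mu_hat, v))
    d, info = cg(H, -grad)             # CG on the Newton system
    if info != 0:                      # CG failed: fall back to a
        d = -grad                      # steepest-descent direction
    return d
```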

4. A-posteriori error estimates

In this section we introduce the main idea underlying our a-posteriori error analysis for nonlinear optimal control problems: supposing that $\tilde\mu$ is an arbitrary element of the admissible parameter set $D_{ad}$, our aim is to estimate the difference $\|\tilde\mu - \bar\mu\|_2$ without knowing the optimal solution $\bar\mu$. The associated idea is not new and was used, for instance, in the context of error estimates for the optimal control of ODEs by Malanowski et al. [21]. An a-posteriori error estimate for linear-quadratic optimal control problems of PDEs under application of proper orthogonal decomposition as a model order reduction technique was investigated in [27] and extended to some nonlinear cases in [15]. We briefly recall the basic idea: If $\tilde\mu \ne \bar\mu$, i.e., $\tilde\mu$ is not an optimal solution, then $\tilde\mu$ does not satisfy the necessary optimality condition (3.8). Nevertheless, there exists a function $\zeta = (\zeta_1,\ldots,\zeta_m)$

$\in \mathbb{R}^m$ such that
$$\sum_{i=1}^m\Big(\lambda\kappa_i(\tilde\mu_i - \mu_i^\circ) + \int_0^T\int_\Omega \cosh\Big(\tilde y\sum_{j=1}^m \tilde\mu_j b_j\Big)\,b_i\,\tilde y\,\tilde p\,dx\,dt + \zeta_i\Big)(\mu_i - \tilde\mu_i) \ge 0 \tag{4.1}$$
is fulfilled for all $\mu \in D_{ad}$, i.e., $\tilde\mu$ satisfies the optimality condition of a perturbed semilinear parabolic optimal control problem with perturbation $\zeta$. In (4.1) we have $\tilde y = G(\tilde\mu)$, and $\tilde p$ solves the adjoint equation for the parameter $\tilde\mu$ and the associated state $\tilde y$. The smaller $\zeta$ is, the closer $\tilde\mu$ is to the optimal parameter $\bar\mu$. An estimate for the distance $\|\tilde\mu - \bar\mu\|_2$ in terms of the perturbation $\zeta$ for linear-quadratic optimal control problems is achieved in [27, Theorem 3.1], while an estimate for nonlinear problems is derived in [15, Theorem 2.5]. For the latter case the situation is more complicated, and one has to put in more effort to determine a suitable estimate. This is due to the fact that some second-order information on $\bar\mu$ is needed. Assume that there exists some constant $\delta > 0$ such that the coercivity condition
$$\nabla^2\hat J(\bar\mu)(\mu,\mu) \ge \delta\,\|\mu\|_2^2 \quad\text{for all } \mu \in \mathbb{R}^m \tag{4.2}$$
is satisfied. Then for any $0 < \delta' < \delta$ there exists a radius $\rho(\delta') > 0$ such that
$$\nabla^2\hat J(\mu)(v,v) \ge \delta'\,\|v\|_2^2 \quad\text{for all } \mu \text{ with } \|\mu - \bar\mu\|_2 \le \rho(\delta') \text{ and for all } v \in \mathbb{R}^m. \tag{4.3}$$
Since we are interested in the order of the error, we follow the proposal in [15, Remark 2.3], select $\delta' := \delta/2$ and set $\rho := \rho(\delta/2)$. If $\tilde\mu$ belongs to this neighborhood, we can estimate the distance in the following way:
$$\|\tilde\mu - \bar\mu\|_2 \le \frac{2}{\delta}\,\|\zeta\|_2. \tag{4.4}$$
For a proof we refer to [15]. We proceed by constructing the function $\zeta$. Suppose that we have $\tilde\mu$ and the associated adjoint state $\tilde p$. The goal is to determine $\zeta \in \mathbb{R}^m$ satisfying the perturbed variational inequality (4.1). This is fulfilled by defining $\zeta$ in the following way:
$$\zeta_i := \begin{cases} -\Big[\lambda\kappa_i(\tilde\mu_i - \mu_i^\circ) + \int_0^T\int_\Omega d_{\mu_i}(\cdot,\tilde y,\tilde\mu)\,\tilde p\,dx\,dt\Big]_- & \text{if } \tilde\mu_i = \mu_a^i, \\[4pt] -\Big(\lambda\kappa_i(\tilde\mu_i - \mu_i^\circ) + \int_0^T\int_\Omega d_{\mu_i}(\cdot,\tilde y,\tilde\mu)\,\tilde p\,dx\,dt\Big) & \text{if } \mu_a^i < \tilde\mu_i < \mu_b^i, \\[4pt] -\Big[\lambda\kappa_i(\tilde\mu_i - \mu_i^\circ) + \int_0^T\int_\Omega d_{\mu_i}(\cdot,\tilde y,\tilde\mu)\,\tilde p\,dx\,dt\Big]_+ & \text{if } \tilde\mu_i = \mu_b^i \end{cases} \tag{4.5}$$
for $i = 1,\ldots,m$. Here $[s]_- = \min(0,s)$ denotes the negative part function and $[s]_+ = \max(0,s)$ denotes the positive part function.
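A direct transcription of (4.5) into code is short. The sketch below assumes that grad already contains the components λκ_i(µ̃_i − µ°_i) + ∫_Q d_{µ_i} p̃ dx dt (computed, e.g., as in the gradient routine of Section 3.1) and that an approximation σ_min of the coercivity constant is available, as discussed next; it returns ζ together with the bound from (4.4)/(4.6).

```python
import numpy as np

def perturbation_and_estimate(mu, grad, mu_a, mu_b, sigma_min, tol=1e-12):
    """Evaluate the perturbation zeta from (4.5) and the a-posteriori
    bound (2/sigma_min)*||zeta|| for a suboptimal parameter mu."""
    zeta = -grad.copy()                      # interior case: zeta_i = -grad_i
    lower = np.abs(mu - mu_a) <= tol         # active lower bound
    upper = np.abs(mu_b - mu) <= tol         # active upper bound
    zeta[lower] = -np.minimum(grad[lower], 0.0)   # -[.]_- at mu_i = mu_a^i
    zeta[upper] = -np.maximum(grad[upper], 0.0)   # -[.]_+ at mu_i = mu_b^i
    return zeta, 2.0 / sigma_min * np.linalg.norm(zeta)
```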

Next, we need an approximation of the coercivity constant $\delta$. For this reason, the approximation of the Hessian $\nabla^2\hat J$ associated with the suboptimal parameter $\tilde\mu$ is taken into account. We have to assume that $\nabla^2\hat J(\tilde\mu)$ is positive definite. Let $\sigma_{\min}$ be the smallest eigenvalue of $\nabla^2\hat J(\tilde\mu)$. Then there holds
$$\mu^\top\nabla^2\hat J(\tilde\mu)\,\mu \ge \sigma_{\min}\,\|\mu\|_2^2 \quad\text{for all } \mu \in \mathbb{R}^m.$$
Hence, if the control problem behaves well around $\tilde\mu$, the coercivity constant $\delta$ can be approximated by $\sigma_{\min}$. Assuming that $\sigma_{\min} \le \delta$ holds, we can deduce that the distance of $\tilde\mu$ to the unknown locally optimal parameter $\bar\mu$ can be estimated by
$$\|\tilde\mu - \bar\mu\|_2 \le \frac{2}{\sigma_{\min}}\,\|\zeta\|_2. \tag{4.6}$$
We will call (4.6) an a-posteriori error estimate since, in the next section, we shall apply it to suboptimal solutions $\tilde\mu$ that have already been computed from a POD model. After having computed $\tilde\mu$, we determine the associated state $\tilde y$ and the Lagrange multiplier $\tilde p$. Then, we can determine $\zeta$ as well as its Euclidean norm, and (4.6) gives an upper bound for the distance of $\tilde\mu$ to $\bar\mu$. In this way, the error caused by the POD approximation can be estimated a-posteriori. If the error is too large, then we have to improve the POD basis, e.g., by including more POD basis functions in our Galerkin ansatz.

5. The POD Galerkin method

Let $\mu \in D_{ad}$ be chosen arbitrarily and $y = G(\mu)$. We denote by $X$ either the Hilbert space $V$ or $H$. For $\ell \in \mathbb{N}$ we consider the minimization problem
$$\min_{\psi_1,\ldots,\psi_\ell \in X} \int_0^T \Big\|y(t) - \sum_{i=1}^\ell \langle y(t),\psi_i\rangle_X\,\psi_i\Big\|_X^2\,dt \quad\text{s.t.}\quad \langle\psi_i,\psi_j\rangle_X = \delta_{ij} \text{ for } 1 \le i,j \le \ell. \tag{5.1}$$
A solution $\{\psi_i\}_{i=1}^\ell$ to (5.1) is called a POD basis of rank $\ell$. We introduce the integral operator $R : X \to X$ as
$$R\psi = \int_0^T \langle y(t),\psi\rangle_X\,y(t)\,dt \quad\text{for } \psi \in X,$$
which is a linear, compact, self-adjoint and nonnegative operator; see, e.g., [13, 10]. Hence, there exists a complete set $\{\psi_i\}_{i\in\mathbb{N}} \subset X$ of eigenfunctions and associated eigenvalues $\{\lambda_i\}_{i\in\mathbb{N}}$ satisfying
$$R\psi_i = \lambda_i\psi_i \text{ for } i = 1,2,\ldots \quad\text{and}\quad \lambda_1 \ge \lambda_2 \ge \ldots \ge 0 \text{ with } \lim_{i\to\infty}\lambda_i = 0.$$
It is proved in [13, 10], for instance, that the first $\ell$ eigenfunctions $\{\psi_i\}_{i=1}^\ell$ solve (5.1) and that
$$\int_0^T \Big\|y(t) - \sum_{i=1}^\ell \langle y(t),\psi_i\rangle_X\,\psi_i\Big\|_X^2\,dt = \sum_{i=\ell+1}^\infty \lambda_i \tag{5.2}$$
holds.
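Numerically, the eigenvalue problem for R is usually solved via the method of snapshots or, equivalently, a singular value decomposition of the snapshot matrix. The following sketch computes a rank-ℓ POD basis from snapshots; the optional weight matrix W realizes the X-inner product (e.g., a mass or stiffness matrix), and temporal quadrature weights are omitted for brevity — an illustrative simplification, not the authors' exact setup.

```python
import numpy as np

def pod_basis(Y, ell, W=None):
    """Compute a POD basis of rank ell from a snapshot matrix.

    Y : (n_x, n_t) snapshot matrix, columns y(t_k)
    W : optional SPD matrix inducing the X-inner product; identity if None
    Returns Psi with Psi^T W Psi = I and the leading eigenvalues lambda_i
    (assumes ell is below the numerical rank of Y)."""
    if W is None:
        U, s, _ = np.linalg.svd(Y, full_matrices=False)
        return U[:, :ell], (s**2)[:ell]
    # method of snapshots: eigendecomposition of the (n_t x n_t) Gramian
    K = Y.T @ (W @ Y)
    lam, V = np.linalg.eigh(K)
    lam, V = np.maximum(lam[::-1], 0.0), V[:, ::-1]   # descending order
    Psi = Y @ V[:, :ell] / np.sqrt(lam[:ell])
    return Psi, lam[:ell]
```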

Suppose that for given $\mu \in D_{ad}$ we have determined a POD basis $\{\psi_i\}_{i=1}^\ell$ of rank $\ell$. We define the $X$-orthogonal projection operator
$$P^\ell\varphi = \sum_{i=1}^\ell \langle\varphi,\psi_i\rangle_X\,\psi_i \quad\text{for } \varphi \in X.$$
Then a POD Galerkin scheme for (2.1b)-(2.1d) is given as follows: $y^\ell(t) = \sum_{j=1}^\ell y_j^\ell(t)\,\psi_j$, $t \in [0,T]$ a.e., solves
$$\langle y_t^\ell(t),\psi_i\rangle_{V',V} + \int_\Omega \nabla y^\ell(t)\cdot\nabla\psi_i + d(\cdot,y^\ell(t),\mu)\,\psi_i\,dx = \int_\Omega f(t)\,\psi_i\,dx \tag{5.3a}$$
for $1 \le i \le \ell$ and f.a.a. $t \in [0,T]$, and
$$y^\ell(0) = P^\ell y_0. \tag{5.3b}$$

Remark 5.1. The numerical evaluation of the nonlinear terms $\int_\Omega d(\cdot,y^\ell(t),\mu)\,\psi_i\,dx$, $1 \le i \le \ell$, is expensive, so that we apply the empirical interpolation method (EIM) [9, 22] in our numerical experiments. For an easier presentation, we describe the POD Galerkin scheme without EIM. We also refer to [19] for more details.

For a proof of the next proposition we refer the reader to the Appendix.

Proposition 5.2. Let Assumption 1(a) be satisfied. Then (5.3) has a unique solution for every $\mu \in D_{ad}$.

Next we present an a-priori error estimate for the POD Galerkin scheme, which is proved in the Appendix.

Proposition 5.3. Suppose that Assumption 1(a) holds. For $\mu \in D_{ad}$ let $y$ and $y^\ell$ be the solutions to (2.3) and (5.3), respectively. Then there exists a constant $C > 0$ such that
$$\int_0^T \|y(t) - y^\ell(t)\|_X^2\,dt \le C\Big(\sum_{i=\ell+1}^\infty \lambda_i + \int_0^T \|P^\ell y_t(t) - y_t(t)\|_V^2\,dt\Big), \tag{5.4}$$
where we take the Hilbert space $X = V$ in (5.1).

Analogously to the state equation we determine a POD basis for the adjoint equation. Let $\mu \in D_{ad}$ and $y = G(\mu)$. Suppose that $p$ is the weak solution to the adjoint equation
$$-p_t - \Delta p + d_y(y,\mu)\,p = \alpha_Q(y_Q - y) \text{ in } Q, \qquad \frac{\partial p}{\partial n} = 0 \text{ on } \Sigma, \qquad p(T) = \alpha_\Omega\big(y_\Omega - y(T)\big) \text{ in } \Omega.$$
Then, for $\bar\ell \in \mathbb{N}$ the POD basis $\{\phi_i\}_{i=1}^{\bar\ell}$ of rank $\bar\ell$ for the adjoint variable is the solution to
$$\min_{\phi_1,\ldots,\phi_{\bar\ell} \in X} \int_0^T \Big\|p(t) - \sum_{i=1}^{\bar\ell} \langle p(t),\phi_i\rangle_X\,\phi_i\Big\|_X^2\,dt \quad\text{s.t.}\quad \langle\phi_i,\phi_j\rangle_X = \delta_{ij} \text{ for } 1 \le i,j \le \bar\ell.$$

Remark 5.4. The POD Galerkin scheme for the adjoint equation is introduced in a similar manner as for the state equation. From the property $d_y(x,y,\mu) \ge 0$ f.a.a. $x \in \Omega$ and for all $(y,\mu) \in \mathbb{R}\times D_{ad}$ we can derive a-priori bounds for the solution $p^{\bar\ell}$ to the POD Galerkin scheme for the adjoint equation. Furthermore, the $L^2(0,T;X)$-norm of the difference $p - p^{\bar\ell}$ is bounded by the sum over the eigenvalues of the neglected eigenfunctions as well as by norms of the differences $y - y^\ell$ and $p_t - \bar P^{\bar\ell} p_t$, where
$$\bar P^{\bar\ell}\varphi = \sum_{i=1}^{\bar\ell} \langle\varphi,\phi_i\rangle_V\,\phi_i, \quad \varphi \in V,$$
is the orthogonal projection from $V$ onto the finite-dimensional subspace $V^{\bar\ell} = \mathrm{span}\,\{\phi_1,\ldots,\phi_{\bar\ell}\}$. For more details we refer the reader to [12, 27].
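Once a basis Ψ is available, the reduced-order model is obtained by Galerkin projection of the FE system. A minimal sketch for the linear terms follows (the nonlinearity would be approximated by EIM as in Remark 5.1, which is not reproduced here); all names are assumptions for a generic FE discretization.

```python
import numpy as np

def assemble_rom(Psi, M, A, F, y0):
    """Galerkin projection of the FE system onto the POD subspace.

    Psi : (n_x, ell) POD basis, assumed M-orthonormal
    M, A : FE mass and stiffness matrices
    F    : (n_t, n_x) FE load vectors in time
    """
    M_r = Psi.T @ (M @ Psi)           # reduced mass matrix (identity if
    A_r = Psi.T @ (A @ Psi)           # Psi is exactly M-orthonormal)
    F_r = F @ Psi                     # reduced loads, shape (n_t, ell)
    y0_r = Psi.T @ (M @ y0)           # coefficients of P^l y_0
    return M_r, A_r, F_r, y0_r
```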

6. Implementation

In this section we state the algorithms for solving the optimization problem (2.1) and for computing the a-posteriori error estimator.

6.1. The adaptive POD-OPT algorithm. For solving the optimization problem (2.1) we implemented an adaptive optimization algorithm using POD (Algorithm 1).

Algorithm 1 Adaptive POD-OPT algorithm (adaptive optimization algorithm using POD)
Input: µ⁰, ε, ℓ, ε_POD, σ
Output: µ*, y*
1: k ← 0
2: y^k ← solve state equation for µ^k using FEM
3: p^k ← solve adjoint equation for µ^k and y^k using FEM
4: {ψ_j}_{j=1}^ℓ ← compute POD basis from snapshots [y^k, p^k]
5: ∇J^k ← evaluate reduced gradient for µ^k
6: while ‖∇J^k‖ > ε do
7:   d^k ← compute search direction using a Newton-CG method
8:   µ^{k+1} ← µ^k + d^k
9:   y^{k+1} ← solve state equation for µ^{k+1} using ROM
10:  ρ ← evaluate error indicator for y^{k+1}
11:  if ρ > ε_POD then
12:    y^{k+1} ← solve state equation for µ^{k+1} using FEM
13:    p^{k+1} ← solve adjoint equation for µ^{k+1} and y^{k+1} using FEM
14:    {ψ_j}_{j=1}^ℓ ← compute POD basis from new snapshots [y^{k+1}, p^{k+1}]
15:  end if
16:  (µ^{k+1}, y^{k+1}, {ψ_j}_{j=1}^ℓ) ← Algorithm 2: update control, state and POD basis if the sufficient decrease condition is not fulfilled
17:  p^{k+1} ← solve adjoint equation for µ^{k+1} and y^{k+1} using ROM
18:  ∇J^{k+1} ← evaluate reduced gradient for µ^{k+1}
19:  k ← k + 1
20: end while
21: µ* ← µ^k
22: y* ← y^k

To obtain a reduced-order model (ROM) for the state equation (2.1b)-(2.1d) and the adjoint equation (3.5) we solve them once using a finite element method (FEM) and then utilize a POD Galerkin scheme. When the parameter µ is updated, an error indicator ρ is evaluated for the solution y (Algorithm 1, line 10). If this error indicator is too large (Algorithm 1, line 11), the algorithm initiates an update of the POD basis. The same strategy is applied in the Armijo backtracking (Algorithm 2). Note that for y and p a combined POD basis {ψ_j} is used. This strategy proves effective since both variables y and p are present in the adjoint equation. The number of POD basis functions is denoted by ℓ. As error indicator the residual can be used. The residuals are computed by inserting the solution obtained by the ROM into the original problem discretized by the FEM; a sketch is given below. This estimates how good the solution is compared to a FEM discretization. Hence a decision can be made whether to trust the solution of the ROM or to recompute the solution for a given µ using the FEM and update the POD basis.
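A possible realization of this residual indicator — a sketch under the assumption of an implicit Euler FE discretization with mass matrix M, stiffness matrix A and load vectors F, and a nodal (lumped) treatment of the nonlinearity — reads:

```python
import numpy as np

def residual_indicator(y_rom, M, A, F, dt, mu, B):
    """Insert the ROM trajectory, prolongated to the FE space, into the
    implicit-Euler FE system and accumulate the residual norms.

    y_rom : (n_t, n_x) prolongated ROM trajectory
    M, A  : FE mass and stiffness matrices
    F     : (n_t, n_x) FE load vectors in time
    B     : (m, n_x) shape functions b_i at the FE nodes
    """
    mb = mu @ B                                   # sum_i mu_i b_i at the nodes
    rho = 0.0
    for k in range(1, y_rom.shape[0]):
        y_new, y_old = y_rom[k], y_rom[k - 1]
        nonlin = M @ np.sinh(y_new * mb)          # lumped nonlinearity
        res = M @ (y_new - y_old) / dt + A @ y_new + nonlin - F[k]
        rho += dt * np.linalg.norm(res)**2
    return np.sqrt(rho)
```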

Algorithm 2 Modified line search with Armijo backtracking strategy
Input: y^{k+1}, µ^{k+1}, {ψ_j}_{j=1}^ℓ, µ^k, d^k, ℓ, ε_POD, τ, σ
Output: y^{k+1}, µ^{k+1}, {ψ_j}_{j=1}^ℓ
1: α ← 1
2: while J(µ^{k+1}) > J(µ^k) + ασ⟨∇J^k, d^k⟩ do
3:   α ← α/2
4:   µ^{k+1} ← µ^k + αd^k
5:   y^{k+1} ← solve state equation for µ^{k+1} using ROM
6:   ρ ← evaluate error indicator for y^{k+1}
7:   if ρ > ε_POD then
8:     y^{k+1} ← solve state equation for µ^{k+1} using FEM
9:     p^{k+1} ← solve adjoint equation for µ^{k+1} and y^{k+1} using FEM
10:    {ψ_j}_{j=1}^ℓ ← compute POD basis from new snapshots [y^{k+1}, p^{k+1}]
11:  end if
12: end while

Since we are considering a nonlinear problem, we utilize EIM [9, 22] to obtain an efficient reduced-order model. In this case we can utilize a more efficient error estimator that does not require the assembling of the nonlinear system matrix. For details we refer the reader to [9, Proposition 4.1]. Note that whenever the state equation is solved with the FEM, the basis for the EIM is updated as well. The search direction d^k in line 7 is computed by using a Newton-CG method (see, e.g., [23, p. 169] or [16, p. 3]) solving
$$\nabla^2 J^k\,d^k = -\nabla J^k,$$
where ∇²J^k is the Hessian representation of the cost functional J at iteration k. For the implementation we make use of the fact that, as described in Section 3.3, Remark 3.9, we do not need to set up the Hessian matrix explicitly, but can compute the action of the Hessian representation on a vector.

Algorithm 1 has the advantage that over the course of the optimization iterations the POD basis is improved and tailored to the optimization problem. On the other hand, if the problem is not suited for a model order reduction method, the algorithm uses only FE solutions. Hence convergence is guaranteed for the algorithm. As a remark, setting ε_POD to infinity disables the adaptivity and we end up with an optimization algorithm that runs fully on the ROM.

6.2. The a-posteriori error estimator. In Algorithm 3 we give a short insight into how the a-posteriori error estimation is implemented. To evaluate the perturbation function ζ (cp. (4.5)), full system solves for the state equation and for the adjoint equation using the FEM have to be done. For the computation of the smallest eigenvalue of the Hessian we will consider different numerical approaches, which will be introduced in the next section.

Algorithm 3 A-posteriori error estimator
Input: µ̃
Output: ε_ape
1: y ← solve state equation for µ̃ using FEM
2: p ← solve adjoint equation for y using FEM
3: ζ ← evaluate ζ concerning µ̃, y and p
4: σ_min ← compute smallest eigenvalue of ∇²Ĵ(µ̃)
5: ε_ape ← (2/σ_min)‖ζ‖₂

7. Numerical experiments

Since the application of POD for the order reduction of high-dimensional models lacks a strict a-priori error estimation, at least an efficient a-posteriori error estimator would be appreciated. While the knowledge of rigorous a-priori error bounds could prevent one from performing an expensive computation step, for the a-posteriori error analysis this computation step has already been done. Especially for that reason, this error estimation has to be efficient concerning computational time and memory and, of course, accuracy. In the following we denote by CPU time the time measured with the Matlab function cputime, while SW time denotes the so-called stop watch time that is provided by the Matlab functions tic and toc.

7.1. Settings. For the numerical results we implemented the semilinear parabolic optimal control problem (2.1) for the 2D spatial domain Ω = (0,1) × (0,1) and the time interval [0,T] with T = 1. We influence the system dynamics by m ∈ ℕ space- and time-independent control parameters µ_i ∈ ℝ, i = 1,…,m, and define µ = (µ_1,…,µ_m) ∈ ℝ^m. The control parameters µ_i are distributed in Ω by nonnegative shape functions b_1,…,b_m ∈ L^∞(Ω). We choose the weighting parameters κ_i = 1, i = 1,…,m. The admissible set for the control is determined by D_ad = {µ ∈ ℝ^m | µ_a ≤ µ ≤ µ_b}. For solving the semilinear parabolic optimal control problem (2.1) numerically we applied a finite element method with standard piecewise linear ansatz functions in space and the implicit Euler scheme in time. The nonlinearity in the PDE is handled by a Newton method. We compare the optimization concerning the usage of the high-dimensional model (HDM), the standard reduced-order model (ROM) and the adaptive reduced-order model (ADAPT). While in the ROM approach a POD-EIM basis is fixed once at the beginning, the adaptive approach performs a basis update when a certain residual-based error estimation criterion is fulfilled.
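As an illustration of the time discretization just described, the following sketch advances the semilinear equation with the implicit Euler scheme and a Newton iteration per time step; the matrix names, tolerances and the nodal treatment of the nonlinearity are assumptions made for a self-contained example, not the authors' code.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def implicit_euler_newton(M, A, F, y0, dt, mb, tol=1e-10, max_it=20):
    """Implicit Euler for  M y' + A y + M sinh(y*mb) = F(t)  with a
    Newton solve per step (nodal treatment of the nonlinearity).

    M, A : scipy sparse FE mass and stiffness matrices
    F    : (n_t, n_x) load vectors in time
    mb   : (n_x,) nodal values of sum_i mu_i b_i
    """
    Y = [y0.copy()]
    y = y0.copy()
    for k in range(1, F.shape[0]):
        y_old, y = y, y.copy()            # previous step as Newton guess
        for _ in range(max_it):
            G = M @ (y - y_old) / dt + A @ y + M @ np.sinh(y * mb) - F[k]
            if np.linalg.norm(G) < tol:
                break
            Jac = M / dt + A + M @ sp.diags(mb * np.cosh(y * mb))
            y -= spsolve(Jac.tocsc(), G)
        Y.append(y.copy())
    return np.asarray(Y)
```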

7.2. Run 1 (validation of the error estimator). For a prediction of the quality of the error estimation, we validate our approach by considering an example where the exact optimal solution is known. The optimal and therefore also desired states in the cost functional (2.1a) are given by
$$\bar y(t,x) = y_Q(t,x) = \cos(4\pi x_1)\cos(4\pi t x_2), \qquad y_\Omega(x) = y_Q(T,x),$$
with weightings α_Q = α_Ω = 1. An insight into the characteristics of the desired state is shown for some time instances in Fig. 7.1.

[Figure 7.1. Run 1: Desired state solution y_Q(t,x) on Ω for different time points t.]

The desired control is set component-wise to
$$\mu_i^d = \sin\Big(\frac{i-1}{m-1}\,5\pi\Big) \quad\text{for all } i = 1,\ldots,m,$$
with regularization ν = 10⁻². We set the control bounds µ_a = 0 and µ_b = 1.5, aware of the fact that for this definition of D_ad the computed control cannot reach the desired µ^d exactly. The number of control parameters is set to m = 25. The shape functions are given by
$$b_i(x_1,x_2;r_i,c_i,\omega_i) = \max\Big(0,\ \big(r_i^2 - (x_1 - c_{i,1})^2 - (x_2 - c_{i,2})^2\big)\,\frac{\omega_i}{r_i^2}\Big),$$
with radius r_i ∈ ℝ₊, center point c_i ∈ ℝ² and a maximum-value scaling ω_i ∈ ℝ for all i = 1,…,m; see Fig. 7.2. The right-hand side f and the initial function y₀ in the semilinear parabolic differential equation (2.1b)-(2.1d) are set to
$$f(t,x) = -4\pi\cos(4\pi x_1)\sin(4\pi t x_2)\,x_2 + 16\pi^2\cos(4\pi x_1)\cos(4\pi t x_2)\,(1 + t^2) + \sinh\Big(y_Q(t,x)\sum_{i=1}^m \mu_i^d\,b_i(x_1,x_2)\Big)$$
and
$$y_0(x) = \cos(4\pi x_1).$$
We discretize the spatial domain by a total number of N_x = 261 discretization points and consider the time domain [0,T] with T = 1 at N_t = 2 discrete time points.
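For concreteness, the bump shape functions can be evaluated as in the following sketch. The 5×5 arrangement of the centers, the radius, the scaling and the grid size are illustrative assumptions consistent with the "equally distributed" layout of Fig. 7.2, not the exact values used in the experiments.

```python
import numpy as np

def bump(x1, x2, r, c, omega):
    """Paraboloid bump  b_i(x; r_i, c_i, omega_i)
       = max(0, (r_i^2 - |x - c_i|^2) * omega_i / r_i^2)."""
    return np.maximum(0.0, (r**2 - (x1 - c[0])**2 - (x2 - c[1])**2)
                           * omega / r**2)

# m = 25 bumps with centers on a regular 5x5 grid in (0,1)^2 (an assumption)
centers = [(i / 6.0, j / 6.0) for i in range(1, 6) for j in range(1, 6)]
x1, x2 = np.meshgrid(np.linspace(0, 1, 51), np.linspace(0, 1, 51))
B = [bump(x1, x2, r=0.1, c=c, omega=1.0) for c in centers]
```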

[Figure 7.2. Run 1: Equally distributed parameter control influence for m = 25 shape functions b_i.]

[Table 7.1. Run 1: Performance of the three methods HDM, ROM and ADAPT (rows: cost J, number of iterations, time per iteration [s], SW time [s], CPU time [s]).]

We set as initial guess for the control parameter a random value µ_i⁰ ∈ [0,1] for each component i = 1,…,m. This parameter control is also utilized for the initial POD-EIM basis computation (parameter µ_bc = µ⁰) in the case of ROM and ADAPT. An overview of the performance of the three methods can be found in Tab. 7.1. Both ROM and ADAPT lead here to a total speed-up factor of about 3 in optimization time, even though the average iteration time shows a speed-up factor of 5. This is due to the fact that initially, in the case of ROM and ADAPT, a computationally expensive FE snapshot solve for the state and the adjoint equation has to be performed for the POD and EIM basis computation, which has to be counted additionally. Although this computation can be used as a first iterate, it partially compensates the advantage in optimization time, especially when only few iterations are needed for the optimization.

As can be seen in Tab. 7.2 and in Figures 7.3 and 7.4, all three methods lead to good approximations of µ^d. Please note that µ^d was not exactly reachable, due to the control bounds. For the additional a-posteriori error estimation in the case of ROM, we compare the obtained error estimates for four different ways of computing the smallest eigenvalue; the four variants are listed after the following tables and figures.

[Table 7.2. Run 1: Quality of the HDM (µ_h), ROM (µ_l) and ADAPT (µ_A) approximations for the Newton-CG method, compared to µ^d: Euclidean norms ‖µ^d − µ_h‖, ‖µ^d − µ_l‖, ‖µ^d − µ_A‖, ‖µ_h − µ_l‖, ‖µ_h − µ_A‖ and ‖µ_l − µ_A‖.]

[Figure 7.3. Run 1: Comparing the solutions for µ, ROM: (a) control parameters µ_i^d, µ_i^h, µ_i^l; (b) errors |µ^d − µ_h| and |µ^d − µ_l|.]

[Figure 7.4. Run 1: Comparing the solutions for µ, ADAPT: (a) control parameters; (b) errors.]

(1) minLM: determining the minimum of all eigenvalues computed by the Matlab function eigs applied to the Hessian-vector computation, with option 'Largest Magnitude'.
(2) SM: computing only the minimum eigenvalue by the Matlab function eigs applied to the Hessian-vector computation wrapped in a linear equation solver, with option 'Smallest Magnitude'.
(3) Hess: setting up the full Hessian matrix by the Hessian-vector computation applied to a basis of ℝ^m and using Matlab's eig.
(4) Sens: setting up the full Hessian matrix via the sensitivity approach by computing the directional derivatives for a basis of ℝ^m (cf. [11]) and using Matlab's eig.

The times for solving the state and the adjoint equation with the FE method and for the evaluation of the perturbation function ζ are given in Tab. 7.3. The times and results for computing the smallest eigenvalue and the a-posteriori error estimator are presented in Tab. 7.4.

[Table 7.3. Run 1: Computation times (SW and CPU) for the FE state solver, the FE adjoint solver and the evaluation of the perturbation ζ.]

[Table 7.4. Run 1: A-posteriori error estimation for the four variants minLM, SM, Hess and Sens: SW and CPU times, σ_min, ‖ζ‖ and ε_ape = (2/σ_min)‖ζ‖.]
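In a Python/SciPy setting, variants (1) and (2) can be mimicked with ARPACK through eigsh, which — like Matlab's eigs — only needs Hessian-vector products. The routine names and the CG-based inverse in the second variant are sketches, not the authors' Matlab code.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh, cg

def sigma_min_lm(hess_vec, m, k=6):
    # variant (1): several largest-magnitude eigenvalues, take the minimum
    # (adequate if the Hessian is positive definite and well conditioned)
    H = LinearOperator((m, m), matvec=hess_vec)
    vals = eigsh(H, k=min(k, m - 1), which='LM', return_eigenvectors=False)
    return vals.min()

def sigma_min_sm(hess_vec, m):
    # variant (2): largest eigenvalue of H^{-1}, realized by CG solves,
    # equals 1/sigma_min for a positive definite Hessian
    H = LinearOperator((m, m), matvec=hess_vec)
    Hinv = LinearOperator((m, m), matvec=lambda b: cg(H, b)[0])
    vals = eigsh(Hinv, k=1, which='LM', return_eigenvectors=False)
    return 1.0 / vals.max()
```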

7.3. Run 2 (simplified battery model). This numerical example is inspired by a simplified version of the PDE systems arising in battery equations, which were handled in [8, 28] as a model for the lithium-ion concentrations. Let us also refer to [14], where a parameter identification problem is considered for a different battery model for the lithium-ion concentrations. The spatial discretization of Ω is set to N_x = 121 points, while the time domain [0,T] with T = 1 is discretized by N_t = 25 time points. The number of control parameters is set to m = 3. We consider the domain Ω to be partitioned into three equally sized subdomains Ω = Ω₁ ∪ Ω₂ ∪ Ω₃, where the control parameters µ_i, i = 1,…,m, are applied by the cuboid shape functions
$$b_i(x_1,x_2) = \begin{cases} 1 & \text{if } (x_1,x_2) \in \Omega_i, \\ 0 & \text{else}, \end{cases}$$
as shown in the following figure.

[Figure. Run 2: (a) partition of the domain; (b) cuboid shape functions for µ₁ = 1, µ₂ = 2 and µ₃ = 3.]

In the cost functional (2.1a) we set the desired control parameter vector µ° = (µ°₁, µ°₂, µ°₃). The regularization is ν = 1.0e−2 with weighting κ_i = 1 for i = 1,…,m, while the desired states are given by
$$y_Q(t,x) = \cos(2\pi x_1)\cos(2\pi t x_2) \qquad\text{and}\qquad y_\Omega(x) = \cos(2\pi x_1)\cos(2\pi T x_2),$$
with weightings α_Q = α_Ω = 1. An insight into the characteristics of the desired state solution in Q is shown for some time instances in Fig. 7.5. The right-hand side f and the initial function y₀ in the semilinear parabolic differential equation (2.1b)-(2.1d) are set to
$$f(t,x) = -2\pi\cos(2\pi x_1)\sin(2\pi t x_2)\,x_2 + 4\pi^2\cos(2\pi x_1)\cos(2\pi t x_2)\,(1+t^2) + \sinh\Big(y_Q(t,x)\sum_{i=1}^m \mu_i^f\,b_i(x_1,x_2)\Big)$$
with µ^f = (µ^f₁, µ^f₂, µ^f₃) = (4.0, 4.5, 5.0), and
$$y_0(x) = \cos(2\pi x_1).$$
For the admissible parameter set D_ad we fix the parameter control bounds at µ_a = 0 and µ_b = 7.0. We compare the optimization concerning the usage of the high-dimensional FE model (HDM), the standard reduced-order model (ROM) and the adaptive reduced-order model (ADAPT). In the case of the standard ROM, additionally the a-posteriori error estimator (APE) is taken into account. For the smallest-eigenvalue computation we apply only the sensitivity approach (Sens) introduced above, since for this setting it turned out to be the fastest without an observable loss in accuracy. While in the ROM approach the POD as well as the EIM basis is fixed once at the beginning, the adaptive approach performs basis updates when a certain residual-based error estimation criterion is fulfilled.

[Figure 7.5. Run 2: Desired state solution y_Q(t,x) on Ω for different time points t.]

We set as initial guess for the control parameters µ_i⁰ = 0, i = 1,…,m. The snapshots for generating the initial POD and EIM bases in the case of ROM and ADAPT are computed for the control parameters µ_i^bc = 3.0, i = 1,…,m. The POD basis range was set to 15 basis functions, while for the EIM approximation of the nonlinearity up to a tolerance of 1.0e−12 we obtain 144 basis functions. The numerical optimization was done by utilizing a projected Newton-CG approach; see [16], for instance. For ensuring numerical convergence and stability in the ROM case, one has to deal with the dynamics of the hyperbolic sine and cosine (see Fig. 7.6) arising in the state solver, where the nonlinear equation is solved by a Newton method. This is handled by introducing bounds for the argument of the hyperbolic sine; a sketch follows below. As we can see in Tab. 7.5, for the performance concerning optimization time we gain a factor of around 4 for ROM and a factor of around 3 for ADAPT, compared to the optimization utilizing the high-dimensional model (HDM).
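The clamping just mentioned can be as simple as the following guard around the nonlinearity in the ROM Newton solver; the concrete bound is an assumption chosen for illustration.

```python
import numpy as np

ARG_MAX = 30.0   # illustrative bound: sinh(30) is already about 5.3e12

def sinh_clamped(a):
    """Hyperbolic sine with clamped argument, guarding the ROM Newton
    iteration against overflow of sinh/cosh."""
    return np.sinh(np.clip(a, -ARG_MAX, ARG_MAX))
```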

[Figure 7.6. Run 2: Dynamics of the nonlinear functions sinh and cosh.]

[Table 7.5. Run 2: Performance of the three methods HDM, ROM and ADAPT (rows: cost J, number of iterations, OPT SW/CPU time [s] and, for ROM, APE SW/CPU time [s]).]

[Table 7.6. Run 2: Quality of the HDM (µ_h), ROM (µ_l) and ADAPT (µ_A) approximations, compared to µ^f.]

All three methods lead to a good approximation of the control parameter µ^f influencing the right-hand side of the semilinear parabolic PDE; see Tab. 7.6. Furthermore, we can state that the ROM as well as the ADAPT approach leads to good approximations of the optimal control parameters computed with the HDM. As can be seen in Tab. 7.7, the a-posteriori error estimator ε_ape gives a quite good and sharp upper bound on the norm of the distance µ_h − µ_l that we effectively have to accept when the HDM optimization is replaced by the faster ROM optimization. Despite the fact that we have to invest additional time after the optimization process for performing the a-posteriori error estimation, we finally end

[Table 7.7. Run 2: A-posteriori error estimator: norm of the perturbation ‖ζ‖, smallest eigenvalue σ_min and ε_ape := (2/σ_min)‖ζ‖.]

[Table 7.8. Run 2: Computational times (SW and CPU) for the evaluation of the a-posteriori error estimator: FE state solver, FE adjoint solver, perturbation ζ and smallest eigenvalue σ_min.]

up with a speed-up factor of 3 compared to the HDM optimization. Furthermore, in this case we catch up with our ADAPT approach. The single computation times for the a-posteriori error estimation can be found in Tab. 7.8.

8. Appendix: Proofs

Proof of Proposition 2.4. Suppose that $y_1 = G(\mu^1)$ and $y_2 = G(\mu^2)$. Then $y = y_1 - y_2$ is the weak solution to
$$y_t - \Delta y + \sinh\Big(y_1\sum_{i=1}^m \mu_i^1 b_i\Big) - \sinh\Big(y_2\sum_{i=1}^m \mu_i^2 b_i\Big) = 0 \text{ in } Q, \qquad \frac{\partial y}{\partial n} = 0 \text{ on } \Sigma, \qquad y(0) = 0 \text{ in } \Omega. \tag{A.1}$$
Writing
$$\sinh\Big(y_1\sum_{i=1}^m \mu_i^1 b_i\Big) - \sinh\Big(y_2\sum_{i=1}^m \mu_i^2 b_i\Big) = \Big[\sinh\Big(y_1\sum_{i=1}^m \mu_i^1 b_i\Big) - \sinh\Big(y_2\sum_{i=1}^m \mu_i^1 b_i\Big)\Big] + \Big[\sinh\Big(y_2\sum_{i=1}^m \mu_i^1 b_i\Big) - \sinh\Big(y_2\sum_{i=1}^m \mu_i^2 b_i\Big)\Big],$$
we apply the mean value theorem twice:
$$\sinh\Big(y_1\sum_{i=1}^m \mu_i^1 b_i\Big) - \sinh\Big(y_2\sum_{i=1}^m \mu_i^1 b_i\Big) = \int_0^1 \Big(\sum_{i=1}^m \mu_i^1 b_i\Big)\cosh\Big(\big(y_2 + sy\big)\sum_{i=1}^m \mu_i^1 b_i\Big)\,ds\;y,$$
$$\sinh\Big(y_2\sum_{i=1}^m \mu_i^1 b_i\Big) - \sinh\Big(y_2\sum_{i=1}^m \mu_i^2 b_i\Big) = \int_0^1 y_2\cosh\Big(y_2\sum_{i=1}^m \big(\mu_i^2 + s\mu_i^\delta\big)\,b_i\Big)\,ds\;\sum_{i=1}^m \mu_i^\delta b_i,$$
where we set $\mu^\delta = \mu^1 - \mu^2$.

We define the functions
$$\alpha(t,x) = \sum_{i=1}^m \mu_i^1 b_i(x)\int_0^1 \cosh\Big(\big(y_2(t,x) + s\,y(t,x)\big)\sum_{i=1}^m \mu_i^1 b_i(x)\Big)\,ds,$$
$$\beta(t,x) = y_2(t,x)\int_0^1 \cosh\Big(y_2(t,x)\sum_{i=1}^m \big(\mu_i^2 + s\mu_i^\delta\big)\,b_i(x)\Big)\,ds.$$
Then $\alpha,\beta \in L^\infty(Q)$ and $\alpha \ge 0$ in $Q$ a.e., reflecting that the nonlinearity is monotonically increasing. In particular,
$$\|\beta\|_{L^r(Q)}^r = \int_0^T\int_\Omega |\beta(t,x)|^r\,dx\,dt = \int_0^T\int_\Omega \Big|y_2(t,x)\int_0^1 \cosh\Big(y_2(t,x)\sum_{i=1}^m \big(\mu_i^2 + s\mu_i^\delta\big)\,b_i(x)\Big)\,ds\Big|^r\,dx\,dt \le \int_0^T\int_\Omega \Big(\|y_2\|_{L^\infty(Q)}\cosh\Big(m\,\max_{1\le i\le m}\big\{\mu_b^i\,\|b_i\|_{L^\infty(\Omega)}\big\}\,\|y_2\|_{L^\infty(Q)}\Big)\Big)^r\,dx\,dt.$$
From (2.4) we infer that there is a constant $C_1 > 0$, independent of $y_1,y_2$ and $\mu^1,\mu^2$, so that $\|\beta\|_{L^r(Q)} \le C_1$. Summarizing, we derive from (A.1)
$$y_t - \Delta y + \alpha(t,x)\,y = -\beta(t,x)\sum_{i=1}^m \mu_i^\delta b_i \text{ in } Q, \qquad \frac{\partial y}{\partial n} = 0 \text{ on } \Sigma, \qquad y(0) = 0 \text{ in } \Omega. \tag{A.2}$$
Due to [26, Theorem 5.5] there exists a unique solution to (A.2) and there is a constant $C_2 > 0$ independent of $\alpha$ so that
$$\|y_1 - y_2\|_Y = \|y\|_Y \le C_2\,\Big\|\beta\sum_{i=1}^m \mu_i^\delta b_i\Big\|_{L^r(Q)} \le C_1 C_2\,\Big\|\sum_{i=1}^m \mu_i^\delta b_i\Big\|_{L^\infty(\Omega)} \le C_1 C_2\sum_{i=1}^m \|b_i\|_{L^\infty(\Omega)}\,|\mu_i^\delta| \le C_3\,\|\mu^\delta\|_2 = C_3\,\|\mu^1 - \mu^2\|_2$$
with $C_3 = C_1 C_2\big(\sum_{i=1}^m \|b_i\|_{L^\infty(\Omega)}^2\big)^{1/2} > 0$ independent of $y_1,y_2$ and $\mu^1,\mu^2$. $\square$

Proof of Proposition 3.1. Let $\bar y = G(\mu)$ and $\tilde y = G(\mu + \mu^\delta)$. Then the difference $\tilde y - \bar y$ satisfies the parabolic problem
$$(\tilde y - \bar y)_t - \Delta(\tilde y - \bar y) + \sinh\Big(\tilde y\sum_{i=1}^m \big(\mu_i + \mu_i^\delta\big)\,b_i\Big) - \sinh\Big(\bar y\sum_{i=1}^m \mu_i b_i\Big) = 0 \text{ in } Q, \qquad \frac{\partial(\tilde y - \bar y)}{\partial n} = 0 \text{ on } \Sigma, \qquad (\tilde y - \bar y)(0) = 0 \text{ in } \Omega. \tag{A.3}$$
Notice that
$$\sinh\Big(\tilde y\sum_{i=1}^m \big(\mu_i + \mu_i^\delta\big)\,b_i\Big) - \sinh\Big(\bar y\sum_{i=1}^m \mu_i b_i\Big) = \Big[\sinh\Big(\tilde y\sum_{i=1}^m \big(\mu_i + \mu_i^\delta\big)\,b_i\Big) - \sinh\Big(\tilde y\sum_{i=1}^m \mu_i b_i\Big)\Big] + \Big[\sinh\Big(\tilde y\sum_{i=1}^m \mu_i b_i\Big) - \sinh\Big(\bar y\sum_{i=1}^m \mu_i b_i\Big)\Big]. \tag{A.4}$$


i=1 α i. Given an m-times continuously 1 Fundamentals 1.1 Classification and characteristics Let Ω R d, d N, d 2, be an open set and α = (α 1,, α d ) T N d 0, N 0 := N {0}, a multiindex with α := d i=1 α i. Given an m-times continuously differentiable

More information

K. Krumbiegel I. Neitzel A. Rösch

K. Krumbiegel I. Neitzel A. Rösch SUFFICIENT OPTIMALITY CONDITIONS FOR THE MOREAU-YOSIDA-TYPE REGULARIZATION CONCEPT APPLIED TO SEMILINEAR ELLIPTIC OPTIMAL CONTROL PROBLEMS WITH POINTWISE STATE CONSTRAINTS K. Krumbiegel I. Neitzel A. Rösch

More information

Second Order Elliptic PDE

Second Order Elliptic PDE Second Order Elliptic PDE T. Muthukumar tmk@iitk.ac.in December 16, 2014 Contents 1 A Quick Introduction to PDE 1 2 Classification of Second Order PDE 3 3 Linear Second Order Elliptic Operators 4 4 Periodic

More information

Unconstrained minimization of smooth functions

Unconstrained minimization of smooth functions Unconstrained minimization of smooth functions We want to solve min x R N f(x), where f is convex. In this section, we will assume that f is differentiable (so its gradient exists at every point), and

More information

Key words. preconditioned conjugate gradient method, saddle point problems, optimal control of PDEs, control and state constraints, multigrid method

Key words. preconditioned conjugate gradient method, saddle point problems, optimal control of PDEs, control and state constraints, multigrid method PRECONDITIONED CONJUGATE GRADIENT METHOD FOR OPTIMAL CONTROL PROBLEMS WITH CONTROL AND STATE CONSTRAINTS ROLAND HERZOG AND EKKEHARD SACHS Abstract. Optimality systems and their linearizations arising in

More information

Scientific Computing WS 2018/2019. Lecture 15. Jürgen Fuhrmann Lecture 15 Slide 1

Scientific Computing WS 2018/2019. Lecture 15. Jürgen Fuhrmann Lecture 15 Slide 1 Scientific Computing WS 2018/2019 Lecture 15 Jürgen Fuhrmann juergen.fuhrmann@wias-berlin.de Lecture 15 Slide 1 Lecture 15 Slide 2 Problems with strong formulation Writing the PDE with divergence and gradient

More information

YURI LEVIN, MIKHAIL NEDIAK, AND ADI BEN-ISRAEL

YURI LEVIN, MIKHAIL NEDIAK, AND ADI BEN-ISRAEL Journal of Comput. & Applied Mathematics 139(2001), 197 213 DIRECT APPROACH TO CALCULUS OF VARIATIONS VIA NEWTON-RAPHSON METHOD YURI LEVIN, MIKHAIL NEDIAK, AND ADI BEN-ISRAEL Abstract. Consider m functions

More information

Constrained Optimization

Constrained Optimization 1 / 22 Constrained Optimization ME598/494 Lecture Max Yi Ren Department of Mechanical Engineering, Arizona State University March 30, 2015 2 / 22 1. Equality constraints only 1.1 Reduced gradient 1.2 Lagrange

More information

CHAPTER 5: Linear Multistep Methods

CHAPTER 5: Linear Multistep Methods CHAPTER 5: Linear Multistep Methods Multistep: use information from many steps Higher order possible with fewer function evaluations than with RK. Convenient error estimates. Changing stepsize or order

More information

POD/DEIM 4DVAR Data Assimilation of the Shallow Water Equation Model

POD/DEIM 4DVAR Data Assimilation of the Shallow Water Equation Model nonlinear 4DVAR 4DVAR Data Assimilation of the Shallow Water Equation Model R. Ştefănescu and Ionel M. Department of Scientific Computing Florida State University Tallahassee, Florida May 23, 2013 (Florida

More information

NONTRIVIAL SOLUTIONS FOR SUPERQUADRATIC NONAUTONOMOUS PERIODIC SYSTEMS. Shouchuan Hu Nikolas S. Papageorgiou. 1. Introduction

NONTRIVIAL SOLUTIONS FOR SUPERQUADRATIC NONAUTONOMOUS PERIODIC SYSTEMS. Shouchuan Hu Nikolas S. Papageorgiou. 1. Introduction Topological Methods in Nonlinear Analysis Journal of the Juliusz Schauder Center Volume 34, 29, 327 338 NONTRIVIAL SOLUTIONS FOR SUPERQUADRATIC NONAUTONOMOUS PERIODIC SYSTEMS Shouchuan Hu Nikolas S. Papageorgiou

More information

Master Thesis. POD-Based Bicriterial Optimal Control of Time-Dependent Convection-Diffusion Equations with Basis Update

Master Thesis. POD-Based Bicriterial Optimal Control of Time-Dependent Convection-Diffusion Equations with Basis Update Master Thesis POD-Based Bicriterial Optimal Control of Time-Dependent Convection-Diffusion Equations with Basis Update submitted by Eugen Makarov at the Department of Mathematics and Statistics Konstanz,.6.18

More information

An introduction to Mathematical Theory of Control

An introduction to Mathematical Theory of Control An introduction to Mathematical Theory of Control Vasile Staicu University of Aveiro UNICA, May 2018 Vasile Staicu (University of Aveiro) An introduction to Mathematical Theory of Control UNICA, May 2018

More information

Numerical Optimization of Partial Differential Equations

Numerical Optimization of Partial Differential Equations Numerical Optimization of Partial Differential Equations Part I: basic optimization concepts in R n Bartosz Protas Department of Mathematics & Statistics McMaster University, Hamilton, Ontario, Canada

More information

Proper Orthogonal Decomposition (POD)

Proper Orthogonal Decomposition (POD) Intro Results Problem etras Proper Orthogonal Decomposition () Advisor: Dr. Sorensen CAAM699 Department of Computational and Applied Mathematics Rice University September 5, 28 Outline Intro Results Problem

More information

Primal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization

Primal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization Primal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization Roger Behling a, Clovis Gonzaga b and Gabriel Haeser c March 21, 2013 a Department

More information

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2.

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2. APPENDIX A Background Mathematics A. Linear Algebra A.. Vector algebra Let x denote the n-dimensional column vector with components 0 x x 2 B C @. A x n Definition 6 (scalar product). The scalar product

More information

Lecture 4: Numerical solution of ordinary differential equations

Lecture 4: Numerical solution of ordinary differential equations Lecture 4: Numerical solution of ordinary differential equations Department of Mathematics, ETH Zürich General explicit one-step method: Consistency; Stability; Convergence. High-order methods: Taylor

More information

Reduced-Order Methods in PDE Constrained Optimization

Reduced-Order Methods in PDE Constrained Optimization Reduced-Order Methods in PDE Constrained Optimization M. Hinze (Hamburg), K. Kunisch (Graz), F. Tro ltzsch (Berlin) and E. Grimm, M. Gubisch, S. Rogg, A. Studinger, S. Trenz, S. Volkwein University of

More information

Robust error estimates for regularization and discretization of bang-bang control problems

Robust error estimates for regularization and discretization of bang-bang control problems Robust error estimates for regularization and discretization of bang-bang control problems Daniel Wachsmuth September 2, 205 Abstract We investigate the simultaneous regularization and discretization of

More information

Unconstrained Optimization

Unconstrained Optimization 1 / 36 Unconstrained Optimization ME598/494 Lecture Max Yi Ren Department of Mechanical Engineering, Arizona State University February 2, 2015 2 / 36 3 / 36 4 / 36 5 / 36 1. preliminaries 1.1 local approximation

More information

A Posteriori Error Estimation for Reduced Order Solutions of Parametrized Parabolic Optimal Control Problems

A Posteriori Error Estimation for Reduced Order Solutions of Parametrized Parabolic Optimal Control Problems A Posteriori Error Estimation for Reduced Order Solutions of Parametrized Parabolic Optimal Control Problems Mark Kärcher 1 and Martin A. Grepl Bericht r. 361 April 013 Keywords: Optimal control, reduced

More information

1.2 Derivation. d p f = d p f(x(p)) = x fd p x (= f x x p ). (1) Second, g x x p + g p = 0. d p f = f x g 1. The expression f x gx

1.2 Derivation. d p f = d p f(x(p)) = x fd p x (= f x x p ). (1) Second, g x x p + g p = 0. d p f = f x g 1. The expression f x gx PDE-constrained optimization and the adjoint method Andrew M. Bradley November 16, 21 PDE-constrained optimization and the adjoint method for solving these and related problems appear in a wide range of

More information

Chapter 3 Numerical Methods

Chapter 3 Numerical Methods Chapter 3 Numerical Methods Part 2 3.2 Systems of Equations 3.3 Nonlinear and Constrained Optimization 1 Outline 3.2 Systems of Equations 3.3 Nonlinear and Constrained Optimization Summary 2 Outline 3.2

More information

Constrained optimization: direct methods (cont.)

Constrained optimization: direct methods (cont.) Constrained optimization: direct methods (cont.) Jussi Hakanen Post-doctoral researcher jussi.hakanen@jyu.fi Direct methods Also known as methods of feasible directions Idea in a point x h, generate a

More information

Part 5: Penalty and augmented Lagrangian methods for equality constrained optimization. Nick Gould (RAL)

Part 5: Penalty and augmented Lagrangian methods for equality constrained optimization. Nick Gould (RAL) Part 5: Penalty and augmented Lagrangian methods for equality constrained optimization Nick Gould (RAL) x IR n f(x) subject to c(x) = Part C course on continuoue optimization CONSTRAINED MINIMIZATION x

More information

Functional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability...

Functional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability... Functional Analysis Franck Sueur 2018-2019 Contents 1 Metric spaces 1 1.1 Definitions........................................ 1 1.2 Completeness...................................... 3 1.3 Compactness......................................

More information

c 2007 Society for Industrial and Applied Mathematics

c 2007 Society for Industrial and Applied Mathematics SIAM J. OPTIM. Vol. 18, No. 1, pp. 106 13 c 007 Society for Industrial and Applied Mathematics APPROXIMATE GAUSS NEWTON METHODS FOR NONLINEAR LEAST SQUARES PROBLEMS S. GRATTON, A. S. LAWLESS, AND N. K.

More information

Some Properties of the Augmented Lagrangian in Cone Constrained Optimization

Some Properties of the Augmented Lagrangian in Cone Constrained Optimization MATHEMATICS OF OPERATIONS RESEARCH Vol. 29, No. 3, August 2004, pp. 479 491 issn 0364-765X eissn 1526-5471 04 2903 0479 informs doi 10.1287/moor.1040.0103 2004 INFORMS Some Properties of the Augmented

More information

Multiple integrals: Sufficient conditions for a local minimum, Jacobi and Weierstrass-type conditions

Multiple integrals: Sufficient conditions for a local minimum, Jacobi and Weierstrass-type conditions Multiple integrals: Sufficient conditions for a local minimum, Jacobi and Weierstrass-type conditions March 6, 2013 Contents 1 Wea second variation 2 1.1 Formulas for variation........................

More information

Scientific Computing WS 2017/2018. Lecture 18. Jürgen Fuhrmann Lecture 18 Slide 1

Scientific Computing WS 2017/2018. Lecture 18. Jürgen Fuhrmann Lecture 18 Slide 1 Scientific Computing WS 2017/2018 Lecture 18 Jürgen Fuhrmann juergen.fuhrmann@wias-berlin.de Lecture 18 Slide 1 Lecture 18 Slide 2 Weak formulation of homogeneous Dirichlet problem Search u H0 1 (Ω) (here,

More information

Lecture Notes on PDEs

Lecture Notes on PDEs Lecture Notes on PDEs Alberto Bressan February 26, 2012 1 Elliptic equations Let IR n be a bounded open set Given measurable functions a ij, b i, c : IR, consider the linear, second order differential

More information

Nonlinear stabilization via a linear observability

Nonlinear stabilization via a linear observability via a linear observability Kaïs Ammari Department of Mathematics University of Monastir Joint work with Fathia Alabau-Boussouira Collocated feedback stabilization Outline 1 Introduction and main result

More information

SECOND-ORDER SUFFICIENT OPTIMALITY CONDITIONS FOR THE OPTIMAL CONTROL OF NAVIER-STOKES EQUATIONS

SECOND-ORDER SUFFICIENT OPTIMALITY CONDITIONS FOR THE OPTIMAL CONTROL OF NAVIER-STOKES EQUATIONS 1 SECOND-ORDER SUFFICIENT OPTIMALITY CONDITIONS FOR THE OPTIMAL CONTROL OF NAVIER-STOKES EQUATIONS fredi tröltzsch and daniel wachsmuth 1 Abstract. In this paper sufficient optimality conditions are established

More information

CME 345: MODEL REDUCTION

CME 345: MODEL REDUCTION CME 345: MODEL REDUCTION Proper Orthogonal Decomposition (POD) Charbel Farhat & David Amsallem Stanford University cfarhat@stanford.edu 1 / 43 Outline 1 Time-continuous Formulation 2 Method of Snapshots

More information

Solving Dual Problems

Solving Dual Problems Lecture 20 Solving Dual Problems We consider a constrained problem where, in addition to the constraint set X, there are also inequality and linear equality constraints. Specifically the minimization problem

More information

AN INTERIOR-POINT METHOD FOR NONLINEAR OPTIMIZATION PROBLEMS WITH LOCATABLE AND SEPARABLE NONSMOOTHNESS

AN INTERIOR-POINT METHOD FOR NONLINEAR OPTIMIZATION PROBLEMS WITH LOCATABLE AND SEPARABLE NONSMOOTHNESS AN INTERIOR-POINT METHOD FOR NONLINEAR OPTIMIZATION PROBLEMS WITH LOCATABLE AND SEPARABLE NONSMOOTHNESS MARTIN SCHMIDT Abstract. Many real-world optimization models comse nonconvex and nonlinear as well

More information

SYMMETRY RESULTS FOR PERTURBED PROBLEMS AND RELATED QUESTIONS. Massimo Grosi Filomena Pacella S. L. Yadava. 1. Introduction

SYMMETRY RESULTS FOR PERTURBED PROBLEMS AND RELATED QUESTIONS. Massimo Grosi Filomena Pacella S. L. Yadava. 1. Introduction Topological Methods in Nonlinear Analysis Journal of the Juliusz Schauder Center Volume 21, 2003, 211 226 SYMMETRY RESULTS FOR PERTURBED PROBLEMS AND RELATED QUESTIONS Massimo Grosi Filomena Pacella S.

More information

5.3 The Power Method Approximation of the Eigenvalue of Largest Module

5.3 The Power Method Approximation of the Eigenvalue of Largest Module 192 5 Approximation of Eigenvalues and Eigenvectors 5.3 The Power Method The power method is very good at approximating the extremal eigenvalues of the matrix, that is, the eigenvalues having largest and

More information

5 Handling Constraints

5 Handling Constraints 5 Handling Constraints Engineering design optimization problems are very rarely unconstrained. Moreover, the constraints that appear in these problems are typically nonlinear. This motivates our interest

More information

Scientific Computing I

Scientific Computing I Scientific Computing I Module 8: An Introduction to Finite Element Methods Tobias Neckel Winter 2013/2014 Module 8: An Introduction to Finite Element Methods, Winter 2013/2014 1 Part I: Introduction to

More information

2tdt 1 y = t2 + C y = which implies C = 1 and the solution is y = 1

2tdt 1 y = t2 + C y = which implies C = 1 and the solution is y = 1 Lectures - Week 11 General First Order ODEs & Numerical Methods for IVPs In general, nonlinear problems are much more difficult to solve than linear ones. Unfortunately many phenomena exhibit nonlinear

More information

A full-newton step infeasible interior-point algorithm for linear programming based on a kernel function

A full-newton step infeasible interior-point algorithm for linear programming based on a kernel function A full-newton step infeasible interior-point algorithm for linear programming based on a kernel function Zhongyi Liu, Wenyu Sun Abstract This paper proposes an infeasible interior-point algorithm with

More information

LECTURE # 0 BASIC NOTATIONS AND CONCEPTS IN THE THEORY OF PARTIAL DIFFERENTIAL EQUATIONS (PDES)

LECTURE # 0 BASIC NOTATIONS AND CONCEPTS IN THE THEORY OF PARTIAL DIFFERENTIAL EQUATIONS (PDES) LECTURE # 0 BASIC NOTATIONS AND CONCEPTS IN THE THEORY OF PARTIAL DIFFERENTIAL EQUATIONS (PDES) RAYTCHO LAZAROV 1 Notations and Basic Functional Spaces Scalar function in R d, d 1 will be denoted by u,

More information

Iterative Methods for Solving A x = b

Iterative Methods for Solving A x = b Iterative Methods for Solving A x = b A good (free) online source for iterative methods for solving A x = b is given in the description of a set of iterative solvers called templates found at netlib: http

More information

LECTURE 1: SOURCES OF ERRORS MATHEMATICAL TOOLS A PRIORI ERROR ESTIMATES. Sergey Korotov,

LECTURE 1: SOURCES OF ERRORS MATHEMATICAL TOOLS A PRIORI ERROR ESTIMATES. Sergey Korotov, LECTURE 1: SOURCES OF ERRORS MATHEMATICAL TOOLS A PRIORI ERROR ESTIMATES Sergey Korotov, Institute of Mathematics Helsinki University of Technology, Finland Academy of Finland 1 Main Problem in Mathematical

More information

b i (x) u + c(x)u = f in Ω,

b i (x) u + c(x)u = f in Ω, SIAM J. NUMER. ANAL. Vol. 39, No. 6, pp. 1938 1953 c 2002 Society for Industrial and Applied Mathematics SUBOPTIMAL AND OPTIMAL CONVERGENCE IN MIXED FINITE ELEMENT METHODS ALAN DEMLOW Abstract. An elliptic

More information

Solving Distributed Optimal Control Problems for the Unsteady Burgers Equation in COMSOL Multiphysics

Solving Distributed Optimal Control Problems for the Unsteady Burgers Equation in COMSOL Multiphysics Excerpt from the Proceedings of the COMSOL Conference 2009 Milan Solving Distributed Optimal Control Problems for the Unsteady Burgers Equation in COMSOL Multiphysics Fikriye Yılmaz 1, Bülent Karasözen

More information

TMA 4180 Optimeringsteori KARUSH-KUHN-TUCKER THEOREM

TMA 4180 Optimeringsteori KARUSH-KUHN-TUCKER THEOREM TMA 4180 Optimeringsteori KARUSH-KUHN-TUCKER THEOREM H. E. Krogstad, IMF, Spring 2012 Karush-Kuhn-Tucker (KKT) Theorem is the most central theorem in constrained optimization, and since the proof is scattered

More information

Efficient smoothers for all-at-once multigrid methods for Poisson and Stokes control problems

Efficient smoothers for all-at-once multigrid methods for Poisson and Stokes control problems Efficient smoothers for all-at-once multigrid methods for Poisson and Stoes control problems Stefan Taacs stefan.taacs@numa.uni-linz.ac.at, WWW home page: http://www.numa.uni-linz.ac.at/~stefant/j3362/

More information

CAAM 454/554: Stationary Iterative Methods

CAAM 454/554: Stationary Iterative Methods CAAM 454/554: Stationary Iterative Methods Yin Zhang (draft) CAAM, Rice University, Houston, TX 77005 2007, Revised 2010 Abstract Stationary iterative methods for solving systems of linear equations are

More information

Termination criteria for inexact fixed point methods

Termination criteria for inexact fixed point methods Termination criteria for inexact fixed point methods Philipp Birken 1 October 1, 2013 1 Institute of Mathematics, University of Kassel, Heinrich-Plett-Str. 40, D-34132 Kassel, Germany Department of Mathematics/Computer

More information

ACM/CMS 107 Linear Analysis & Applications Fall 2017 Assignment 2: PDEs and Finite Element Methods Due: 7th November 2017

ACM/CMS 107 Linear Analysis & Applications Fall 2017 Assignment 2: PDEs and Finite Element Methods Due: 7th November 2017 ACM/CMS 17 Linear Analysis & Applications Fall 217 Assignment 2: PDEs and Finite Element Methods Due: 7th November 217 For this assignment the following MATLAB code will be required: Introduction http://wwwmdunloporg/cms17/assignment2zip

More information

Euler Equations: local existence

Euler Equations: local existence Euler Equations: local existence Mat 529, Lesson 2. 1 Active scalars formulation We start with a lemma. Lemma 1. Assume that w is a magnetization variable, i.e. t w + u w + ( u) w = 0. If u = Pw then u

More information

Balanced Truncation 1

Balanced Truncation 1 Massachusetts Institute of Technology Department of Electrical Engineering and Computer Science 6.242, Fall 2004: MODEL REDUCTION Balanced Truncation This lecture introduces balanced truncation for LTI

More information

OPTIMALITY CONDITIONS AND ERROR ANALYSIS OF SEMILINEAR ELLIPTIC CONTROL PROBLEMS WITH L 1 COST FUNCTIONAL

OPTIMALITY CONDITIONS AND ERROR ANALYSIS OF SEMILINEAR ELLIPTIC CONTROL PROBLEMS WITH L 1 COST FUNCTIONAL OPTIMALITY CONDITIONS AND ERROR ANALYSIS OF SEMILINEAR ELLIPTIC CONTROL PROBLEMS WITH L 1 COST FUNCTIONAL EDUARDO CASAS, ROLAND HERZOG, AND GERD WACHSMUTH Abstract. Semilinear elliptic optimal control

More information

Algorithms for constrained local optimization

Algorithms for constrained local optimization Algorithms for constrained local optimization Fabio Schoen 2008 http://gol.dsi.unifi.it/users/schoen Algorithms for constrained local optimization p. Feasible direction methods Algorithms for constrained

More information

Suppose that the approximate solutions of Eq. (1) satisfy the condition (3). Then (1) if η = 0 in the algorithm Trust Region, then lim inf.

Suppose that the approximate solutions of Eq. (1) satisfy the condition (3). Then (1) if η = 0 in the algorithm Trust Region, then lim inf. Maria Cameron 1. Trust Region Methods At every iteration the trust region methods generate a model m k (p), choose a trust region, and solve the constraint optimization problem of finding the minimum of

More information

A TWO PARAMETERS AMBROSETTI PRODI PROBLEM*

A TWO PARAMETERS AMBROSETTI PRODI PROBLEM* PORTUGALIAE MATHEMATICA Vol. 53 Fasc. 3 1996 A TWO PARAMETERS AMBROSETTI PRODI PROBLEM* C. De Coster** and P. Habets 1 Introduction The study of the Ambrosetti Prodi problem has started with the paper

More information

Model order reduction & PDE constrained optimization

Model order reduction & PDE constrained optimization 1 Model order reduction & PDE constrained optimization Fachbereich Mathematik Optimierung und Approximation, Universität Hamburg Leuven, January 8, 01 Names Afanasiev, Attwell, Burns, Cordier,Fahl, Farrell,

More information

Fitting Linear Statistical Models to Data by Least Squares: Introduction

Fitting Linear Statistical Models to Data by Least Squares: Introduction Fitting Linear Statistical Models to Data by Least Squares: Introduction Radu Balan, Brian R. Hunt and C. David Levermore University of Maryland, College Park University of Maryland, College Park, MD Math

More information

A posteriori error estimates for the adaptivity technique for the Tikhonov functional and global convergence for a coefficient inverse problem

A posteriori error estimates for the adaptivity technique for the Tikhonov functional and global convergence for a coefficient inverse problem A posteriori error estimates for the adaptivity technique for the Tikhonov functional and global convergence for a coefficient inverse problem Larisa Beilina Michael V. Klibanov December 18, 29 Abstract

More information

MATH 205C: STATIONARY PHASE LEMMA

MATH 205C: STATIONARY PHASE LEMMA MATH 205C: STATIONARY PHASE LEMMA For ω, consider an integral of the form I(ω) = e iωf(x) u(x) dx, where u Cc (R n ) complex valued, with support in a compact set K, and f C (R n ) real valued. Thus, I(ω)

More information

(f(x) P 3 (x)) dx. (a) The Lagrange formula for the error is given by

(f(x) P 3 (x)) dx. (a) The Lagrange formula for the error is given by 1. QUESTION (a) Given a nth degree Taylor polynomial P n (x) of a function f(x), expanded about x = x 0, write down the Lagrange formula for the truncation error, carefully defining all its elements. How

More information

IMPROVED LEAST-SQUARES ERROR ESTIMATES FOR SCALAR HYPERBOLIC PROBLEMS, 1

IMPROVED LEAST-SQUARES ERROR ESTIMATES FOR SCALAR HYPERBOLIC PROBLEMS, 1 Computational Methods in Applied Mathematics Vol. 1, No. 1(2001) 1 8 c Institute of Mathematics IMPROVED LEAST-SQUARES ERROR ESTIMATES FOR SCALAR HYPERBOLIC PROBLEMS, 1 P.B. BOCHEV E-mail: bochev@uta.edu

More information

Numerical Analysis: Solutions of System of. Linear Equation. Natasha S. Sharma, PhD

Numerical Analysis: Solutions of System of. Linear Equation. Natasha S. Sharma, PhD Mathematical Question we are interested in answering numerically How to solve the following linear system for x Ax = b? where A is an n n invertible matrix and b is vector of length n. Notation: x denote

More information

E5295/5B5749 Convex optimization with engineering applications. Lecture 8. Smooth convex unconstrained and equality-constrained minimization

E5295/5B5749 Convex optimization with engineering applications. Lecture 8. Smooth convex unconstrained and equality-constrained minimization E5295/5B5749 Convex optimization with engineering applications Lecture 8 Smooth convex unconstrained and equality-constrained minimization A. Forsgren, KTH 1 Lecture 8 Convex optimization 2006/2007 Unconstrained

More information

Convex Optimization. Newton s method. ENSAE: Optimisation 1/44

Convex Optimization. Newton s method. ENSAE: Optimisation 1/44 Convex Optimization Newton s method ENSAE: Optimisation 1/44 Unconstrained minimization minimize f(x) f convex, twice continuously differentiable (hence dom f open) we assume optimal value p = inf x f(x)

More information

Greedy Control. Enrique Zuazua 1 2

Greedy Control. Enrique Zuazua 1 2 Greedy Control Enrique Zuazua 1 2 DeustoTech - Bilbao, Basque Country, Spain Universidad Autónoma de Madrid, Spain Visiting Fellow of LJLL-UPMC, Paris enrique.zuazua@deusto.es http://enzuazua.net X ENAMA,

More information

Chapter 2 Finite Element Spaces for Linear Saddle Point Problems

Chapter 2 Finite Element Spaces for Linear Saddle Point Problems Chapter 2 Finite Element Spaces for Linear Saddle Point Problems Remark 2.1. Motivation. This chapter deals with the first difficulty inherent to the incompressible Navier Stokes equations, see Remark

More information

An Infeasible Interior-Point Algorithm with full-newton Step for Linear Optimization

An Infeasible Interior-Point Algorithm with full-newton Step for Linear Optimization An Infeasible Interior-Point Algorithm with full-newton Step for Linear Optimization H. Mansouri M. Zangiabadi Y. Bai C. Roos Department of Mathematical Science, Shahrekord University, P.O. Box 115, Shahrekord,

More information

Weierstraß-Institut. für Angewandte Analysis und Stochastik. Leibniz-Institut im Forschungsverbund Berlin e. V. Preprint ISSN

Weierstraß-Institut. für Angewandte Analysis und Stochastik. Leibniz-Institut im Forschungsverbund Berlin e. V. Preprint ISSN Weierstraß-Institut für Angewandte Analysis und Stochastik Leibniz-Institut im Forschungsverbund Berlin e. V. Preprint ISSN 2198-5855 SUPG reduced order models for convection-dominated convection-diffusion-reaction

More information

CME 345: MODEL REDUCTION - Projection-Based Model Order Reduction

CME 345: MODEL REDUCTION - Projection-Based Model Order Reduction CME 345: MODEL REDUCTION - Projection-Based Model Order Reduction Projection-Based Model Order Reduction Charbel Farhat and David Amsallem Stanford University cfarhat@stanford.edu 1 / 38 Outline 1 Solution

More information