Hamburger Beiträge zur Angewandten Mathematik


Hamburger Beiträge zur Angewandten Mathematik

POD basis updates for nonlinear PDE control

Carmen Gräßle, Martin Gubisch, Simone Metzdorf, Sarina Rogg, Stefan Volkwein

Nr. August 2016


C. Gräßle, M. Gubisch, S. Metzdorf, S. Rogg, and S. Volkwein

POD basis updates for nonlinear PDE control

C. Gräßle: Universität Hamburg, Fachbereich Mathematik, Bundesstraße 55, 20146 Hamburg, Germany, carmen.graessle@uni-hamburg.de. M. Gubisch, S. Metzdorf, S. Rogg, S. Volkwein: Universität Konstanz, Fachbereich Mathematik und Statistik, Universitätsstraße 10, Konstanz, Germany, {martin.gubisch,simone.metzdorf,sarina.rogg,stefan.volkwein}@uni-konstanz.de

Abstract: In the present paper a semilinear boundary control problem is considered. For its numerical solution proper orthogonal decomposition (POD) is applied. POD is based on a Galerkin type discretization with basis elements created from the evolution problem itself. In the context of optimal control this approach may suffer from the fact that the basis elements are computed from a reference trajectory containing features which are quite different from those of the optimally controlled trajectory. Therefore, different POD basis update strategies which avoid this problem of unmodelled dynamics are compared numerically.

1 Introduction

In this paper we consider an optimal control problem governed by a semilinear parabolic equation together with control constraints. For the numerical solution we apply a Galerkin approximation, which is based on proper orthogonal decomposition (POD), a method for deriving reduced-order models of dynamical systems; cf. Holmes et al. (2012). However, to obtain the state data underlying the POD model, it is necessary to solve the full state system once using a reference control. Consequently, the POD approximations depend on the chosen reference control, so that the choice of a reference control turned out to be essential for the computation of a POD basis for the optimal control problem. To overcome this problem we investigate two different basis update strategies for improving the POD basis: optimality-system POD (cf. Kunisch, Volkwein (2008)) and trust-region POD (cf. Arian et al. (2000)). In both update strategies the POD basis is changed within the optimization method in order to ensure convergence and a certain accuracy of the obtained controls. Let us also refer to the papers Yue, Meerbergen (2013) and Qian et al. (2016), where trust-region optimization is efficiently combined with the reduced-basis method.

The paper is organized as follows: In Section 2 we introduce our semilinear control problem and review optimality conditions. Our applied numerical methods are explained in Section 3. Section 4 is devoted to the application of the POD method to our optimal control problem, and in Section 5 we present our numerical experiments.

2 Optimal control problem

Let Ω ⊂ R^n, n ∈ {1, 2, 3}, be a bounded, open domain with Lipschitz-continuous boundary Γ = ∂Ω. The boundary is divided into m disjoint segments Γ_i with Γ = ∪_{i=1}^m Γ_i. The corresponding characteristic functions are denoted by χ_{Γ_i}, i = 1, ..., m. We consider the time interval [0, T] with T > 0 and define the sets Q = (0, T) × Ω, Σ = (0, T) × Γ. The quadratic objective has the form

  J(y, u) = 1/2 ∫_Ω |y(T, x) − y_d(x)|² dx + σ_u/2 ∫_0^T ∫_Γ |Σ_{i=1}^m u_i(t) χ_{Γ_i}(x)|² dx dt.   (1)

In (1) the desired state y_d : Ω → R is assumed to be bounded and σ_u > 0 holds. The admissible controls belong to the closed, convex set

  U_ad = { u : [0, T] → R^m | u_a ≤ u ≤ u_b in [0, T] },   (2)

where u_a, u_b : [0, T] → R^m are bounded functions and ≤ is understood componentwise in R^m. For a given control u ∈ U_ad the state y = y(t, x) satisfies the semilinear parabolic differential equation

  c y_t(t, x) − Δy(t, x) + y(t, x)³ = 0,   (t, x) ∈ Q,
  ∂y/∂n(t, x) + q y(t, x) = Σ_{i=1}^m u_i(t) χ_{Γ_i}(x),   (t, x) ∈ Σ,   (3)
  y(0, x) = y_0(x),   x ∈ Ω.

We suppose that the initial condition y_0 : Ω → R is bounded and that c, q are positive constants. Throughout the paper we focus on the specific nonlinearity y³, which is utilized in our numerical tests. Of course, other nonlinearities satisfying certain boundedness and monotonicity conditions are also possible; see (Tröltzsch, 2010, Chapter 5.1). In (3) the vector n = n(x) stands for the normal vector defined on Γ and ∂y/∂n is the normal derivative.
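To make the state equation (3) concrete, here is a minimal Python sketch of a one-dimensional analogue: piecewise linear finite elements in space, the implicit Euler method in time, and a Newton iteration per time step for the cubic nonlinearity. The grid sizes, the constant boundary controls and the helper names are illustrative assumptions, not the discretization used in Section 5.

```python
# Hedged 1D sketch of (3): c*y_t - y_xx + y^3 = 0 on (0,1),
# Robin boundary dy/dn + q*y = u_i(t) at the two boundary points,
# P1 finite elements in space, implicit Euler in time, Newton per step.
import numpy as np

c, q = 1.0, 0.1            # assumed constants
N, T, nt = 50, 1.0, 100    # assumed grid: N+1 nodes, nt time steps
h, dt = 1.0 / N, T / nt
x = np.linspace(0.0, 1.0, N + 1)

# P1 mass and stiffness matrices on a uniform grid
M = np.zeros((N + 1, N + 1)); K = np.zeros((N + 1, N + 1))
for e in range(N):
    Me = (h / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])
    Ke = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
    idx = [e, e + 1]
    M[np.ix_(idx, idx)] += Me
    K[np.ix_(idx, idx)] += Ke
R = np.zeros_like(K); R[0, 0] = R[-1, -1] = 1.0   # Robin boundary contribution
ML = np.diag(M.sum(axis=1))                        # lumped mass used for the y^3 term

def control(t):
    # assumed constant boundary controls u_1, u_2 (one per boundary segment)
    return np.array([1.0, 0.5])

y = np.sin(np.pi * (x - 0.5))      # assumed initial condition y_0
for k in range(nt):
    u = control((k + 1) * dt)
    b = np.zeros(N + 1); b[0], b[-1] = u[0], u[1]   # boundary load from the controls
    Y = y.copy()
    for _ in range(20):            # Newton iteration for the implicit Euler step
        F = c * M @ (Y - y) / dt + (K + q * R) @ Y + ML @ Y**3 - b
        J = c * M / dt + K + q * R + 3.0 * ML @ np.diag(Y**2)
        dY = np.linalg.solve(J, -F)
        Y += dY
        if np.linalg.norm(dY) < 1e-10:
            break
    y = Y
print("state at final time, first nodes:", y[:5])
```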

Recall that H = L²(Ω) and V = H¹(Ω) are defined as

  H = { φ : Ω → R | ‖φ‖_H = ( ∫_Ω |φ(x)|² dx )^{1/2} < ∞ },
  V = { φ ∈ H | ‖φ‖_V = ( ‖φ‖²_H + Σ_{i=1}^n ‖φ_{x_i}‖²_H )^{1/2} < ∞ },

respectively, where φ_{x_i} denotes the (weak) partial derivative of φ with respect to the i-th component x_i of x. A solution to (3) is understood in the following weak sense: y(t) = y(t, ·) ∈ V satisfies y(0) = y_0 in Ω and

  ∫_Ω c y_t(t) φ + ∇y(t) · ∇φ + y(t)³ φ dx + ∫_Γ q y(t) φ dx = Σ_{i=1}^m u_i(t) ∫_{Γ_i} φ dx   for all φ ∈ V,

where we set |Γ_i| = ∫_{Γ_i} 1 dx. Let us define

  ⟨e(y, u), (p, p_0)⟩ = ∫_Ω (y(0) − y_0) p_0 dx + ∫_0^T ( ∫_Ω c y_t(t) p(t) + ∇y(t) · ∇p(t) + y(t)³ p(t) dx + ∫_Γ q y(t) p(t) dx − Σ_{i=1}^m u_i(t) ∫_{Γ_i} p(t) dx ) dt   (4)

for all (test) functions p : Q → R with ∫_0^T ‖p(t)‖²_V dt < ∞ and p_0 ∈ H. Then, the operator equation e(y, u) = 0 is equivalent to the fact that y = y(u) is a weak solution to (3) for the given u ∈ U_ad. Now the optimal control problem is expressed as

  min J(y, u)   s.t.   e(y, u) = 0 and u ∈ U_ad,   (P)

where "s.t." stands for "subject to". Under appropriate assumptions on y_0 we can ensure that (3) has a unique weak solution y(u) for any admissible control u ∈ U_ad; i.e., we have e(y(u), u) = 0. This motivates the introduction of the reduced objective Ĵ(u) = J(y(u), u). We consider instead of (P) the reduced problem

  min Ĵ(u)   s.t.   u ∈ U_ad.   (P̂)

The notion "reduced" expresses the fact that Ĵ depends on the control only, whereas J depends on the state and the control. It follows from (Tröltzsch, 2010, Chapter 5.3) that (P̂) admits an optimal solution ū ∈ U_ad. If ȳ = y(ū) is the unique weak solution to (3), then x̄ = (ȳ, ū) is an optimal solution to (P). To compute a locally optimal solution we make use of the following first-order necessary optimality condition for a locally optimal solution (Tröltzsch, 2010, Chapter 2.8):

  Ĵ′(ū)(u − ū) ≥ 0   for all u ∈ U_ad,   (5)

where Ĵ′(ū) denotes the (Gateaux) derivative of Ĵ at ū. Following the general approach in Chapter 5.5 in Tröltzsch (2010) or the one for our present optimal control problem in Gräßle (2014); Rogg (2014) we derive that (5) is equivalent to the variational inequality

  Σ_{i=1}^m ∫_0^T ( σ_u |Γ_i| ū_i(t) − ∫_{Γ_i} p̄(t) dx ) (u_i(t) − ū_i(t)) dt ≥ 0   (6)

for all u ∈ U_ad, where p̄(t) = p̄(t, ·) ∈ V is the (unique) weak solution to the dual or adjoint equation

  −c p̄_t(t, x) − Δp̄(t, x) = −3 ȳ(t, x)² p̄(t, x),   (t, x) ∈ Q,
  ∂p̄/∂n(t, x) + q p̄(t, x) = 0,   (t, x) ∈ Σ,   (7)
  p̄(T, x) = y_d(x) − ȳ(T, x),   x ∈ Ω,

and ȳ = y(ū) solves (4) for the optimal control u = ū. The associated dual variable p̄_0 is determined by p̄_0 = c p̄(0, x) for all x ∈ Ω. Note that (P̂) and also (P) are nonlinear (and therefore non-convex) problems. To ensure that the solution ū to (5) is a locally optimal solution to (P̂) we have to investigate second-order sufficient optimality conditions; see, e.g., Chapter 5.7 in Tröltzsch (2010) and Section 2.3 in Gräßle (2014).
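The variational inequality (6) is what a gradient-based method evaluates in practice. The following hedged sketch assembles a discrete reduced gradient componentwise from a given adjoint trajectory and projects a gradient step back onto the box U_ad from (2); the adjoint data, segment lengths, step size and bounds are synthetic placeholders, not quantities computed for this problem.

```python
# Sketch: discrete reduced gradient from (6) and projection onto U_ad from (2).
# The adjoint boundary integrals and all sizes are synthetic placeholders.
import numpy as np

m, nt = 4, 200                      # assumed: 4 boundary segments, 200 time points
t = np.linspace(0.0, 2.0, nt)       # assumed time horizon
sigma_u = 1e-2                      # assumed regularization weight
gamma_len = np.full(m, 0.25)        # assumed segment lengths |Gamma_i|
u_a, u_b = 0.0, 7.0                 # assumed box bounds

u_bar = np.zeros((m, nt))           # current control iterate
# placeholder for the boundary integrals  int_{Gamma_i} p_bar(t) dx :
p_int = np.sin(np.pi * t)[None, :] * np.ones((m, 1))

# discrete integrand of (6): sigma_u*|Gamma_i|*u_i(t) - int_{Gamma_i} p_bar(t) dx
grad = sigma_u * gamma_len[:, None] * u_bar - p_int

step = 1.0                                          # assumed step size
u_new = np.clip(u_bar - step * grad, u_a, u_b)      # projection onto U_ad

# At a locally optimal u_bar the value in (6) is nonnegative for every admissible u;
# for the non-optimal placeholder iterate above it is typically negative,
# which indicates a feasible descent direction.
vi_value = np.trapz(np.sum(grad * (u_new - u_bar), axis=0), t)
print("variational-inequality value for the trial direction:", vi_value)
```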

3 Optimization methods

In this section we briefly review the numerical algorithms utilized to numerically solve (P) and (P̂), respectively. For more details we refer to Gräßle (2014); Metzdorf (2015); Rogg (2014).

3.1 Sequential quadratic programming

We introduce the Lagrangian associated with (P) as L(y, u, p, p_0) = J(y, u) + ⟨e(y, u), (p, p_0)⟩, where the mapping e(y, u) has already been defined in Section 2. It can be shown that the Lagrangian is twice continuously (Fréchet) differentiable. The principal idea of the sequential quadratic programming (SQP) method is to solve (P) in an iterative procedure, whereby in each SQP iteration k a linear-quadratic SQP subproblem has to be solved. This SQP subproblem is obtained by a quadratic approximation of the Lagrangian L and a linearization of the equality constraint at the current iterates x^k = (y^k, u^k) and z^k = (p^k, p_0^k). For brevity, we set L^k = L(x^k, z^k) and e^k = e(x^k). The linear-quadratic SQP subproblem is given as

  min_{δx^k = (δy^k, δu^k)}  L^k + L^k_x δx^k + ½ L^k_{xx}(δx^k, δx^k)
  s.t.  e^k + e^k_x δx^k = 0  and  u^k + δu^k ∈ U_ad.   (8)

By L_x we denote the partial (Fréchet) derivative with respect to the argument x = (y, u). Problem (8) is well-defined if second-order sufficient optimality conditions hold at the point (x^k, z^k). If δx^k is determined, the new SQP iterate is x^{k+1} = x^k + δx^k. For the Lagrange multiplier z^k the update is given in Algorithm 1. It can be shown that the SQP method is locally equivalent to the Newton method, which consists in finding a stationary point of the Lagrangian. Hence, the rate of convergence is locally quadratic, provided that the second-order sufficient optimality condition holds; cf. Hinze et al. (2009).

3.2 Primal-dual active set strategy

Since (8) involves inequality constraints, we utilize a primal-dual active set strategy (PDASS); cf. Bergounioux et al. (1997). Let k and j denote the current SQP and PDASS iteration level, respectively. The PDASS method works on the primal variable δu^k, which has to satisfy the inequality constraint u^k + δu^k ∈ U_ad, and on the dual variable μ^k, which corresponds to the control constraint. For a given iterate (δu^{k,j}, μ^{k,j}) of the PDASS algorithm we define the active and inactive sets

  A^{k,j}_a = { t ∈ [0, T] : u^k(t) + δu^{k,j}(t) + μ^{k,j}(t) < u_a(t) },
  A^{k,j}_b = { t ∈ [0, T] : u^k(t) + δu^{k,j}(t) + μ^{k,j}(t) > u_b(t) },
  A^{k,j} = A^{k,j}_a ∪ A^{k,j}_b,   I^{k,j} = [0, T] \ A^{k,j},

where the inequalities are understood componentwise. Then, the solution of (8) is computed by solving its associated KKT system

  L^k_{yy} δy + e^{k,*}_y δz = −L^k_y,
  L^k_{uu} δu + e^{k,*}_u δz + χ^*_{A^{k,j}} μ^{k,j}_{A^{k,j}} = −L^k_u,
  e^k_y δy + e^k_u δu = −e^k,
  χ_{A^{k,j}} δu = χ_{A^{k,j}_a} (u_a − u^k) + χ_{A^{k,j}_b} (u_b − u^k),   (9)

where we set δx^{k,j} = (δy, δu) and δz^{k,j} = δz. By e^{k,*}_y and e^{k,*}_u we denote the dual operators of e^k_y and e^k_u, respectively. The SQP method with PDASS is summarized in Algorithm 1.

Algorithm 1: SQP method with PDASS
Require: Initial value (x^0, z^0); set k = 0;
1: repeat
2:   Initialize (δu^{k,0}, μ^{k,0}) and set j = 0;
3:   repeat
4:     Set j = j + 1;
5:     Determine A^{k,j}_a, A^{k,j}_b, A^{k,j} and I^{k,j};
6:     Solve (9) for (δy, δu, δz) and μ^{k,j}_{A^{k,j}};
7:     Set δx^{k,j} = (δy, δu), δz^{k,j} = δz, and μ^{k,j} = 0 on I^{k,j};
8:   until j > 1, A^{k,j}_a = A^{k,j−1}_a and A^{k,j}_b = A^{k,j−1}_b;
9:   Set x^{k+1} = x^k + δx^{k,j} and z^{k+1} = z^k + δz^{k,j};
10: until SQP stopping criterion is fulfilled;

In general, Algorithm 1 is only locally convergent. In order to ensure convergence for any starting point (x^0, z^0), a globalization strategy is utilized, which consists of two ingredients: a modification of the second-order term L^k_{xx} to ensure coercivity and the inclusion of an Armijo backtracking line search for an ℓ¹-merit function. For more details we refer to Hintermüller (2001) for the general theory and to Gräßle (2014) for our specific optimal control problem.
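To illustrate the active-set mechanism of Section 3.2 in a self-contained way, the sketch below runs a primal-dual active set iteration on a small finite-dimensional quadratic problem with box constraints, which serves only as a generic stand-in for one SQP subproblem (8); the matrix Q, the vector c and the bounds are synthetic.

```python
# Hedged sketch: primal-dual active set strategy (PDASS) for
#   min 0.5*d^T Q d + c^T d   s.t.   a <= d <= b
# as a finite-dimensional stand-in for one SQP subproblem (8).
import numpy as np

rng = np.random.default_rng(0)
n = 20
A = rng.standard_normal((n, n))
Q = A @ A.T + n * np.eye(n)          # symmetric positive definite Hessian
c = rng.standard_normal(n)
a, b = -0.5 * np.ones(n), 0.5 * np.ones(n)

d = np.zeros(n)                      # primal iterate
mu = np.zeros(n)                     # multiplier for the box constraint
for j in range(50):
    # active-set prediction, cf. A^{k,j}_a and A^{k,j}_b in Section 3.2
    act_a = d + mu < a
    act_b = d + mu > b
    inact = ~(act_a | act_b)

    d_new = np.empty(n)
    d_new[act_a], d_new[act_b] = a[act_a], b[act_b]     # fix active components at the bounds
    # solve the reduced stationarity system on the inactive set (mu = 0 there)
    rhs = -(c[inact] + Q[np.ix_(inact, act_a | act_b)] @ d_new[act_a | act_b])
    d_new[inact] = np.linalg.solve(Q[np.ix_(inact, inact)], rhs)
    # recover the multiplier from stationarity: mu = -(Q d + c), zero on the inactive set
    mu_new = -(Q @ d_new + c)
    mu_new[inact] = 0.0

    if j > 0 and np.array_equal(act_a, prev_a) and np.array_equal(act_b, prev_b):
        break                        # active sets settled (stopping rule as in Algorithm 1)
    prev_a, prev_b, d, mu = act_a, act_b, d_new, mu_new

print("PDASS iterations:", j, "| max box violation:",
      max(0.0, (a - d_new).max(), (d_new - b).max()))
```

Roughly speaking, in the setting of Section 3.2 the roles of Q, c and the bounds are played by the (condensed) operator L^k_{uu}, the reduced gradient of the subproblem and the shifted bounds u_a − u^k, u_b − u^k.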
3.3 Trust region method

In this paper we compare Algorithm 1 with the trust-region (TR) method for the solution of (P̂); cf. Conn et al. (2000). The key idea of TR methods is as follows: At a current iterate the objective Ĵ is replaced by a simpler (often quadratic) model function which is then approximately minimized over a TR to find the next iterate. Assume we are in iteration k with current iterate u^k. Then, we build a model function m_k to approximate the objective Ĵ at u^k. In Algorithm 2, we restrict ourselves to the quadratic model

  m_k(d) = Ĵ(u^k) + ⟨g_k, d⟩ + ½ H_k(d, d),   (10)

where d is a mapping from [0, T] to R^m, g_k = Ĵ′(u^k) holds and H_k denotes the Hessian Ĵ″(u^k) or a self-adjoint approximation to it.

Algorithm 2: (Trust region method)
Require: Initial TR radius Δ_0 > 0, initial point u^0, constants η_1, η_2, γ_1, γ_2, γ_3 satisfying 0 < η_1 ≤ η_2 < 1 and 0 < γ_1 ≤ γ_2 < 1 ≤ γ_3; set k = 0;
1: Build up the model function m_k;
2: Determine an approximate solution d^k to min_d m_k(d) s.t. u^k + d ∈ U_ad and ‖d‖ ≤ Δ_k;
3: Compute the quotient ρ_k = ared_k / pred_k = (Ĵ(u^k) − Ĵ(u^k + d^k)) / (m_k(0) − m_k(d^k));
4: if ρ_k ≥ η_2 then
5:   Set u^{k+1} = u^k + d^k and Δ_{k+1} ∈ [Δ_k, γ_3 Δ_k];
6:   Set k = k + 1 and go to line 1;
7: else if η_1 ≤ ρ_k < η_2 then
8:   Set u^{k+1} = u^k + d^k and Δ_{k+1} ∈ [γ_2 Δ_k, Δ_k];
9:   Set k = k + 1 and go to line 1;
10: else if ρ_k < η_1 then
11:   Set u^{k+1} = u^k and Δ_{k+1} ∈ [γ_1 Δ_k, γ_2 Δ_k];
12:   Set k = k + 1 and go to line 2;
13: end if

In line 2 a trial step d^k is computed by approximately solving the TR subproblem. In lines 4 to 13 it is then decided whether or not to accept the trial point u^k + d^k as the next iterate, and the TR radius Δ_k gets adjusted. This decision is based upon the quotient ρ_k in line 3, which compares the predicted reduction pred_k of the model function to the actual reduction ared_k of the cost functional and which provides a measure for the approximation quality of the model function.
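The acceptance test and the radius update of Algorithm 2 can be mimicked on a small box-constrained test problem as follows; the trial step is only a projected gradient step scaled into the trust region (not a Steihaug-CG solve), and the thresholds and radius factors are illustrative choices consistent with the stated inequalities, not values prescribed by the paper.

```python
# Hedged finite-dimensional illustration of the trust-region mechanism of Algorithm 2.
import numpy as np

rng = np.random.default_rng(1)
n = 10
A = rng.standard_normal((n, n))
Q = A @ A.T + np.eye(n)
c = rng.standard_normal(n)
lo, hi = -1.0, 1.0                 # assumed box bounds (stand-in for U_ad)

def J(u):                          # stand-in for the reduced objective
    return 0.5 * u @ Q @ u + c @ u

def grad(u):
    return Q @ u + c

eta1, eta2 = 0.2, 0.8              # assumed acceptance thresholds
g1, g2, g3 = 0.25, 0.5, 2.0        # assumed radius factors gamma_1, gamma_2, gamma_3
u, radius = np.zeros(n), 1.0

for k in range(100):
    g = grad(u)                                    # model data g_k; H_k = Q (exact Hessian here)
    d_full = np.clip(u - g, lo, hi) - u            # projected-gradient trial direction
    if np.linalg.norm(d_full) < 1e-10 or radius < 1e-12:
        break                                      # (approximately) stationary
    d = d_full if np.linalg.norm(d_full) <= radius else d_full * radius / np.linalg.norm(d_full)
    pred = -(g @ d + 0.5 * d @ Q @ d)              # pred_k = m_k(0) - m_k(d)
    ared = J(u) - J(u + d)                         # ared_k
    rho = ared / pred if pred > 0 else -np.inf
    if rho >= eta2:                                # very good model: accept, enlarge radius
        u, radius = u + d, g3 * radius
    elif rho >= eta1:                              # acceptable model: accept, shrink mildly
        u, radius = u + d, g2 * radius
    else:                                          # poor model: reject, shrink radius
        radius *= g1

print("final objective:", J(u))
```

Choosing the new radius from the admissible intervals in lines 5, 8 and 11 of Algorithm 2 is hard-coded here as multiplication by γ_3, γ_2 and γ_1, respectively.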

In case ρ_k ≥ η_1 (e.g. η_1 = 0.2), the trial point is accepted. If even ρ_k ≥ η_2 (e.g. η_2 = 0.8), the approximation quality of the model function is so good that the TR radius can be increased for the next iteration. Otherwise the radius should be kept or decreased. In case ρ_k < η_1, the quadratic model is a poor approximation to the cost functional on the TR, the trial point is rejected and the TR subproblem is solved again for a decreased TR radius. Under standard assumptions, Algorithm 2 is globally convergent in the sense lim_{k→∞} ‖g_k‖ = 0 if the trial steps d^k lead to a model decrease pred_k at least as good as that of the so-called Cauchy step. If the model function possesses only inexact gradient information g_k ≈ Ĵ′(u^k), global convergence still holds if the Carter condition

  ‖Ĵ′(u^k) − g_k‖ ≤ ζ ‖g_k‖,   ζ ∈ (0, 1 − η_2),   (11)

is satisfied in every iteration; see Carter (1991). We compute truncated Newton steps using the Steihaug-CG algorithm, see Nocedal, Wright (2006), together with an active set projection according to Kelley (1999). Hence, we fulfill this condition. Note that the computation of Ĵ″(u^k)d requires the solutions to a linearized state and a linearized adjoint equation; cf. Hinze et al. (2009).

4 POD reduced-order modelling

In this section we recall the basics of the POD method for optimal control problems. For more details we refer, e.g., to Benner et al. (2014); Sachs, Volkwein (2010).

4.1 POD method

Suppose that we are given trajectories z_ν(t) = z_ν(t, ·) ∈ H, t ∈ [0, T], 1 ≤ ν ≤ ℘ and ℘ ∈ N. We introduce the snapshot subspace as

  𝒱 = span { z_ν(t) | t ∈ [0, T] and ν = 1, ..., ℘ } ⊂ H

with d = dim 𝒱. The inner product in H is given as ⟨φ, ψ⟩_H = ∫_Ω φ(x) ψ(x) dx for all φ, ψ ∈ H. For every l with 1 ≤ l ≤ d, a POD basis of rank l is defined as a solution to the minimization problem

  min Σ_{ν=1}^℘ ∫_0^T ‖ z_ν(t) − Σ_{i=1}^l ⟨z_ν(t), ψ_i⟩_H ψ_i ‖²_H dt
  s.t. {ψ_i}_{i=1}^l ⊂ H, ⟨ψ_i, ψ_j⟩_H = δ_ij, 1 ≤ i, j ≤ l.   (12)

A solution to (12) is given by the eigenvalue problem

  R ψ_i = λ_i ψ_i   for   λ_1 ≥ λ_2 ≥ ... ≥ λ_l ≥ ... ≥ λ_d > 0   (13)

with the linear, bounded integral operator R : H → 𝒱 ⊂ H,

  R ψ = Σ_{ν=1}^℘ ∫_0^T ⟨z_ν(t), ψ⟩_H z_ν(t) dt.

In our numerical experiments we consider ℘ = 2. We include by z_1 and z_2 information of the (linearized) state and the dual equation, respectively.

4.2 Reduced-order modelling for the control problem

Suppose that a POD basis {ψ_i}_{i=1}^l is computed. We introduce the subspace V^l = span {ψ_1, ..., ψ_l} and define the orthogonal projection P^l φ = Σ_{i=1}^l ⟨φ, ψ_i⟩_H ψ_i for φ ∈ H. The POD scheme for (4) is as follows: y^l(t) ∈ V^l satisfies y^l(0) = P^l y_0 in H and

  ∫_Ω c y^l_t(t) ψ + ∇y^l(t) · ∇ψ + y^l(t)³ ψ dx + ∫_Γ q y^l(t) ψ dx = Σ_{i=1}^m u_i(t) ∫_{Γ_i} ψ dx   (14)

for all ψ ∈ V^l and t ∈ (0, T].
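As a concrete discrete analogue of (12)-(14), the following hedged sketch computes a POD basis of rank l by the method of snapshots, i.e. a singular value decomposition of the mass-weighted snapshot matrix, and then Galerkin-projects a small 1D semilinear model onto that basis. The finite element matrices, the two synthetic snapshot trajectories and the rank l are placeholder assumptions; in particular, no hyper-reduction is applied to the cubic term.

```python
# Hedged sketch: POD basis via the method of snapshots (discrete analogue of
# (12)-(13), with the L^2 inner product represented by a mass matrix M),
# followed by a Galerkin projection of a small 1D semilinear model, cf. (14).
import numpy as np

# small 1D P1 finite element matrices on (0,1) (assumed discretization)
N = 80
h = 1.0 / N
main = np.full(N + 1, 2 * h / 3.0); main[[0, -1]] = h / 3.0
M = np.diag(main) + np.diag(np.full(N, h / 6.0), 1) + np.diag(np.full(N, h / 6.0), -1)
K = (np.diag(np.full(N + 1, 2.0)) - np.diag(np.ones(N), 1) - np.diag(np.ones(N), -1)) / h
K[0, 0] = K[-1, -1] = 1.0 / h

# two synthetic snapshot trajectories (state-like and adjoint-like, columns = time instances)
x = np.linspace(0.0, 1.0, N + 1)
t = np.linspace(0.0, 1.0, 60)
Z1 = np.exp(-t[None, :]) * np.sin(np.pi * x[:, None])
Z2 = np.exp(-2 * t[None, :]) * np.cos(np.pi * x[:, None])
Z = np.hstack([Z1, Z2])

# POD basis of rank l: SVD of M^(1/2) Z
l = 5
Lc = np.linalg.cholesky(M)                      # M = Lc Lc^T
U, S, _ = np.linalg.svd(Lc.T @ Z, full_matrices=False)
Psi = np.linalg.solve(Lc.T, U[:, :l])           # columns are M-orthonormal POD modes
print("POD eigenvalues (squared singular values):", (S[:l] ** 2).round(4))

# Galerkin-reduced operators for  c*M y' + (K + q*R) y + N(y) = B u
q, c = 0.1, 1.0
R = np.zeros_like(K); R[0, 0] = R[-1, -1] = 1.0
B = np.zeros((N + 1, 2)); B[0, 0] = B[-1, 1] = 1.0
Mr = Psi.T @ M @ Psi                            # reduced mass matrix (identity up to round-off)
Ar = Psi.T @ (K + q * R) @ Psi
Br = Psi.T @ B

def reduced_rhs(yr, u):
    """Right-hand side of the reduced model; the cubic term is evaluated by
    lifting to the full grid (plain POD-Galerkin, no hyper-reduction)."""
    y_full = Psi @ yr
    nonlin = Psi.T @ (M @ y_full**3)
    return np.linalg.solve(c * Mr, -Ar @ yr - nonlin + Br @ u)

print("reduced right-hand side at yr = 0:", reduced_rhs(np.zeros(l), np.array([1.0, 0.5])))
```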

Then, we replace (P) by the following POD approximation

  min J(y^l, u)   s.t.   (y^l, u) satisfies (14) and u ∈ U_ad.   (P^l)

We assume that (14) has a unique solution y^l(u) for any u ∈ U_ad. Then, setting Ĵ^l(u) = J(y^l(u), u) for any u ∈ U_ad we obtain the POD approximation of (P̂):

  min Ĵ^l(u)   s.t.   u ∈ U_ad.   (P̂^l)

Suppose that ū^l is a locally optimal solution to (P̂^l). We interpret ū^l as a suboptimal solution to (P̂). First-order necessary optimality conditions for (P̂^l) are given by (compare (6))

  Σ_{i=1}^m ∫_0^T ( σ_u |Γ_i| ū^l_i(t) − ∫_{Γ_i} p̄^l(t) dx ) (u_i(t) − ū^l_i(t)) dt ≥ 0

for all u ∈ U_ad, where p̄^l(t) = p̄^l(t, ·) ∈ V^l is the weak solution to a POD Galerkin scheme for (7) utilizing the optimal POD state ȳ^l = y^l(ū^l). More details are presented in Gräßle (2014); Metzdorf (2015); Rogg (2014).

4.3 POD basis update strategies

The choice of a reference control u_ref ∈ U_ad turned out to be essential for the computation of a POD basis of rank l. When using an arbitrary control, the obtained accuracy was not at all satisfying even when using a huge number of basis functions. On the other hand, an optimal POD basis (computed from the FE optimally controlled state) led to far better results; cf. Hinze, Volkwein (2008). To overcome this problem different techniques for improving the POD basis have been proposed.

Optimality system POD

Let us just roughly recall optimality system POD (OS-POD) introduced in Kunisch, Volkwein (2008). The idea of OS-POD is to include the equations determining the POD basis in the optimization process:

  min Ĵ^l(u)   s.t.   u ∈ U_ad and
    y(u) is the solution to (4),
    {ψ_i}_{i=1}^l solves (13) for ℘ = 1, z_1 = y(u),
    y^l(u) satisfies (14).   (P^l_os)

A thereby obtained basis is optimal for the considered problem. Note that (P^l_os) is more complex than the original problem (P). Therefore, we do not solve (P^l_os). Starting with an initial control u ∈ U_ad, we compute a few projected gradient steps for (P^l_os) in order to find a POD basis which is appropriate for the underlying optimal control problem. Then, this POD basis is kept fixed and the POD approximation (P^l) of (P) is derived. We discuss two possibilities to use OS-POD:

1) First, starting with a reference control u_ref ∈ U_ad we apply a few projected gradient steps to solve (P^l_os) numerically. Then, the obtained POD basis is kept fixed and the POD approximation (P^l) is solved numerically by the SQP method. This approach is used in Grimm et al. (2014); Metzdorf (2015) for linear parabolic equations.

2) In the second approach we utilize OS-POD within the SQP framework. In each SQP iteration a linear-quadratic optimal control problem has to be solved up to a certain accuracy. For each linear-quadratic SQP subproblem we formulate the OS-POD optimization problem as in (P^l_os). Again, we apply a few (projected) gradient steps, but now to the linear-quadratic problem, in order to find a POD basis which is appropriate for the current SQP subproblem. Then, we apply the PDASS algorithm to solve the linear-quadratic SQP subproblem using the POD basis found by the OS-POD gradient steps.
Trust region POD

Within the trust-region POD (TR-POD) algorithm we guarantee that the reduced-order models are sufficiently accurate by ensuring gradient accuracy according to the Carter condition (11); see Schu (2012). This is realized by successively adapting the reduced-order models in the course of the optimization, either just by increasing the number l of POD basis functions or, if this does not lead to a sufficient accuracy, by computing a new POD basis from the current control iterate (and then adapting the number l). Recall the approximative model gradient g_k and the approximative Hessian H_k from (10). In the TR-POD approach g_k and H_k are computed from the respective POD approximations of Ĵ:

  m^k_l(d) = Ĵ(u^k) + ⟨g^k_l, d⟩ + ½ H^k_l(d, d).

The TR-POD method is summarized in Algorithm 3. Its lower part is completely identical to Algorithm 2, but here, at the beginning of each iteration, the approximation quality of the current POD basis is tested and, if necessary, the basis is recomputed.
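The basis-quality test can be sketched as follows. In the actual TR-POD method the exact gradient Ĵ′(u^k) is of course not available and the Carter condition (11) is ensured indirectly, so the sketch below only illustrates the decision logic (first enlarge l, then recompute the basis) on synthetic data with an assumed tolerance ζ.

```python
# Hedged sketch of the basis-quality decision used in TR-POD:
# accept the current reduced gradient only if the Carter condition (11) holds,
# otherwise first enlarge the rank l and, if that is not enough, flag a
# recomputation of the POD basis. All inputs are synthetic placeholders.
import numpy as np

def carter_ok(g_full, g_reduced, zeta):
    """Carter condition (11): ||J'(u) - g_l|| <= zeta * ||g_l||."""
    return np.linalg.norm(g_full - g_reduced) <= zeta * np.linalg.norm(g_reduced)

def reduced_gradient(g_full, basis, l):
    """Placeholder: project the (here given) full gradient onto the first l modes."""
    B = basis[:, :l]
    return B @ (B.T @ g_full)

rng = np.random.default_rng(2)
n, l_max = 200, 10
basis, _ = np.linalg.qr(rng.standard_normal((n, n)))   # synthetic orthonormal modes
g_full = basis[:, :6] @ rng.standard_normal(6) + 1e-3 * rng.standard_normal(n)

zeta = 0.1            # assumed tolerance, must lie in (0, 1 - eta_2)
l = 3
recompute_basis = False
while not carter_ok(g_full, reduced_gradient(g_full, basis, l), zeta):
    if l < l_max:
        l += 1                      # first option: enlarge the number of POD modes
    else:
        recompute_basis = True      # second option: new snapshots from the current control
        break

print("chosen rank l =", l, "| recompute POD basis:", recompute_basis)
```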

Algorithm 3: TR-POD method
Require: Initial TR radius Δ_0 > 0, initial point u^0, a maximal number l_max of POD basis functions and constants η_1, η_2, γ_1, γ_2, γ_3 satisfying 0 < η_1 ≤ η_2 < 1 and 0 < γ_1 ≤ γ_2 < 1 ≤ γ_3; set k = 0;
1: Compute a reduced-order model for some l ≤ l_max;
2: Guarantee (11) by computing a new POD basis and/or expanding l ≤ l_max;
3: Build up the model function m^k_l(d);
4: Determine an approximate solution d^k to min_d m^k_l(d) s.t. u^k + d ∈ U_ad and ‖d‖ ≤ Δ_k;
5: Compute the quotient ρ_k = ared_k / pred_k = (Ĵ(u^k) − Ĵ(u^k + d^k)) / (m^k_l(0) − m^k_l(d^k));
6: Lines 4 to 13 of Algorithm 2.

POD based inexact SQP method

For large-scale systems it is costly to solve each SQP subproblem (9) exactly. Thus, an inexact version of the SQP method is used, where the inexactness is caused by replacing (9) by its POD surrogate model; cf. Kahlbacher, Volkwein (2011). An a-posteriori error bound for linear-quadratic optimal control problems (see Tröltzsch, Volkwein (2009)) is used to control the accuracy of the POD model and hence to guarantee convergence. For more details we refer to Grimm et al. (2014); Sachs, Volkwein (2010).

5 Numerical Results

Problem setting. We choose Ω = (0, 1) × (0, 1) ⊂ R² and T = 2. The parameters are c = 1 and q = 0.1. We divide the boundary Γ into m = 4 parts so that the characteristic functions {χ_{Γ_i}}_{i=1}^4 represent each of the four boundary parts. The cost parameter σ_u is a fixed positive constant. For the spatial discretization we use piecewise linear finite elements (FE) with diameter h_max = 0.06, leading to 498 degrees of freedom. As time integration scheme we utilize the implicit Euler method with N_t equidistant time steps. The reference control for the snapshot generation in the context of Section 4.1 is u_ref = 0. The snapshot set for the POD basis computation differs depending on the chosen optimization strategy: in case of TR-POD, state and adjoint state snapshots are taken, whereas for SQP linearized state and linearized adjoint state snapshots are utilized. Moreover, we define the absolute and relative errors

  ε^u_abs = ‖ū_h − u‖   and   ε^u_rel = ε^u_abs / ‖ū_h‖   with   ‖u‖ := ( Δt Σ_{i=1}^m ( ½ (u_i^{(1)})² + Σ_{j=2}^{N_t} (u_i^{(j)})² + ½ (u_i^{(N_t+1)})² ) )^{1/2}

to measure the difference between any computed control u and the optimal FE control ū_h; here u_i^{(j)} denotes the value of the i-th control component at the j-th time grid point.

Run 1 (unconstrained case). At first we consider (P) without control constraints. We choose the desired state y_d(x) = sin(π(x_1 − 0.5)) + 4 x_1 x_2 and the initial condition y_0(x) = sin(π(x_1 − 0.5)) for x = (x_1, x_2). In the left plot of Fig. 1 we present the SQP-FE optimal controls (dotted lines) compared to the SQP-POD optimal controls obtained with a fixed POD basis of rank l = 10. As can be seen, the accuracy is not satisfying; see also Table 1, row 1. This confirms the necessity of a POD basis update strategy. The POD eigenvalues computed for SQP-POD and TR-POD are shown in the right plot of Fig. 1.

Table 1. Run 1: errors ε^u_abs and ε^u_rel between POD and FE optimal controls for SQP-POD without basis update, OS-POD approach 1), OS-POD approach 2), SQP-POD with optimal basis, and TR-POD.

Fig. 1. Run 1, left: SQP-POD controls {u_i(t)}_{i=1}^4 for a fixed POD basis with l = 10 (black line) and SQP-FE solution (black dotted). Right: decay of the eigenvalues for SQP-POD and TR-POD.

As first POD basis update strategy, we use the OS-POD approach 1) of Section 4.3. One gradient step is applied to (P^l_os). Then, the POD basis of rank l = 10 is fixed and (P^l) is solved by the SQP method. This update strategy leads to an improvement of the error in the control variable of one order of magnitude (see Table 1, row 2).
The next POD basis update strategy consists in a combination of the POD based inexact SQP method with OS-POD approach 2) of Section 4.3. Two SQP iterations are needed for convergence. The first SQP iteration is cheap, since l = 5 POD basis functions suffice. In the second SQP iteration, it is detected that a higher accuracy of the POD surrogate model for the linear-quadratic SQP subproblem (8) is needed.

Thus, we apply one gradient step to solve the OS-POD optimization problem for the linear-quadratic SQP subproblem, and the number of utilized POD basis functions is increased to l = 10. The POD solution ū^l for the time-dependent control intensities coincides convincingly with the FE solution (see Fig. 2, left, and Table 1, row 3). Since the POD basis is adapted to the desired accuracy within the solution process, this strategy leads to better results than the first POD basis update strategy. For comparison, the error between the FE solution ū_h and the SQP-POD solution ū^l with optimal POD basis of rank l = 10 is listed in Table 1, row 4.

Fig. 2. Run 1: POD controls {u_i(t)}_{i=1}^4 (black line) and FE solution (black dotted); OS-POD basis update (left) and TR-POD (right).

Now, we compare these results to those obtained by TR-POD. Recall that the TR-POD algorithm guarantees convergence to a stationary point. This is why the deviation of the TR-POD optimal control from the FE optimal control is within the discretization accuracy; see the last row in Table 1 and the right plot of Fig. 2. The iterations of the TR-POD method are very similar to those of the POD based inexact SQP method with OS-POD approach 2). In the first iteration l = 4 POD basis functions suffice to fulfill the Carter condition (11). In the second iteration a new POD basis gets computed and l = 10 POD modes are required. We visualize the adaptation of the POD basis during the optimization by means of the shape of the first three basis functions. In Fig. 3 the adaptation corresponding to the POD based inexact SQP method with OS-POD approach 2) is shown. For comparison, in Fig. 4 we present the POD basis adaptation obtained from the TR-POD algorithm.

Fig. 3. Run 1, OS-POD approach 2): POD bases ψ_1, ψ_2, ψ_3 computed in the first (top) and second iteration (middle) of the SQP-POD method; optimal basis functions computed by the reference control u_ref = ū_h (bottom).

Fig. 4. Run 1, TR-POD: POD bases ψ_1, ψ_2, ψ_3 computed in the first (top) and second iteration (middle); optimal basis functions computed by the reference control u_ref = ū_h (bottom).

Run 2 (constrained case). Now we impose the box constraints u_a = 0 and u_b = 7. We choose y_d(x) = 2 + x_1 x_2 and y_0(x) = 3 − 4(x_2 − 0.5)² for x = (x_1, x_2). The approximation results for the POD solution ū^l(t) with snapshots computed from the reference controls u_ref = 0 and u_ref = ū_h(t) are listed in Table 2, rows 1 and 4, respectively. The application of the POD basis update strategy by OS-POD approach 1) leads to a reduction of the error in the control by a factor of two in comparison to the case with no POD basis update.

Table 2. Run 2: errors ε^u_abs and ε^u_rel between POD and FE optimal controls for SQP-POD without basis update, OS-POD approach 1), OS-POD approach 2), SQP-POD with optimal basis, and TR-POD.

Finally, we utilize OS-POD approach 2) in combination with the inexact SQP-POD method. In this case, four SQP iterations are needed for convergence. In the first two iterations, l = 3 POD modes suffice. In the third iteration, one gradient step is performed to compute approximately a solution of the OS-POD optimization problem for the linear-quadratic SQP subproblem. Moreover, the number of utilized POD basis functions is increased to l = 7. In the last iteration, the number of POD modes is increased to the maximum number l = 10. The resulting POD controls are compared to the FE controls in the left plot of Fig. 5 as well as in row 3 of Table 2.

Fig. 5. Run 2: controls {u_i(t)}_{i=1}^4, black dotted: FE solution, black line: POD solution. Right: TR-POD.

From the computational point of view, OS-POD approach 1) is the fastest. The computational time for snapshot generation and OS-POD basis computation is 0.6 sec. The SQP-POD run then takes 7.3 sec. In comparison, the SQP-FE run takes 85.8 sec. The TR-POD algorithm needs five iterations for the optimization. In the first three iterations the initial POD basis suffices to guarantee the Carter condition with only 3, 3 and 5 POD basis functions. In the fourth iteration a new POD basis is computed and l = 7 POD modes are needed. This basis of rank seven can then be kept for the last step.

Acknowledgment: C. Gräßle and S. Metzdorf have been supported partially by the project "A-posteriori error estimation for TR-POD methods in Design and Control" financed by the University of Konstanz.

References

Arian, E., Fahl, M., Sachs, E.W.: Trust-region proper orthogonal decomposition for flow control. Technical Report 2000-25, ICASE (2000)
Bergounioux, M., Ito, K., Kunisch, K.: Primal-dual strategy for constrained optimal control problems. SIAM J. Control Optim. (1997)
Benner, P., Sachs, E.W., Volkwein, S.: Model order reduction for PDE constrained optimization. Internat. Ser. Numer. Math. (2014)
Carter, R.G.: On the global convergence of trust region algorithms using inexact gradient information. SIAM J. Numer. Anal. (1991)
Conn, A.R., Gould, N.I.M., Toint, P.L.: Trust-Region Methods. SIAM (2000)
Gräßle, C.: POD based inexact SQP methods for optimal control problems governed by a semilinear heat equation. Diploma thesis, University of Konstanz (2014)
Grimm, E., Gubisch, M., Volkwein, S.: A-posteriori error analysis and optimality-system POD for constrained optimal control. Comp. Sci. Eng. 15 (2014)
Holmes, P., Lumley, J.L., Berkooz, G., Rowley, C.W.: Turbulence, Coherent Structures, Dynamical Systems and Symmetry. Cambridge Monographs on Mechanics, Cambridge University Press (2012)
Hintermüller, M.: On a globalized augmented Lagrangian-SQP algorithm for nonlinear optimal control problems with box constraints. Internat. Ser. Numer. Math. (2001)
Hinze, M., Pinnau, R., Ulbrich, M., Ulbrich, S.: Optimization with PDE Constraints. Mathematical Modelling: Theory and Applications 23, Springer (2009)
Hinze, M., Volkwein, S.: Error estimates for abstract linear-quadratic optimal control problems using proper orthogonal decomposition. Comput. Optim. Appl. (2008)
Kahlbacher, M., Volkwein, S.: POD a-posteriori error based inexact SQP method for bilinear elliptic optimal control problems. ESAIM: M2AN (2011)
Kelley, C.T.: Iterative Methods for Optimization. Frontiers in Applied Mathematics, SIAM, Philadelphia, PA (1999)
Kunisch, K., Volkwein, S.: Proper orthogonal decomposition for optimality systems. ESAIM: M2AN 42, 1-23 (2008)
Metzdorf, S.: Optimality system POD for time-variant, linear-quadratic control problems. Diploma thesis, University of Konstanz (2015)
Nocedal, J., Wright, S.J.: Numerical Optimization. 2nd ed., Springer, New York (2006)
Qian, E., Grepl, M., Veroy, K., Willcox, K.: A certified trust region reduced basis approach to PDE-constrained optimization. ACDL Technical Report TR16-3 (2016)
Rogg, S.: Trust region POD for optimal boundary control of a semilinear heat equation. Diploma thesis, University of Konstanz (2014)
Sachs, E.W., Volkwein, S.: POD Galerkin approximations in PDE-constrained optimization. GAMM-Mitt. 33 (2010)
Schu, M.: Adaptive trust-region POD methods and their applications in finance. Ph.D. thesis, University of Trier (2012)
Tröltzsch, F.: Optimal Control of Partial Differential Equations: Theory, Methods and Applications. American Mathematical Society, 2nd ed. (2010)
Tröltzsch, F., Volkwein, S.: POD a-posteriori error estimates for linear-quadratic optimal control problems. Comput. Optim. Appl. (2009)
Yue, Y., Meerbergen, K.: Accelerating PDE-constrained optimization by model order reduction with error control. SIAM J. Optim. 23 (2013)


Nonmonotone Trust Region Methods for Nonlinear Equality Constrained Optimization without a Penalty Function Nonmonotone Trust Region Methods for Nonlinear Equality Constrained Optimization without a Penalty Function Michael Ulbrich and Stefan Ulbrich Zentrum Mathematik Technische Universität München München,

More information

On sequential optimality conditions for constrained optimization. José Mario Martínez martinez

On sequential optimality conditions for constrained optimization. José Mario Martínez  martinez On sequential optimality conditions for constrained optimization José Mario Martínez www.ime.unicamp.br/ martinez UNICAMP, Brazil 2011 Collaborators This talk is based in joint papers with Roberto Andreani

More information

Semismooth Newton Methods for an Optimal Boundary Control Problem of Wave Equations

Semismooth Newton Methods for an Optimal Boundary Control Problem of Wave Equations Semismooth Newton Methods for an Optimal Boundary Control Problem of Wave Equations Axel Kröner 1 Karl Kunisch 2 and Boris Vexler 3 1 Lehrstuhl für Mathematische Optimierung Technische Universität München

More information

AM 205: lecture 18. Last time: optimization methods Today: conditions for optimality

AM 205: lecture 18. Last time: optimization methods Today: conditions for optimality AM 205: lecture 18 Last time: optimization methods Today: conditions for optimality Existence of Global Minimum For example: f (x, y) = x 2 + y 2 is coercive on R 2 (global min. at (0, 0)) f (x) = x 3

More information

Discrete Projection Methods for Incompressible Fluid Flow Problems and Application to a Fluid-Structure Interaction

Discrete Projection Methods for Incompressible Fluid Flow Problems and Application to a Fluid-Structure Interaction Discrete Projection Methods for Incompressible Fluid Flow Problems and Application to a Fluid-Structure Interaction Problem Jörg-M. Sautter Mathematisches Institut, Universität Düsseldorf, Germany, sautter@am.uni-duesseldorf.de

More information

Examination paper for TMA4180 Optimization I

Examination paper for TMA4180 Optimization I Department of Mathematical Sciences Examination paper for TMA4180 Optimization I Academic contact during examination: Phone: Examination date: 26th May 2016 Examination time (from to): 09:00 13:00 Permitted

More information

Scalable BETI for Variational Inequalities

Scalable BETI for Variational Inequalities Scalable BETI for Variational Inequalities Jiří Bouchala, Zdeněk Dostál and Marie Sadowská Department of Applied Mathematics, Faculty of Electrical Engineering and Computer Science, VŠB-Technical University

More information

STATE-CONSTRAINED OPTIMAL CONTROL OF THE THREE-DIMENSIONAL STATIONARY NAVIER-STOKES EQUATIONS. 1. Introduction

STATE-CONSTRAINED OPTIMAL CONTROL OF THE THREE-DIMENSIONAL STATIONARY NAVIER-STOKES EQUATIONS. 1. Introduction STATE-CONSTRAINED OPTIMAL CONTROL OF THE THREE-DIMENSIONAL STATIONARY NAVIER-STOKES EQUATIONS J. C. DE LOS REYES AND R. GRIESSE Abstract. In this paper, an optimal control problem for the stationary Navier-

More information

Part 3: Trust-region methods for unconstrained optimization. Nick Gould (RAL)

Part 3: Trust-region methods for unconstrained optimization. Nick Gould (RAL) Part 3: Trust-region methods for unconstrained optimization Nick Gould (RAL) minimize x IR n f(x) MSc course on nonlinear optimization UNCONSTRAINED MINIMIZATION minimize x IR n f(x) where the objective

More information

Hamburger Beiträge zur Angewandten Mathematik

Hamburger Beiträge zur Angewandten Mathematik Hamburger Beiträge zur Angewandten Mathematik A finite element approximation to elliptic control problems in the presence of control and state constraints Klaus Deckelnick and Michael Hinze Nr. 2007-0

More information

REGULAR LAGRANGE MULTIPLIERS FOR CONTROL PROBLEMS WITH MIXED POINTWISE CONTROL-STATE CONSTRAINTS

REGULAR LAGRANGE MULTIPLIERS FOR CONTROL PROBLEMS WITH MIXED POINTWISE CONTROL-STATE CONSTRAINTS REGULAR LAGRANGE MULTIPLIERS FOR CONTROL PROBLEMS WITH MIXED POINTWISE CONTROL-STATE CONSTRAINTS fredi tröltzsch 1 Abstract. A class of quadratic optimization problems in Hilbert spaces is considered,

More information

A Robust Implementation of a Sequential Quadratic Programming Algorithm with Successive Error Restoration

A Robust Implementation of a Sequential Quadratic Programming Algorithm with Successive Error Restoration A Robust Implementation of a Sequential Quadratic Programming Algorithm with Successive Error Restoration Address: Prof. K. Schittkowski Department of Computer Science University of Bayreuth D - 95440

More information

On the Local Quadratic Convergence of the Primal-Dual Augmented Lagrangian Method

On the Local Quadratic Convergence of the Primal-Dual Augmented Lagrangian Method Optimization Methods and Software Vol. 00, No. 00, Month 200x, 1 11 On the Local Quadratic Convergence of the Primal-Dual Augmented Lagrangian Method ROMAN A. POLYAK Department of SEOR and Mathematical

More information

Empirical Interpolation of Nonlinear Parametrized Evolution Operators

Empirical Interpolation of Nonlinear Parametrized Evolution Operators MÜNSTER Empirical Interpolation of Nonlinear Parametrized Evolution Operators Martin Drohmann, Bernard Haasdonk, Mario Ohlberger 03/12/2010 MÜNSTER 2/20 > Motivation: Reduced Basis Method RB Scenario:

More information

AUGMENTED LAGRANGIAN METHODS FOR CONVEX OPTIMIZATION

AUGMENTED LAGRANGIAN METHODS FOR CONVEX OPTIMIZATION New Horizon of Mathematical Sciences RIKEN ithes - Tohoku AIMR - Tokyo IIS Joint Symposium April 28 2016 AUGMENTED LAGRANGIAN METHODS FOR CONVEX OPTIMIZATION T. Takeuchi Aihara Lab. Laboratories for Mathematics,

More information

The Squared Slacks Transformation in Nonlinear Programming

The Squared Slacks Transformation in Nonlinear Programming Technical Report No. n + P. Armand D. Orban The Squared Slacks Transformation in Nonlinear Programming August 29, 2007 Abstract. We recall the use of squared slacks used to transform inequality constraints

More information

A Reduced Rank Kalman Filter Ross Bannister, October/November 2009

A Reduced Rank Kalman Filter Ross Bannister, October/November 2009 , Octoer/Novemer 2009 hese notes give the detailed workings of Fisher's reduced rank Kalman filter (RRKF), which was developed for use in a variational environment (Fisher, 1998). hese notes are my interpretation

More information

Determination of Feasible Directions by Successive Quadratic Programming and Zoutendijk Algorithms: A Comparative Study

Determination of Feasible Directions by Successive Quadratic Programming and Zoutendijk Algorithms: A Comparative Study International Journal of Mathematics And Its Applications Vol.2 No.4 (2014), pp.47-56. ISSN: 2347-1557(online) Determination of Feasible Directions by Successive Quadratic Programming and Zoutendijk Algorithms:

More information

A Priori Error Analysis for Space-Time Finite Element Discretization of Parabolic Optimal Control Problems

A Priori Error Analysis for Space-Time Finite Element Discretization of Parabolic Optimal Control Problems A Priori Error Analysis for Space-Time Finite Element Discretization of Parabolic Optimal Control Problems D. Meidner and B. Vexler Abstract In this article we discuss a priori error estimates for Galerkin

More information

arxiv: v1 [math.na] 8 Jun 2018

arxiv: v1 [math.na] 8 Jun 2018 arxiv:1806.03347v1 [math.na] 8 Jun 2018 Interior Point Method with Modified Augmented Lagrangian for Penalty-Barrier Nonlinear Programming Martin Neuenhofen June 12, 2018 Abstract We present a numerical

More information