A CERTIFIED TRUST REGION REDUCED BASIS APPROACH TO PDE-CONSTRAINED OPTIMIZATION


SIAM J. SCI. COMPUT. Vol. 39, No. 5, pp. S434-S460, © 2017 Elizabeth Qian, Martin Grepl, Karen Veroy & Karen Willcox

A CERTIFIED TRUST REGION REDUCED BASIS APPROACH TO PDE-CONSTRAINED OPTIMIZATION

ELIZABETH QIAN, MARTIN GREPL, KAREN VEROY, AND KAREN WILLCOX

Abstract. Parameter optimization problems constrained by partial differential equations (PDEs) appear in many science and engineering applications. Solving these optimization problems may require a prohibitively large number of computationally expensive PDE solves, especially if the dimension of the design space is large. It is therefore advantageous to replace expensive high-dimensional PDE solvers (e.g., finite element) with lower-dimensional surrogate models. In this paper, the reduced basis (RB) model reduction method is used in conjunction with a trust region optimization framework to accelerate PDE-constrained parameter optimization. Novel a posteriori error bounds on the RB cost and cost gradient for quadratic cost functionals (e.g., least squares) are presented and used to guarantee convergence to the optimum of the high-fidelity model. The proposed certified RB trust region approach uses high-fidelity solves to update the RB model only if the approximation is no longer sufficiently accurate, reducing the number of full-fidelity solves required. We consider problems governed by elliptic and parabolic PDEs and present numerical results for a thermal fin model problem in which we are able to reduce the number of full solves necessary for the optimization by up to 86%.

Key words. model reduction, optimization, trust region methods, partial differential equations, reduced basis methods, error bounds, parametrized systems

AMS subject classifications. 35J20, 35K10, 49K20, 65K10, 65M15, 90C06, 90C30

DOI. 10.1137/16M…

1. Introduction. Optimization problems governed by partial differential equations (PDEs) appear in many settings across engineering and science disciplines, including engineering design optimization, optimal control problems, and inverse problems. Because typical optimization algorithms require numerous PDE evaluations, using classical discretization techniques (e.g., finite element) to solve these problems may be time-consuming and, in some cases, prohibitively expensive. One way to accelerate the solution of these problems is to replace expensive PDE evaluations with cheaper surrogate models. In this paper, we consider surrogate models based on projection-based reduced models. The use of surrogate models in optimization has an extensive literature (see, e.g., the review in [12]). Our interest is in formulations that retain convergence guarantees even when approximate information is employed throughout the optimization solution process.

Received by the editors July 1, 2016; accepted for publication (in revised form) February 24, 2017; published electronically October 26, 2017.

Funding: The work of the first author was supported by the U.S. Fulbright Student Program, the National Science Foundation Graduate Research Fellowship, and the Fannie and John Hertz Foundation. The MIT authors also acknowledge the support of the U.S. Department of Energy, Office of Advanced Scientific Computing Research (ASCR), Applied Mathematics Program, awards DE-FG02-08ER2585 and DE-SC…, as part of the DiaMonD Multifaceted Mathematics Integrated Capability Center. The work of the second and third authors was supported by the Excellence Initiative of the German federal and state governments and the German Research Foundation through grant GSC 111.
Department of Aeronautics and Astronautics, MIT, Cambridge, MA (elizqian@mit.edu, willcox@mit.edu). Institute for Geometry and Applied Mathematics, RWTH Aachen University, Aachen 52056, Germany (grepl@igpm.rwth-aachen.de). Aachen Institute for Advanced Study in Computational Engineering Science (AICES), RWTH Aachen University, Aachen 52062, Germany (veroy@aices.rwth-aachen.de).

Trust region methods are one class of approaches that have a rich history of convergence results; see, for example, [9] for a detailed discussion of trust region methods. Traditionally, trust region methods replaced high-fidelity objective function evaluations with local linear or quadratic Taylor expansions. These local approximations automatically satisfy first-order consistency conditions (i.e., the approximate model's objective and gradient evaluations are locally exact), which in turn provide guarantees that the resulting optimization solution will satisfy the optimality conditions of the original high-fidelity system. The influence of inexact gradient information is considered in [7, 28], and of inexact gradient and function information in [6, 8, 9]. In [1], the authors consider a trust region framework with more general approximation models of varying fidelity and show how adaptive corrections may be used to achieve the first-order consistency conditions required for a provably convergent formulation for general approximation models.

In addition to providing a theoretical framework that yields a convergent surrogate-based optimization formulation, trust region methods also provide an iterative framework for adaptation of the surrogate to the optimization problem of interest. Generating globally accurate surrogate models is typically prohibitively expensive, particularly when the underlying system is governed by PDEs. Thus, approaches that tailor the surrogate model (in our case a projection-based reduced model) to the optimization problem are of particular interest. While a number of adaptation approaches have been proposed for projection-based reduced models (see, e.g., [10, 20, 23, 25]), the challenge in the optimization setting is that regions of interest are not known a priori. Iterative approaches that adapt the reduced model as the optimization progresses have been considered in [5, 24]. In this paper we similarly adapt the reduced model as the optimization progresses, while also constructing our adaptation so as to rigorously address the convergence of the resulting optimization formulation.

We use the reduced basis (RB) method, a projection-based reduced-order modeling method, together with a trust region approach. The use of projection-based reduced models as surrogates in trust region optimization was first explored using proper orthogonal decomposition (POD) in [2]. In [2], the authors assume an upper bound on the inexactness of the function and gradient information resulting from the POD model and prove convergence of their algorithm using the results from [6, 28]. Unfortunately, verification of this upper bound in practice requires evaluation of the high-fidelity model. The work [33] further extends the results of [6, 28] to prove convergence (to a high-fidelity optimum) of a modified trust region algorithm that relies only on surrogate reduced-order model evaluations. In particular, [33] shows that by using upper bounds on the error in the cost functional and the gradient approximations, one can prove that the trust region solution converges to the exact solution. The work [33] further introduces heuristic error estimators for Krylov-Padé interpolatory reduced models and uses these estimators in the modified trust region algorithm. However, its reliance on heuristic estimators means that only heuristic convergence could be demonstrated and realized in practice.
Heuristic error indicators have also been applied to trust region optimization for POD models [34] and, in a stochastic context, to an approximation based on sparse grids [19]. The RB method is a reduced-order modeling technique for parametrized PDEs which supports rigorous a posteriori error estimation (see [27] for a review). We propose an RB trust region method for solving optimization problems constrained by elliptic and parabolic PDEs which avoids the costly offline phase of the traditional RB method and iteratively builds the reduced model along the optimization trajectory as the algorithm progresses. After introducing the problem statement in section 2, we present the following contributions.

1. In section 3, we present rigorous a posteriori error bounds for the optimization cost functional and its gradient. Our bounds are based on a primal-dual formulation and are rigorous and efficiently computable. The dual formulation permits us to efficiently evaluate the gradient of the cost functional and at the same time derive error bounds for the cost functional which are superlinearly convergent with respect to the primal and dual error bounds.

2. The error bounds play a crucial role in the RB trust region method introduced in section 4: unlike heuristic error indicators, they allow us to rigorously show convergence of the proposed approach to the (unknown) high-fidelity optimum. Furthermore, they allow us to efficiently control the accuracy of the RB surrogate model during the optimization. We avoid the computationally expensive offline phase and build the reduced model adaptively along the optimization trajectory, thus keeping the number of high-fidelity solves to a minimum.

In section 5, we present numerical results for parameter optimization problems constrained by elliptic and parabolic PDEs. We consider a thermal fin model problem with up to six variable parameters and compare the performance of our proposed RB trust region approach to that of a traditional optimization using high-fidelity PDE evaluations. We also compare it to a classical RB approach, where the reduced model is first generated during an offline stage and then used for the optimization in the online stage.

2. Problem formulation. In this section we introduce the PDE-constrained parameter optimization problem for both the elliptic and parabolic settings.

2.1. Preliminaries. Let $\Omega$ be a physical domain in $\mathbb{R}^d$ with Lipschitz continuous boundary $\partial\Omega$. We define the Hilbert space $X^e$ such that $H^1_0(\Omega) \subset X^e \subset H^1(\Omega)$ and $Y^e := L^2(\Omega)$, where $H^1(\Omega) = \{ v \mid v \in L^2(\Omega),\ \nabla v \in (L^2(\Omega))^d \}$, $H^1_0(\Omega) = \{ v \mid v \in H^1(\Omega),\ v|_{\partial\Omega} = 0 \}$, and $L^2(\Omega)$ is the space of square-integrable functions over $\Omega$. We associate with $X^e$ and $Y^e$ the inner products $(w,v)_{X^e}$ and $(w,v)_{Y^e}$ as well as the induced norms $\|\cdot\|_{X^e} = \sqrt{(\cdot,\cdot)_{X^e}}$ and $\|\cdot\|_{Y^e} = \sqrt{(\cdot,\cdot)_{Y^e}}$, respectively; for example, $(w,v)_{X^e} := \int_\Omega \nabla w \cdot \nabla v + \int_\Omega w v$ for all $w, v \in X^e$, and $(w,v)_{Y^e} := \int_\Omega w v$ for all $w, v \in Y^e$. We denote the corresponding dual spaces by $(X^e)'$ and $(Y^e)'$. The superscript $e$ indicates that we are dealing with the exact continuous domain. Finally, let $\mathcal{D} \subset \mathbb{R}^P$ be a $P$-dimensional set in which our $P$-tuple parameter $\mu := (\mu_1, \dots, \mu_P)$ resides.

We now define the conforming $\mathcal{N}$-dimensional finite element (FE) approximation space $X \subset X^e$ and define $Y := Y^e$, inheriting the inner product and norm definitions from $X^e$ and $Y^e$, respectively. For the parabolic case, we directly consider a time-discrete framework associated to the time interval $I := \,]0, t_f]$, where $\bar{I} := [0, t_f]$ is divided into $K$ uniform subintervals of length $\Delta t = t_f / K$. We introduce $\mathbb{K} := \{1, \dots, K\}$ for notational convenience and define $t^k := k\,\Delta t$, $k \in \mathbb{K}$, and finally $\bar{I} := \{t^0, \dots, t^K\}$. We shall assume that $\mathcal{N}$ and $K$ are large enough (i.e., $X$ is sufficiently rich and the time discretization sufficiently fine) such that the FE approximation guarantees a desired accuracy over the whole parameter domain $\mathcal{D}$.

We introduce the parameter-dependent bilinear form $a(\cdot,\cdot;\mu) : X \times X \to \mathbb{R}$ and its derivative with respect to the $i$th component of $\mu$, $a_{\mu_i}(\cdot,\cdot;\mu) : X \times X \to \mathbb{R}$, $i \in \{1, \dots, P\}$. We also introduce the parameter-independent bilinear forms $m(\cdot,\cdot) : X \times X \to \mathbb{R}$ and $d(\cdot,\cdot) : X \times X \to \mathbb{R}$.

We assume that all bilinear forms are continuous: for all $\mu \in \mathcal{D}$,

(1) $0 < \gamma_a(\mu) := \sup_{w \in X \setminus \{0\}} \sup_{v \in X \setminus \{0\}} \frac{a(w,v;\mu)}{\|w\|_X \|v\|_X} \le \gamma_a^0 < \infty$,

(2) $0 < \gamma_{a_i}(\mu) := \sup_{w \in X \setminus \{0\}} \sup_{v \in X \setminus \{0\}} \frac{a_{\mu_i}(w,v;\mu)}{\|w\|_X \|v\|_X} \le \gamma_{a_i}^0 < \infty, \quad i = 1, \dots, P$,

(3) $0 < \gamma_m := \sup_{w \in X \setminus \{0\}} \sup_{v \in X \setminus \{0\}} \frac{m(w,v)}{\|w\|_Y \|v\|_Y} < \infty$,

(4) $0 < \gamma_d := \sup_{w \in X \setminus \{0\}} \sup_{v \in X \setminus \{0\}} \frac{d(w,v)}{\|w\|_X \|v\|_X} < \infty$.

We also assume they are symmetric, i.e., for all $w, v \in X$ and $\mu \in \mathcal{D}$, $a(v,w;\mu) = a(w,v;\mu)$, $a_{\mu_i}(v,w;\mu) = a_{\mu_i}(w,v;\mu)$, $m(v,w) = m(w,v)$, and $d(v,w) = d(w,v)$. Additionally, we assume that $a(\cdot,\cdot;\mu)$ and $m(\cdot,\cdot)$ are coercive:

(5) $0 < \alpha_0^a \le \alpha(\mu) := \inf_{v \in X} \frac{a(v,v;\mu)}{\|v\|_X^2} \quad \forall \mu \in \mathcal{D}, \qquad 0 < \alpha_0^m := \inf_{v \in X} \frac{m(v,v)}{\|v\|_Y^2}$.

We next introduce two $X$-continuous linear functionals, the parameter-dependent $f(\cdot;\mu) : X \to \mathbb{R}$ and the parameter-independent $l(\cdot) : X \to \mathbb{R}$. Finally, we assume that all parameter-dependent linear and bilinear forms depend affinely on functions of the parameter $\mu$; i.e., we require that $a(w,v;\mu)$ and $f(v;\mu)$ can be expressed as

(6) $a(w,v;\mu) = \sum_{q=1}^{Q_a} \Theta_a^q(\mu)\, a^q(w,v), \qquad f(v;\mu) = \sum_{q=1}^{Q_f} \Theta_f^q(\mu)\, f^q(v)$

for all $w, v \in X$ and $\mu \in \mathcal{D}$, where $Q_a$ and $Q_f$ are some (preferably small) integers, the functions $\Theta_a^q(\mu), \Theta_f^q(\mu) : \mathcal{D} \to \mathbb{R}$ are twice continuously differentiable and depend on $\mu$, but the continuous bilinear and linear forms $a^q(\cdot,\cdot) : X \times X \to \mathbb{R}$ and $f^q : X \to \mathbb{R}$ do not depend on $\mu$. We note that the functions $\Theta_a^q(\mu)$ and $\Theta_f^q(\mu)$ may depend nonlinearly on the parameter, and thus the forms $a(w,v;\mu)$ and $f(v;\mu)$ may also depend nonlinearly on $\mu$. For simplicity, we assume that $m(\cdot,\cdot)$, $d(\cdot,\cdot)$, and $l(\cdot)$ are parameter-independent, although extensions to affine parameter dependence are readily admitted [27]. We note that the bilinear and linear forms $d$ and $l$ will appear as the quadratic and linear cost terms in the next section, and that the bilinear form $m$ represents the mass term in the parabolic problem statement.

For the development of the a posteriori error bounds we also require the following ingredients. We assume that we have access to a positive lower bound $\alpha_{LB}(\mu) : \mathcal{D} \to \mathbb{R}_+$ for the coercivity constant $\alpha(\mu)$ defined in (5) such that

(7) $0 < \alpha_0^a \le \alpha_{LB}(\mu) \le \alpha(\mu) \quad \forall \mu \in \mathcal{D}$

and an upper bound for the continuity constants $\gamma_{a_i}(\mu)$ defined in (2) such that

(8) $\gamma_{a_i}^{UB}(\mu) \ge \gamma_{a_i}(\mu) \quad \forall \mu \in \mathcal{D}$.

We note that these lower and upper bounds are used in the a posteriori error bound formulation to replace the actual coercivity and continuity constants, respectively. We thus require that these lower and upper bounds can be efficiently evaluated online, i.e., that the computational cost is independent of the FE dimension $\mathcal{N}$. Various recipes exist to obtain such bounds [18, 27]; see subsection 3.3 for more details.
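To make the affine structure (6) concrete, the following is a minimal sketch (our illustration, not code from the paper) of how a parametrized operator and right-hand side can be assembled from precomputed parameter-independent components; all names (assemble_affine, A_q, f_q, theta_a, theta_f) are hypothetical.

```python
import numpy as np

def assemble_affine(theta_a, theta_f, A_q, f_q, mu):
    # Weighted sums over the affine terms in (6); the A_q and f_q are
    # parameter-independent and can be precomputed once offline.
    A = sum(th(mu) * Aq for th, Aq in zip(theta_a, A_q))
    f = sum(th(mu) * fq for th, fq in zip(theta_f, f_q))
    return A, f

# Toy example with Q_a = 2, Q_f = 1 on a 3-DOF system:
# a(w, v; mu) = mu[0] * a^1(w, v) + 1.0 * a^2(w, v),  f(v; mu) = f^1(v).
A_q = [np.eye(3), np.diag([2.0, 1.0, 2.0])]
f_q = [np.ones(3)]
theta_a = [lambda mu: mu[0], lambda mu: 1.0]
theta_f = [lambda mu: 1.0]

A, f = assemble_affine(theta_a, theta_f, A_q, f_q, mu=np.array([0.5]))
u = np.linalg.solve(A, f)  # high-fidelity solve at this parameter
```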

2.2. Elliptic PDE-constrained optimization. We consider the constrained minimization of the output least-squares cost functional

(9a) $\min_{\mu \in \mathcal{D}} \ \|L(u(\mu)) - g_{\mathrm{ref}}\|_D^2 + \lambda R(\mu)$

(9b) s.t. $u(\mu) \in X$ satisfies $a(u(\mu), v; \mu) = f(v;\mu) \quad \forall v \in X$,

where $L : X \to D$ is a linear (output) functional and $g_{\mathrm{ref}} \in D$ is a reference output, e.g., obtained from experimental measurements. Furthermore, $D$ is a suitable Hilbert space of observations with inner product $(\cdot,\cdot)_D$ and induced norm $\|\cdot\|_D = \sqrt{(\cdot,\cdot)_D}$, and $\lambda \in \mathbb{R}_+$ together with the twice continuously differentiable function $R : \mathcal{D} \to \mathbb{R}$ forms a scaled regularization term. Given our assumptions, it follows that the cost functional is continuous, and thus at least one solution to (9) exists [16]. Note that we do not consider the parameter-constrained setting in this paper.

We next expand (9a) to get $(L(u) - g_{\mathrm{ref}}, L(u) - g_{\mathrm{ref}})_D = (L(u), L(u))_D - 2(L(u), g_{\mathrm{ref}})_D + (g_{\mathrm{ref}}, g_{\mathrm{ref}})_D$. Thus, defining $d(w,v) := (L(w), L(v))_D$ for all $w, v \in X$ and $l(v) := -2(L(v), g_{\mathrm{ref}})_D$ for all $v \in X$ and dropping the constant term $(g_{\mathrm{ref}}, g_{\mathrm{ref}})_D$, we obtain the following equivalent formulation for the optimization problem:

(10a) $\min_{\mu \in \mathcal{D}} J(\mu)$, where $J(\mu) := d(u(\mu), u(\mu)) + l(u(\mu)) + \lambda R(\mu)$,

(10b) s.t. $u(\mu) \in X$ satisfies $a(u(\mu), v; \mu) = f(v;\mu) \quad \forall v \in X$.

In what follows, we will use this more general quadratic cost formulation in developing the theory of the method.

Gradient-based optimization methods require access to the cost derivatives, which may be efficiently calculated using adjoint methods. We thus introduce the FE adjoint (dual) problem associated with our primal problem and cost in (10) [29, 16] as follows: Given $\mu \in \mathcal{D}$ and the associated solution $u(\mu)$ to (10b), find $p(\mu) \in X$ satisfying

(11) $a(v, p(\mu); \mu) = 2\, d(u(\mu), v) + l(v) \quad \forall v \in X$.

We also introduce the gradient of the cost function, $\nabla J(\mu) \in \mathbb{R}^P$, with respect to the parameter $\mu$, given by

(12) $\nabla J(\mu) = \left( \frac{\partial J(\mu)}{\partial \mu_1}, \frac{\partial J(\mu)}{\partial \mu_2}, \dots, \frac{\partial J(\mu)}{\partial \mu_P} \right)^T$,

where [16]

(13) $\frac{\partial J(\mu)}{\partial \mu_i} = f_{\mu_i}(p(\mu); \mu) - a_{\mu_i}(u(\mu), p(\mu); \mu) + \lambda \frac{\partial R(\mu)}{\partial \mu_i}, \quad i = 1, \dots, P$,

is the partial derivative of the cost function with respect to the $i$th parameter $\mu_i$. We note that the formulation of the adjoint problem (11) is tied to the specific cost functional (10a). The approach presented in this paper thus only holds for quadratic and/or linear cost functionals, which are prevalent in applications. Other functionals would require different adjoint problems tailored to the specific case.
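The primal-adjoint structure of (10)-(13) translates directly to the discrete level. The following is a hedged sketch (our illustration, not the authors' code) of one cost and gradient evaluation, assuming callables A(mu) and A_i(mu, i) for the FE matrix and its parameter derivatives, f(mu) and f_i(mu, i) for the load vector and its derivatives, a symmetric matrix D and vector l for the quadratic and linear cost terms, and R, dR for the regularizer; all names are hypothetical.

```python
import numpy as np

def cost_and_gradient(mu, A, A_i, f, f_i, D, l, lam, R, dR):
    u = np.linalg.solve(A(mu), f(mu))            # primal solve, cf. (10b)
    J = u @ D @ u + l @ u + lam * R(mu)          # cost, cf. (10a)
    p = np.linalg.solve(A(mu).T, 2 * D @ u + l)  # adjoint solve, cf. (11)
    # Gradient entries, cf. (13): f_i(p) - a_i(u, p) + lam * dR/dmu_i.
    grad = np.array([f_i(mu, i) @ p - u @ A_i(mu, i) @ p + lam * dR(mu, i)
                     for i in range(len(mu))])
    return J, grad
```

Note that a single adjoint solve yields all $P$ gradient entries, which is the reason the dual problem is preferred over $P$ sensitivity solves.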

Remark 1. The trust region approach to be introduced in section 4 requires the Lipschitz continuity of the cost functional and its gradient. Given our assumptions stated in subsection 2.1, it is shown in [11] that the solution of the primal problem (10b) is indeed Lipschitz continuous with respect to the parameter $\mu$; i.e., given any $\mu, \mu' \in \mathcal{D}$, there exists a positive constant $C$ such that $\|u(\mu) - u(\mu')\|_X \le C \|\mu - \mu'\|$. It thus follows from the continuity of $d(\cdot,\cdot)$ and $l(\cdot)$ that the solution of the adjoint problem (11) is also Lipschitz continuous. Finally, we conclude from the definitions of the cost functional (10a) and its gradient (13), invoking again the continuity of the involved (bi)linear forms and the continuous differentiability of the functions $\Theta_a^q(\mu)$, $\Theta_f^q(\mu)$, that the cost functional and its gradient are also Lipschitz continuous with respect to the parameter $\mu$.

2.3. Parabolic PDE-constrained optimization. The parabolic optimization formulation is analogous to the elliptic case. We therefore directly consider the following (time-discrete) constrained minimization problem with quadratic cost:

(14a) $\min_{\mu \in \mathcal{D}} J(\mu)$, where $J(\mu) := \Delta t \sum_{k=1}^{K} \left[ d(u^k(\mu), u^k(\mu)) + l(u^k(\mu)) \right] + \lambda R(\mu)$,

(14b) s.t. $u^k(\mu) \in X$ satisfies $\frac{1}{\Delta t}\, m(u^k(\mu) - u^{k-1}(\mu), v) + a(u^k(\mu), v; \mu) = f(v;\mu)\, y(t^k) \quad \forall v \in X,\ \forall k \in \mathbb{K}$,

(14c) with initial condition $u^0(\mu) = 0$,

where $y(t^k)$ is a (known) time-dependent forcing input, and we assume zero initial conditions for simplicity. Note that we consider an Euler-backward discretization for the time integration; however, we can also readily treat higher-order schemes such as Crank-Nicolson. In this work we assume, and thus limit our discussion to, time-independent regularization terms.

Similar to the elliptic case, we introduce the FE adjoint (dual) problem associated with our primal problem (14b) and cost in (14a): Given $\mu \in \mathcal{D}$ and the associated solution $u^k(\mu)$, $k \in \mathbb{K}$, to (14b), the adjoint $p^k(\mu) \in X$, $k \in \mathbb{K}$, satisfies

(15) $\frac{1}{\Delta t}\, m(v, p^k(\mu) - p^{k+1}(\mu)) + a(v, p^k(\mu); \mu) = 2\, d(u^k(\mu), v) + l(v) \quad \forall v \in X,\ \forall k \in \mathbb{K}$,

with final condition $p^{K+1}(\mu) = 0$. Note that the adjoint field variable evolves backward in time. Similar to the elliptic case, we define the gradient, $\nabla J(\mu) \in \mathbb{R}^P$, with entries

(16) $\frac{\partial J(\mu)}{\partial \mu_i} = \Delta t \sum_{k=1}^{K} \left[ f_{\mu_i}(p^k(\mu); \mu) - a_{\mu_i}(u^k(\mu), p^k(\mu); \mu) \right] + \lambda \frac{\partial R(\mu)}{\partial \mu_i}, \quad i = 1, \dots, P$,

which are the partial derivatives of the cost functional (14a) with respect to the $i$th parameter $\mu_i$.

We briefly comment on the Lipschitz continuity of the cost functional and its gradient in the time-discrete parabolic case as stated in this section. To this end, we first note that the result in [11], i.e., directly showing the Lipschitz continuity of the solution to the elliptic problem, can also be extended to the parabolic setting. We also note that Lipschitz continuity follows from the boundedness of the sensitivity derivative, which, given our assumptions, is shown for the parabolic state equation in [13].
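For intuition, a backward-Euler sweep for (14b) looks as follows in code. This is a minimal dense-matrix sketch under our own naming (M, A_mu, f_mu, y are hypothetical stand-ins for the assembled FE operators); a practical implementation would use sparse factorizations.

```python
import numpy as np

def solve_primal_parabolic(M, A_mu, f_mu, y, K, dt):
    # Time stepping for (14b): (1/dt) m(u^k - u^{k-1}, v) + a(u^k, v; mu)
    #                            = f(v; mu) y(t^k), with u^0 = 0, cf. (14c).
    n = len(f_mu)
    u = np.zeros(n)                      # zero initial condition
    lhs = M / dt + A_mu                  # constant-in-time system matrix
    trajectory = []
    for k in range(1, K + 1):
        rhs = M @ u / dt + f_mu * y(k * dt)
        u = np.linalg.solve(lhs, rhs)
        trajectory.append(u.copy())
    return trajectory                    # [u^1, ..., u^K]
```

The adjoint sweep (15) is structurally identical but runs backward in time from the final condition $p^{K+1}(\mu) = 0$.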

3. Reduced basis method. The RB method is a projection-based model reduction method for parametrized PDEs [27]. Traditionally, it consists of an expensive, time-consuming offline phase, in which the reduced basis is built, and an inexpensive online phase, during which the prebuilt RB may be exploited for rapid and certified simulations of the PDE at any parameter within the admissible parameter domain. In this section, we present primal-dual RB approximations and associated novel a posteriori error estimation procedures for the elliptic and parabolic PDE-constrained parameter optimization problems introduced in the last section. To this end, we employ the RB approximations as surrogate models in the optimization problems (10) and (14) and develop new rigorous and efficiently evaluable error bounds for the cost functional and its gradient. In this work, we leverage these new error bounds to break from the offline/online paradigm in the optimization; i.e., we build the RB approximation on the fly during the iterative optimization procedure. Our error bounds guide the RB updates and at the same time allow us to guarantee convergence of the surrogate optimization to the (unknown) optimal solution of the original (FE) optimization problem. We note, however, that the results presented here also apply to the traditional offline/online RB setting. Subsections 3.1 and 3.2 present the RB approximation and error estimation results for the elliptic and parabolic case, respectively. Subsection 3.3 discusses the computational aspects of the RB approximation.

3.1. Elliptic problems. This section introduces the RB approximation and error estimation results for the elliptic optimization problem (10).

3.1.1. Approximation. Given $X$-orthogonal sets of primal and dual basis vectors $\zeta_n$ and $\psi_n$, $n = 1, \dots, N$, we denote the $N$-dimensional primal and dual RB approximation spaces by $X_N^{pr}$ and $X_N^{du}$, defined as

$X_N^{pr} := \mathrm{span}\{\zeta_n,\ 1 \le n \le N\} = \mathrm{span}\{u(\mu_n^{pr}),\ 1 \le n \le N\}$,
$X_N^{du} := \mathrm{span}\{\psi_n,\ 1 \le n \le N\} = \mathrm{span}\{p(\mu_n^{du}),\ 1 \le n \le N\}$.

We will comment on how the $\mu_n^{pr}$ and $\mu_n^{du}$ are chosen in subsections 3.3 and 4.1. For simplicity, we assume that the dimensions of the primal and dual RB spaces are the same, but the results in this section extend directly to the case with different dimensions. The RB approximation is then obtained via a Galerkin projection: Given $\mu \in \mathcal{D}$, the RB primal approximation $u_N(\mu) \in X_N^{pr}$ satisfies

(17) $a(u_N(\mu), v; \mu) = f(v;\mu) \quad \forall v \in X_N^{pr}$,

and the RB dual approximation $p_N(\mu) \in X_N^{du}$ is given by

(18) $a(v, p_N(\mu); \mu) = 2\, d(u_N(\mu), v) + l(v) \quad \forall v \in X_N^{du}$.

The RB cost functional and its derivative with respect to $\mu_i$ can then be computed from

(19) $J_N(\mu) = d(u_N(\mu), u_N(\mu)) + l(u_N(\mu)) + \lambda R(\mu)$,

(20) $\frac{\partial J_N(\mu)}{\partial \mu_i} = f_{\mu_i}(p_N(\mu); \mu) - a_{\mu_i}(u_N(\mu), p_N(\mu); \mu) + \lambda \frac{\partial R(\mu)}{\partial \mu_i}$.
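In matrix form, the Galerkin projection (17) reduces an $\mathcal{N}$-dimensional solve to an $N$-dimensional one. The sketch below is our illustration (V holds the primal basis vectors $\zeta_n$ as columns; the other names follow the hypothetical affine-assembly sketch in subsection 2.1) and shows why each online solve is independent of the FE dimension once the reduced affine components have been precomputed.

```python
import numpy as np

def reduced_solve(V, A_q, f_q, theta_a, theta_f, mu):
    # Offline (parameter-independent): project each affine component, cf. (6).
    Aq_N = [V.T @ Aq @ V for Aq in A_q]   # N x N reduced matrices
    fq_N = [V.T @ fq for fq in f_q]       # N-vectors
    # Online (for each mu): assemble and solve the N x N system, cf. (17).
    A_N = sum(th(mu) * Aq for th, Aq in zip(theta_a, Aq_N))
    f_N = sum(th(mu) * fq for th, fq in zip(theta_f, fq_N))
    uN = np.linalg.solve(A_N, f_N)        # RB coefficients of u_N(mu)
    return V @ uN                         # lift to the FE space if needed
```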

3.1.2. A posteriori error estimation. We turn to the a posteriori error bounds. We first require the following.

Definition 2. The residuals of the primal and dual equations are defined by

(21) $r^{pr}(v; \mu) := f(v;\mu) - a(u_N(\mu), v; \mu) \quad \forall v \in X,\ \forall \mu \in \mathcal{D}$,

(22) $r^{du}(v; \mu) := 2\, d(u_N(\mu), v) + l(v) - a(v, p_N(\mu); \mu) \quad \forall v \in X,\ \forall \mu \in \mathcal{D}$.

We also define the primal and dual errors as follows:

(23) $e^{pr}(\mu) := u(\mu) - u_N(\mu)$ and $e^{du}(\mu) := p(\mu) - p_N(\mu)$.

We can now prove the following.

Lemma 3. Let $u(\mu)$ and $u_N(\mu)$ be the solutions to (10b) and (17), respectively. Furthermore, let $p(\mu)$ and $p_N(\mu)$ be the solutions to the associated dual equations (11) and (18). The error in the primal variable, $e^{pr}(\mu) = u(\mu) - u_N(\mu)$, is bounded by

(24) $\|e^{pr}(\mu)\|_X \le \Delta_N^{pr}(\mu) := \frac{\|r^{pr}(\cdot;\mu)\|_{X'}}{\alpha_{LB}(\mu)} \quad \forall \mu \in \mathcal{D}$,

and the error in the dual variable, $e^{du}(\mu) = p(\mu) - p_N(\mu)$, by

(25) $\|e^{du}(\mu)\|_X \le \Delta_N^{du}(\mu) := \frac{\|r^{du}(\cdot;\mu)\|_{X'} + 2\gamma_d\, \Delta_N^{pr}(\mu)}{\alpha_{LB}(\mu)} \quad \forall \mu \in \mathcal{D}$.

Proof. The bound (24) is standard; see, e.g., [27]. We follow an analogous procedure to show (25). We first note from (11) and (22) that the dual error satisfies

(26) $a(v, e^{du}(\mu); \mu) = r^{du}(v; \mu) + 2\, d(e^{pr}(\mu), v)$.

Choosing $v = e^{du}(\mu)$ and invoking (5), (7), and (4), we obtain
$\alpha_{LB}(\mu) \|e^{du}(\mu)\|_X^2 \le \|r^{du}(\cdot;\mu)\|_{X'} \|e^{du}(\mu)\|_X + 2\gamma_d \|e^{pr}(\mu)\|_X \|e^{du}(\mu)\|_X$.
The result (25) then follows from (24).

We may now consider the error in the cost functional and its gradient.

Theorem 4. The error in the cost functional, $e^J(\mu) := J(\mu) - J_N(\mu)$, satisfies

(27) $|e^J(\mu)| \le \Delta_N^J(\mu) := \|r^{du}(\cdot;\mu)\|_{X'}\, \Delta_N^{pr}(\mu) + \gamma_d \left(\Delta_N^{pr}(\mu)\right)^2 + |r^{pr}(p_N(\mu); \mu)| \quad \forall \mu \in \mathcal{D}$,

where $\Delta_N^{pr}(\mu)$ is the primal bound defined in Lemma 3.

Proof. It follows from (10a) and (19) that $e^J(\mu) = d(u(\mu), u(\mu)) - d(u_N(\mu), u_N(\mu)) + l(e^{pr}(\mu))$. Adding and subtracting $r^{pr}(p_N(\mu); \mu)$ on the right-hand side and recalling the primal error-residual relationship, $a(e^{pr}(\mu), v; \mu) = r^{pr}(v; \mu)$ for all $v \in X$, we obtain
$e^J(\mu) = d(u(\mu), u(\mu)) - d(u_N(\mu), u_N(\mu)) + l(e^{pr}(\mu)) + r^{pr}(p_N(\mu); \mu) - a(e^{pr}(\mu), p_N(\mu); \mu)$.

If we also add and subtract the term $2\, d(u_N(\mu), e^{pr}(\mu))$ on the right-hand side and invoke (22), it follows that
$e^J(\mu) = r^{du}(e^{pr}(\mu); \mu) + d(u(\mu), u(\mu)) - d(u_N(\mu), u_N(\mu)) + r^{pr}(p_N(\mu); \mu) - 2\, d(u_N(\mu), e^{pr}(\mu))$.
Expanding $d(u_N(\mu), e^{pr}(\mu)) = d(u_N(\mu), u(\mu) - u_N(\mu))$, we obtain
$e^J(\mu) = r^{du}(e^{pr}(\mu); \mu) + r^{pr}(p_N(\mu); \mu) + d(u(\mu), u(\mu)) - 2\, d(u_N(\mu), u(\mu)) + d(u_N(\mu), u_N(\mu)) = r^{du}(e^{pr}(\mu); \mu) + r^{pr}(p_N(\mu); \mu) + d(e^{pr}(\mu), e^{pr}(\mu))$.
Using the continuity of the bilinear form $d$ yields
$|e^J(\mu)| \le \|r^{du}(\cdot;\mu)\|_{X'} \|e^{pr}(\mu)\|_X + \gamma_d \|e^{pr}(\mu)\|_X^2 + |r^{pr}(p_N(\mu); \mu)|$.
The desired result directly follows from Lemma 3.

Before presenting the result for the cost gradient, we make several remarks. First, since our goal is to develop effective a posteriori error bounds for the cost functional (as opposed to increasing the accuracy of the RB cost functional),[1] we incorporate the residual correction term in the bound (27) instead of correcting the RB cost functional; see, e.g., the discussion in [30]. Second, the dual problem plays two roles in our setting: it allows us to (i) efficiently compute the cost gradient from (20) without having to resort to sensitivity derivatives, and (ii) devise an a posteriori error bound for the cost functional which converges superlinearly to zero with respect to the primal and dual bounds [26]. Finally, we note that certified RB approximations for quadratic outputs have been previously considered in [17]. As opposed to the dual problem defined in (11) in this paper, the authors in [17] introduce a dual problem which is dependent on the RB solution $u_N(\mu)$, e.g., for $p(\mu) \in X$, $a(v, p(\mu); \mu) = d(u(\mu) + u_N(\mu), v) + l(v)$ for all $v \in X$. Although we would obtain a similar bound to (27) for the cost functional using this formulation, the dual variable $p(\mu)$ thus defined cannot be used to compute the cost gradient from (13).

We now turn to the error bound for the cost gradient.

Theorem 5. The error in the cost gradient, $e^{\nabla J}(\mu) = \nabla J(\mu) - \nabla J_N(\mu)$, satisfies

(28) $\|e^{\nabla J}(\mu)\| \le \Delta_N^{\nabla J}(\mu) := \|\boldsymbol{\Delta}_N^{\nabla J}(\mu)\|$,

where $\|\cdot\|$ is the Euclidean norm and $\boldsymbol{\Delta}_N^{\nabla J}(\mu)$ is a vector whose $i$th component is the bound on the error in the $i$th component of the gradient, given by

(29) $\Delta_{N,i}^{\nabla J}(\mu) = \|f_{\mu_i}(\cdot;\mu)\|_{X'}\, \Delta_N^{du}(\mu) + \gamma_{a_i}^{UB}(\mu) \left( \Delta_N^{pr}(\mu)\, \Delta_N^{du}(\mu) + \Delta_N^{pr}(\mu)\, \|p_N(\mu)\|_X + \|u_N(\mu)\|_X\, \Delta_N^{du}(\mu) \right)$,

where $\Delta_N^{pr}(\mu)$ and $\Delta_N^{du}(\mu)$ are the primal and dual error bounds defined in Lemma 3.

[1] We will observe in section 5 that the RB cost functional as defined in (19) is sufficiently accurate for our purposes.

Proof. We consider the error in the derivative of the cost with respect to $\mu_i$, the $i$th element of the parameter vector. It follows from (13) and (20) that

(30) $e_i^{\nabla J}(\mu) = f_{\mu_i}(e^{du}(\mu); \mu) - \left( a_{\mu_i}(u(\mu), p(\mu); \mu) - a_{\mu_i}(u_N(\mu), p_N(\mu); \mu) \right)$.

We next note that

(31) $a_{\mu_i}(u(\mu), p(\mu); \mu) - a_{\mu_i}(u_N(\mu), p_N(\mu); \mu) = a_{\mu_i}(e^{pr}(\mu), e^{du}(\mu); \mu) + a_{\mu_i}(e^{pr}(\mu), p_N(\mu); \mu) + a_{\mu_i}(u_N(\mu), e^{du}(\mu); \mu)$.

Plugging (31) into (30) and invoking (2) and (8), we obtain

(32) $|e_i^{\nabla J}(\mu)| \le \|f_{\mu_i}(\cdot;\mu)\|_{X'} \|e^{du}(\mu)\|_X + \gamma_{a_i}^{UB}(\mu) \|e^{pr}(\mu)\|_X \|e^{du}(\mu)\|_X + \gamma_{a_i}^{UB}(\mu) \|e^{pr}(\mu)\|_X \|p_N(\mu)\|_X + \gamma_{a_i}^{UB}(\mu) \|u_N(\mu)\|_X \|e^{du}(\mu)\|_X$.

The result (29) then follows from Lemma 3, and (28) is obtained by taking the norm of all components.

In contrast to the a posteriori error bound for the cost functional, the bound for the gradient does not exhibit superlinear convergence with respect to the primal and dual errors. However, even fairly large relative errors (of 50% or more) in the gradient are permissible in the trust region framework without jeopardizing the overall convergence [6, 8]. We thus expect our gradient error bound to be sufficient to guarantee convergence of the RB trust region approach; see also the discussion in subsection 4.1 and the numerical results in section 5.

Although we consider only upper bounds for the various error terms in this paper, lower bounds are sometimes of interest from a theoretical point of view, e.g., to quantify the convergence properties of greedy approaches to construct the reduced basis. Although it is possible to derive lower bounds for the error terms in Lemma 3 (see [27] for the primal error), similar results are not known for the error in the cost functional and gradient, nor for the parabolic case discussed in the next section. However, the convergence theory for the trust region method discussed in section 4 only requires upper bounds, which are the main focus of the paper.

3.2. Parabolic problems. This section introduces the RB approximation and error estimation results for the parabolic case.

3.2.1. Approximation. We introduce the primal and dual RB spaces
$X_N^{pr} = \mathrm{span}\{\zeta_n,\ 1 \le n \le N\}$, $X_N^{du} = \mathrm{span}\{\psi_n,\ 1 \le n \le N\}$,
where the $\zeta_n$ and the $\psi_n$, $n = 1, \dots, N$, are mutually $X$-orthogonal basis functions. We comment on their construction in subsections 3.3 and 4.1. The primal and dual RB approximations are obtained from a Galerkin projection: Given $\mu \in \mathcal{D}$, the primal approximation $u_N^k(\mu) \in X_N^{pr}$ to $u^k(\mu) \in X$ satisfies

(33) $\frac{1}{\Delta t}\, m(u_N^k(\mu) - u_N^{k-1}(\mu), v) + a(u_N^k(\mu), v; \mu) = f(v;\mu)\, y(t^k) \quad \forall v \in X_N^{pr}$,

and the dual approximation $p_N^k(\mu) \in X_N^{du}$ to $p^k(\mu) \in X$ is given by

(34) $\frac{1}{\Delta t}\, m(v, p_N^k(\mu) - p_N^{k+1}(\mu)) + a(v, p_N^k(\mu); \mu) = 2\, d(u_N^k(\mu), v) + l(v) \quad \forall v \in X_N^{du}$.

We can then calculate the RB cost and its derivative with respect to $\mu_i$ via

(35) $J_N(\mu) := \Delta t \sum_{k=1}^{K} \left[ d(u_N^k(\mu), u_N^k(\mu)) + l(u_N^k(\mu)) \right] + \lambda R(\mu)$,

(36) $\frac{\partial J_N(\mu)}{\partial \mu_i} = \Delta t \sum_{k=1}^{K} \left[ f_{\mu_i}(p_N^k(\mu); \mu) - a_{\mu_i}(u_N^k(\mu), p_N^k(\mu); \mu) \right] + \lambda \frac{\partial R(\mu)}{\partial \mu_i}$.

3.2.2. A posteriori error estimation. The a posteriori error estimation procedure for the parabolic problem is analogous to that of the elliptic problem. In this section, we present the error bounds necessary for the trust region approach proposed in section 4, deferring proofs to Appendix A. We first introduce the residuals in Definition 6.

Definition 6. The residuals of the primal and dual equations are defined by

(37) $r_k^{pr}(v; \mu) = f(v;\mu)\, y(t^k) - a(u_N^k(\mu), v; \mu) - \frac{1}{\Delta t}\, m(u_N^k(\mu) - u_N^{k-1}(\mu), v)$,

(38) $r_k^{du}(v; \mu) = 2\, d(u_N^k(\mu), v) + l(v) - a(v, p_N^k(\mu); \mu) - \frac{1}{\Delta t}\, m(v, p_N^k(\mu) - p_N^{k+1}(\mu))$

for all $v \in X$, $k \in \mathbb{K}$, and $\mu \in \mathcal{D}$.

For the parabolic case, we also require the spatiotemporal energy norms for the primal and dual problems as follows.

Definition 7. The spatiotemporal energy norms for the primal and dual problems are given by

(39a) $\|v\|^{pr} := \left[ m(v^K, v^K) + \Delta t \sum_{k=1}^{K} a(v^k, v^k; \mu) \right]^{1/2}$, $v^k \in X$,

(39b) $\|v\|^{du} := \left[ m(v^1, v^1) + \Delta t \sum_{k=1}^{K} a(v^k, v^k; \mu) \right]^{1/2}$, $v^k \in X$.

We may now prove the following results for the primal and dual RB errors.

Lemma 8. Let $u^k(\mu)$ and $u_N^k(\mu)$, $k \in \mathbb{K}$, be the solutions to (14b) and (33), and let $p^k(\mu)$ and $p_N^k(\mu)$, $k \in \mathbb{K}$, be the solutions to the associated dual equations (15) and (34). Then the following bounds for the error in the primal variable, $e_k^{pr}(\mu) = u^k(\mu) - u_N^k(\mu)$, and the dual variable, $e_k^{du}(\mu) = p^k(\mu) - p_N^k(\mu)$, hold for all $\mu \in \mathcal{D}$:

(40) $\|e^{pr}(\mu)\|^{pr} \le \Delta_N^{pr,K}(\mu) := \left( \frac{\Delta t}{\alpha_{LB}(\mu)} \sum_{k=1}^{K} \|r_k^{pr}(\cdot;\mu)\|_{X'}^2 \right)^{1/2}$,

(41) $\|e^{du}(\mu)\|^{du} \le \Delta_N^{du,1}(\mu) := \left( \frac{8\gamma_d^2}{\alpha_{LB}(\mu)^2} \left(\Delta_N^{pr,K}(\mu)\right)^2 + \frac{2\,\Delta t}{\alpha_{LB}(\mu)} \sum_{k=1}^{K} \|r_k^{du}(\cdot;\mu)\|_{X'}^2 \right)^{1/2}$.

With Lemma 8 in hand, we may bound the parabolic cost and cost gradient as we did in Theorems 4 and 5 for the elliptic case.

Theorem 9. The error in the cost functional, $e^J(\mu) := J(\mu) - J_N(\mu)$, may be bounded by

(42) $|e^J(\mu)| \le \Delta_N^J(\mu) := \Delta_N^{pr,K}(\mu) \left( \frac{\Delta t}{\alpha_{LB}(\mu)} \sum_{k=1}^{K} \|r_k^{du}(\cdot;\mu)\|_{X'}^2 \right)^{1/2} + \frac{\gamma_d}{\alpha_{LB}(\mu)} \left(\Delta_N^{pr,K}(\mu)\right)^2 + \Delta t \left| \sum_{k=1}^{K} r_k^{pr}(p_N^k(\mu); \mu) \right| \quad \forall \mu \in \mathcal{D}$,

where $\Delta_N^{pr,K}(\mu)$ is defined in Lemma 8.

Theorem 10. The error in the cost gradient, $e^{\nabla J}(\mu) = \nabla J(\mu) - \nabla J_N(\mu)$, satisfies

(43) $\|e^{\nabla J}(\mu)\| \le \Delta_N^{\nabla J}(\mu) := \|\boldsymbol{\Delta}_N^{\nabla J}(\mu)\| \quad \forall \mu \in \mathcal{D}$,

where $\boldsymbol{\Delta}_N^{\nabla J}(\mu)$ is a vector whose $i$th component is the bound on the error of the $i$th component of the gradient, given by

(44) $\Delta_{N,i}^{\nabla J}(\mu) := \left( \frac{\Delta t}{\alpha_{LB}(\mu)} \sum_{k=1}^{K} \|f_{\mu_i}(\cdot;\mu)\|_{X'}^2 \right)^{1/2} \Delta_N^{du,1}(\mu) + \frac{\gamma_{a_i}^{UB}(\mu)}{\alpha_{LB}(\mu)}\, \Delta_N^{pr,K}(\mu)\, \Delta_N^{du,1}(\mu) + \frac{\gamma_{a_i}^{UB}(\mu)}{\sqrt{\alpha_{LB}(\mu)}}\, \Delta_N^{pr,K}(\mu) \left( \Delta t \sum_{k=1}^{K} \|p_N^k(\mu)\|_X^2 \right)^{1/2} + \frac{\gamma_{a_i}^{UB}(\mu)}{\sqrt{\alpha_{LB}(\mu)}}\, \Delta_N^{du,1}(\mu) \left( \Delta t \sum_{k=1}^{K} \|u_N^k(\mu)\|_X^2 \right)^{1/2}$,

and $\Delta_N^{pr,K}(\mu)$ and $\Delta_N^{du,1}(\mu)$ are defined in Lemma 8.

3.3. Computational procedure. Like other model reduction methods, the RB method is traditionally divided into a computationally expensive offline phase and a computationally efficient online phase. A detailed discussion of the necessary computations and computational cost can be found, e.g., in [27]; we thus present only a short summary and focus on the main ingredients and costs.

During the offline phase, the reduced basis for elliptic problems (resp., parabolic problems) is usually built incrementally using a greedy (resp., POD-greedy) algorithm [31]. The greedy algorithm chooses the parameters $\mu_n^{pr}$ and $\mu_n^{du}$ at which snapshots are taken by searching for the largest a posteriori error bound over a training parameter set. In the elliptic case, the snapshots $u(\mu_n^{pr})$ and $p(\mu_n^{du})$ are computed, orthonormalized, and added directly to the basis. In the parabolic case, the $X$-orthogonal projection of $u^k(\mu_n^{pr})$ and $p^k(\mu_n^{du})$, $k \in \mathbb{K}$, onto the current basis is computed, and the largest POD mode of the time history of the projection error is added to the basis. The costs of calculating the FE snapshots during the offline phase are thus $2N$ $\mathcal{N}$-dimensional $A(\mu)$-solves (one primal and one dual solve for each $\mu_n$) in the elliptic case, and $2NK$ $\mathcal{N}$-dimensional $A(\mu)$-solves (the cost of time integration, without LU-factorization, for both the dual and the primal for each $\mu_n$) in the parabolic case. Here, $A(\mu)$ is the FE matrix corresponding to the bilinear form $a$.

Additionally, in order to facilitate efficient online error estimation, the offline phase requires $(Q_a + Q_f)$ $\mathcal{N}$-dimensional solves with the $X$-inner product matrix (denoted $X$) per vector added to the basis. Since the matrix $X$ is parameter-independent, we may precompute its (sparse) LU-factorization once at the start of the optimization, allowing the necessary $X$-solves to be executed efficiently offline.

As mentioned in subsection 2.1, we also assume that we have access to $\alpha_{LB}(\mu)$, a lower bound on $\alpha(\mu)$, and to $\gamma_{a_i}^{UB}(\mu)$, an upper bound on $\gamma_{a_i}(\mu)$. If the bilinear form $a(w,v;\mu)$ is symmetric and parametrically coercive, i.e., $\Theta_a^q(\mu) > 0$ for all $\mu \in \mathcal{D}$, $1 \le q \le Q_a$, and $a^q(v,v) \ge 0$ for all $v \in X$, $1 \le q \le Q_a$, we may obtain the coercivity lower bound, for example, via the min-theta approach [27]; i.e., if we specify the inner product $(\cdot,\cdot)_X = a(\cdot,\cdot;\bar\mu)$ for some reference parameter $\bar\mu \in \mathcal{D}$, we can then choose

(45) $\alpha_{LB}(\mu) = \min_{q \in \{1,\dots,Q_a\}} \frac{\Theta_a^q(\mu)}{\Theta_a^q(\bar\mu)}$.

This is also the approach used in the numerical results in section 5. A similar approach can be used to compute $\gamma_{a_i}^{UB}(\mu)$ if the derivative bilinear forms $a_{\mu_i}(\cdot,\cdot;\mu)$ are parametrically coercive. However, in the more general setting the successive constraint method may be used [18]. We also assume access to the continuity constant $\gamma_d$, which may be obtained via a generalized eigenvalue problem. To this end, given $v \in X$, we first define the supremizer $\rho_v := \arg\sup_{w \in X} \frac{d(v,w)}{\|w\|_X}$, noting that $(\rho_v, w)_X = d(v,w)$ for all $w \in X$ from the Riesz representation theorem, and thus
$\sup_{w \in X} \frac{d(v,w)}{\|w\|_X} = \|\rho_v\|_X$.
It follows that $\gamma_d^2$ is the maximum eigenvalue of the generalized eigenproblem
$\gamma_d^2 = \sup_{v \in X} \left( \frac{\|\rho_v\|_X}{\|v\|_X} \right)^2 = \sup_{v \in X} \frac{(\rho_v, \rho_v)_X}{(v,v)_X}$.
Since the bilinear form $d$ is parameter-independent, we can compute $\gamma_d$ offline a single time before the optimization.
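As an illustration of these ingredients, here is a short hedged sketch (ours, with hypothetical names) of the min-theta bound (45) and of evaluating the dual norm of the primal residual needed for (24); in practice the X-solve reuses the precomputed LU factorization mentioned above.

```python
import numpy as np

def alpha_lb_min_theta(theta_a, mu, mu_bar):
    # Min-theta lower bound (45), valid for parametrically coercive forms
    # with the X-inner product chosen as (.,.)_X = a(.,.; mu_bar).
    return min(th(mu) / th(mu_bar) for th in theta_a)

def primal_error_bound(X_mat, A_mu, f_mu, V, uN, alpha_lb):
    # Primal bound (24): ||r^pr(. ; mu)||_{X'} / alpha_LB(mu), with the dual
    # norm computed through the Riesz representer of the algebraic residual.
    r = f_mu - A_mu @ (V @ uN)
    riesz = np.linalg.solve(X_mat, r)
    return np.sqrt(riesz @ r) / alpha_lb
```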

4. Trust region framework. The canonical trust region optimization framework solves a set of successive optimization subproblems, defined as
$\min_s\ M_k(\mu_k + s)$ s.t. $\|s\| \le \delta_k$,
where $\mu_k$ is the current optimization iterate, $M_k(\cdot)$ is the model function used to approximate the true objective function $J(\mu)$, $\delta_k$ is the trust region radius, and we solve for $s_k$, the optimal step within the defined trust region; we refer the reader to the book [9] for an extensive resource on trust region methods. The model function $M_k(\cdot)$ changes at each trust region iteration and is often a local quadratic Taylor expansion. Other surrogates, however, have also been considered in the literature [1, 2, 19, 21, 33]. To determine if the step $s_k$ should be accepted, the ratio
$\rho_k = \frac{J(\mu_k) - J(\mu_{k+1})}{M_k(\mu_k) - M_k(\mu_{k+1})}$,
a measure of how well the model predicts decrease in the true cost, is computed. The value of $\rho_k$ is used to determine not only whether or not the optimization step is accepted, but also whether and how to change the trust region radius for the next optimization subproblem. One criticism of this approach in the POD or general surrogate model context is that the computation of $\rho_k$ requires evaluating the true objective function $J(\mu)$, which may be computationally expensive [2, 33].

In this work, the reduced basis cost $J_N(\mu)$ serves as the model function $M_k(\mu)$. The a posteriori error bounds developed in section 3 are used (i) to minimize the number of true objective evaluations required, and (ii) together with a recent result from Yue and Meerbergen [33], to guarantee convergence of the approach to the optimum of the high-fidelity model. We stress that our approach breaks from the traditional RB offline/online strategy; i.e., we generate the RB approximation on the fly during the optimization: we use the online evaluation to efficiently solve the trust region subproblems and update the reduced basis along the optimization trajectory only if the a posteriori error bounds indicate a need to do so. The offline and online stages thus intertwine, and each RB update requires an FE snapshot computation and an update of the error bound computation as discussed in section 3.

4.1. Convergence. Standard trust region convergence theory requires (i) that the model function $M_k$ satisfy the first-order condition, i.e., the model function must match the true objective and gradient at the current iterate exactly, and (ii) that each iterate of the optimization meet a sufficient decrease condition. It has been shown, however, that trust region optimizations converge even if inexact model and gradient information is used [7, 15, 28]. In [33], Yue and Meerbergen relax the stringent first-order accuracy requirements to consider the general setting of an unconstrained trust region optimization algorithm using surrogate models with the following properties:

1. a bound on the error in the model function exists over the entire parameter space;
2. at any point within the parameter domain, we may reduce the approximation error to within any given tolerance $\epsilon > 0$; and
3. the model function must be smooth with finite gradient everywhere.

Given the above conditions, [33] replaces the first-order condition with the following relaxed first-order condition (adapted to our notation from sections 2 and 3):

(46a) $|J(\mu) - J_N(\mu)| \le \Delta_N^J(\mu)$ and $\|\nabla J(\mu) - \nabla J_N(\mu)\| \le \Delta_N^{\nabla J}(\mu)$,

(46b) $\frac{\Delta_N^J(\mu)}{|J_N(\mu)|} \le \tau_J$ and $\frac{\Delta_N^{\nabla J}(\mu)}{\|\nabla J_N(\mu)\|} \le \tau_{\nabla J}$

for any given $\tau_J > 0$ and $\tau_{\nabla J} > 0$. There are two parts to this condition: (46a) requires that error bounds exist for both the cost function and its gradient, while (46b) requires that the RB model be able to meet arbitrarily small tolerances $\tau_J$ and $\tau_{\nabla J}$. The sufficient decrease condition is similarly replaced with an error-aware sufficient decrease condition (EASDC):

(47) $J(\mu_{k+1}) \le J_N^k(\mu_k^{AGC})$,

where $\mu_k^{AGC}$ is known as the approximate generalized Cauchy point, a point that achieves sufficient decrease in the RB model in a descent direction. To ensure that all optimization iterates satisfy the EASDC, Yue and Meerbergen present a procedure designed to reject steps which violate this condition [33]. We summarize the procedure in [33] using our notation here and begin by noting that a sufficient condition for (47) is
$J_N^k(\mu_{k+1}) + \Delta_N^{J,k}(\mu_{k+1}) + \Delta_N^{J,k+1}(\mu_{k+1}) \le J_N^k(\mu_k^{AGC})$.
However, we do not have access to $\Delta_N^{J,k+1}(\mu_{k+1})$. Instead, it is sufficient to check

(48) $J_N^k(\mu_{k+1}) + \Delta_N^{J,k}(\mu_{k+1}) < J_N^k(\mu_k^{AGC})$,

because we may update the RB model with basis functions taken at $\mu_{k+1}$ before the next subproblem solve to ensure that $\Delta_N^{J,k+1}(\mu_{k+1}) = 0$, thus satisfying the sufficient condition. We can check this cheaply, and if it holds, we may accept the iterate $\mu_{k+1}$, updating the RB model at $\mu_{k+1}$ as necessary. Otherwise, we note that a necessary condition for (47) is

(49) $J_N^k(\mu_{k+1}) - \Delta_N^{J,k}(\mu_{k+1}) - \Delta_N^{J,k+1}(\mu_{k+1}) \le J_N^k(\mu_k^{AGC})$,

so we check

(50) $J_N^k(\mu_{k+1}) - \Delta_N^{J,k}(\mu_{k+1}) \le J_N^k(\mu_k^{AGC})$.

If this condition fails, satisfying (49) may require a large error bound in the next model, leading to inaccurate approximations, so we reject the iterate $\mu_{k+1}$, shrink the trust region (set $\epsilon^L = \kappa_{tr}\, \epsilon^L$ for some $\kappa_{tr} \in (0,1)$), and re-solve the optimization subproblem. Otherwise, if (50) holds, we update the model at $\mu_{k+1}$ and check (47). If it holds, then we accept $\mu_{k+1}$. Otherwise, we reject $\mu_{k+1}$, shrink the trust region, and re-solve the optimization subproblem.

If the relaxed first-order condition is satisfied and all iterates satisfy the EASDC, Yue and Meerbergen show convergence of the trust region algorithm to the optimum of the high-fidelity model under mild assumptions [33]: besides constraints on the parameters of the trust region algorithm (summarized in Table 1), which are easily satisfied, we also require lower boundedness of the cost functional and Lipschitz continuity of the cost gradient and the constraints. To this end, we first note that the Lipschitz continuity of the cost functional and its gradient discussed in section 2 for the FE problem also holds for the RB approximation. Concerning the Lipschitz continuity of the a posteriori error bound appearing in the constraint, we again refer the reader to [11], where the authors consider an elliptic problem and prove the Lipschitz continuity of the dual norm of the primal residual. This proof also directly applies to the dual norm of the adjoint residual, from which we can infer the Lipschitz continuity of the a posteriori error bound. Furthermore, these results can again be extended to the parabolic setting.

4.2. Trust region reduced basis algorithm. The optimization subproblem for the trust region RB algorithm is defined as follows:

(51) $\mu_{k+1} = \arg\min_{\mu \in \mathcal{D}} J_N^k(\mu)$ s.t. $\frac{\Delta_N^{J,k}(\mu)}{J_N^k(\mu)} \le \epsilon^L$,

where $\epsilon^L$ is the maximum allowable relative error in the cost. We note that the error bound on the cost functional, $\Delta_N^{J,k}(\mu)$, is used to implicitly define the trust region; if the subproblem solver steps outside of this region, we use backtracking to return to a region where $\Delta_N^{J,k}(\mu)$ is sufficiently low. For each subproblem solve, we have two possible termination criteria: either (a) the line search method locates a stationary point within the trust region, or (b) the line search gets close to the boundary of the current trust region, i.e.,

(52) (a) $\|\nabla J_N^k(\mu)\| \le \tau_{sub}$ or (b) $\beta\, \epsilon^L \le \frac{\Delta_N^{J,k}(\mu)}{J_N^k(\mu)} \le \epsilon^L$

for some small $\tau_{sub} \ge 0$ and for some $\beta \in (0,1)$, generally close to 1. The latter criterion prevents the algorithm from expending too much effort optimizing close to the trust region boundary, where the model becomes inaccurate.

Overall convergence is reached when the norm of the true gradient is less than a tolerance $\tau \ge \tau_{sub}$, i.e., $\|\nabla J(\mu_k)\| \le \|\nabla J_N^k(\mu_k)\| + \Delta_N^{\nabla J,k}(\mu_k) \le \tau$. The reduced model employed is an iteratively built RB model that is updated only when the subproblem optimization terminates on condition (52b), indicating that the RB model is no longer sufficiently accurate. In the elliptic case, updating the RB model entails adding the snapshots $u(\mu_{k+1})$ and $p(\mu_{k+1})$ to the primal and dual bases. In doing so, we automatically satisfy (46b), since the reduced basis is then able to exactly represent the FE solution. In the parabolic case, we may add singular modes from the primal and dual solutions at the current iterate until (46b) is satisfied. The algorithm steps are summarized in Algorithm 1; a compact code sketch follows the listing.

Algorithm 1. RB trust region optimization.
1: Initialize. Let $k = 0$, and choose $\tau \ge \tau_{sub} \ge 0$, $\tau_{\nabla J} \in (0,1)$, and $\beta \in (0,1)$. Additionally, choose $\mu_0$, $\epsilon^L$, and $\kappa_{tr} < 1$, a trust region decrease factor. Initialize the reduced basis model at $\mu_0$.
2: Solve the optimization subproblem (51) with termination criteria (52).
3: if the sufficient condition (48) holds then
4:   Accept $\mu_{k+1}$, update the reduced model at $\mu_{k+1}$, and go to line 15.
5: else if the necessary condition (50) fails then
6:   Reject $\mu_{k+1}$, set $\epsilon^L = \kappa_{tr}\,\epsilon^L$, and return to line 2.
7: else
8:   Update the model at $\mu_{k+1}$.
9:   if the EASDC (47) holds then
10:     Accept $\mu_{k+1}$ and go to line 15.
11:   else
12:     Reject $\mu_{k+1}$, set $\epsilon^L = \kappa_{tr}\,\epsilon^L$, and return to line 2.
13:   end if
14: end if
15: If $\|\nabla J_N^{k+1}(\mu_{k+1})\| + \Delta_N^{\nabla J,k+1}(\mu_{k+1}) \le \tau$, return $\mu_{k+1}$ and stop. Otherwise, set $k \leftarrow k+1$ and go to line 2.
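The following compact sketch shows one way the accept/reject logic of Algorithm 1 can be organized in code. It is our paraphrase under stated assumptions, not the authors' implementation: `model` is a hypothetical object bundling the RB cost JN, gradient gN, and bounds DJ (cost) and DgJ (gradient); `subsolve` solves subproblem (51) with criteria (52) and returns the candidate iterate and the model cost at the approximate generalized Cauchy point; `enrich` updates the reduced basis with snapshots at a given parameter.

```python
import numpy as np

def tr_rb_optimize(mu, model, subsolve, enrich,
                   tau=5e-4, eps_L=0.1, kappa_tr=0.5, max_iter=100):
    enrich(model, mu)                               # initialize RB model at mu_0
    for _ in range(max_iter):
        mu_new, JN_agc = subsolve(model, mu, eps_L)     # subproblem (51)/(52)
        JN, DJ = model.JN(mu_new), model.DJ(mu_new)
        if JN + DJ < JN_agc:                        # sufficient condition (48)
            enrich(model, mu_new)
            mu = mu_new
        elif JN - DJ > JN_agc:                      # necessary condition (50) fails
            eps_L *= kappa_tr                       # shrink trust region, re-solve
            continue
        else:
            enrich(model, mu_new)                   # update model, then test EASDC (47)
            if model.JN(mu_new) <= JN_agc:          # JN exact at mu_new after enrichment
                mu = mu_new
            else:
                eps_L *= kappa_tr
                continue
        if np.linalg.norm(model.gN(mu)) + model.DgJ(mu) <= tau:
            return mu                               # certified convergence test
    return mu
```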

5. Numerical tests. In this section, we introduce a thermal fin model optimization problem. The optimization is then solved using three different approaches:

1. an FE-only approach, consisting of an interior point optimizer [3, 4, 32] as implemented in the MATLAB routine fmincon, using the high-dimensional FE model for its function and gradient evaluations;
2. a traditional RB approach, consisting of an offline phase, in which a global reduced basis is built, and an online phase, in which the MATLAB interior point implementation in fmincon is used to solve the optimization using RB function and gradient evaluations; and
3. the trust region RB algorithm presented in subsection 4.2, employing the BFGS quasi-Newton method to solve each trust region subproblem using only reduced evaluations, and solving the full model as needed to progressively build the reduced basis along the optimization trajectory.

In subsection 5.2 we present results regarding the quality of the RB approximation employed in the traditional RB approach. Subsection 5.3 compares the performance of the three optimization approaches for a two- and a six-parameter optimization. The algorithm parameters used for the optimization tests and for construction of the reduced bases for the traditional offline/online approach are shown in Table 1.

Table 1. RB trust region algorithm parameters used in numerical tests.
Parameter | Symbol | Value for numerical tests
Close-to-trust-region-boundary threshold | $\beta$ | 0.95
Trust region boundary | $\epsilon^L$ | 0.1
RB gradient error tolerance | $\tau_{\nabla J}$ | 0.1
Subproblem convergence tolerance | $\tau_{sub}$ | 1e-8
Overall convergence tolerance | $\tau$ | 5e-4

Fig. 1. Thermal fin geometry.

5.1. Thermal fin model problem. We consider a two-dimensional thermal fin with a fixed geometry (Figure 1) consisting of a central post and four horizontal subfins, with interior $\Omega$ and boundary $\Gamma$ [22]. The fin conducts heat away from a uniform heat flux source at the root of the fin, $\Gamma_{root}$, through the post and subfins to the surrounding air. The fin is characterized by a six-dimensional parameter vector $\mu = (\mu_0, \mu_1, \mu_2, \mu_3, \mu_4, \mathrm{Bi})$ containing the heat conductivities, $\mu_i \in [0.1, 10]$, of the subfins and the central post and the Biot number, $\mathrm{Bi} \in [0.01, 1]$, a nondimensional heat transfer coefficient relating the convective heat transfer coefficient to the conductivity of the fin. We will consider a two-parameter and a six-parameter optimization. In the two-parameter optimization, we fix $\mu_0 = 1$ and constrain the subfin conductivities to vary together (i.e., $\mu_1 = \mu_2 = \mu_3 = \mu_4$). In the six-parameter optimization, all six components of $\mu$ may vary independently. As the reference parameter, we choose $\bar\mu = (1, 1, 1, 1, 1, 0.1)$.

5.1.1. Elliptic model problem. The temperature distribution within the fin, $u(\mu)$, is governed by the steady heat equation with a unit Neumann flux boundary condition at the root of the fin to model a heat source. Robin boundary conditions on all other external boundaries model convective heat losses, and we enforce continuity of both $u$ and its gradient at interfaces between the fin post and subfins. The output of interest is the average temperature of the fin root, $T_{root}(\mu) = L(u(\mu)) = \int_{\Gamma_{root}} u(\mu)$. For the high-fidelity model we consider a piecewise linear FE approximation space $X$ on a quasi-uniform unstructured mesh of dimension $\dim(X) = \mathcal{N}$.

For our optimization, we generate artificial experimental measurements $\hat{T}_{root}$ by considering a thermal fin whose parameters are fixed but unknown.

We then aim to infer the unknown parameters by minimizing the output least-squares formulation

(53) $s(\mu) = \frac{1}{2}\, |T_{root}(\mu) - \hat{T}_{root}|^2 = \frac{1}{2}\, |L(u(\mu)) - \hat{T}_{root}|_{\mathbb{R}}^2$.

To obtain a cost function of the form presented in subsection 2.2, we define $d(u,v) \equiv \frac{1}{2}(L(u), L(v))_{\mathbb{R}}$ and $l(v) \equiv -(L(v), \hat{T}_{root})_{\mathbb{R}}$, drop the constant term $\frac{1}{2}(\hat{T}_{root}, \hat{T}_{root})_{\mathbb{R}}$, and introduce the regularization $R(\mu) = \|\mu - \hat\mu\|^2$, where $\hat\mu \in \mathcal{D}$.

5.1.2. Parabolic model problem. We now consider the time-varying temperature distribution within the fin in the time interval $I = \,]0, 10]$, governed by the time-dependent heat equation with a sinusoidal control input $y(t) = \cos(t)$ at the root of the fin. As in the elliptic problem, we enforce Robin boundary conditions at all other external boundaries and continuity of temperature and heat flux at all internal interfaces. In the parabolic problem, our output of interest is the average temperature of the entire fin at the current timestep, $T_{avg}^k(\mu) = L(u^k(\mu)) = \int_\Omega u^k(\mu)$. Again, we generate artificial output data $\hat{T}_{avg}^k$, $k \in \mathbb{K}$, by considering a fin whose parameters are fixed but unknown. Thus, our output least-squares formulation is given by

(54) $s(\mu) = \Delta t \sum_{k=1}^{K} \frac{1}{2}\, |T_{avg}^k(\mu) - \hat{T}_{avg}^k|^2 = \Delta t \sum_{k=1}^{K} \frac{1}{2}\, |L(u^k(\mu)) - \hat{T}_{avg}^k|_{\mathbb{R}}^2$.

Analogous to the elliptic case, we may obtain a least-squares cost functional of the form presented in subsection 2.3 by defining $d(u,v) \equiv \frac{1}{2}(L(u), L(v))_{\mathbb{R}}$ and $l(v) \equiv -(L(v), \hat{T}_{avg}^k)_{\mathbb{R}}$, dropping the constant term $\frac{1}{2}(\hat{T}_{avg}^k, \hat{T}_{avg}^k)_{\mathbb{R}}$, and introducing the regularization $R(\mu) = \|\mu - \hat\mu\|^2$, where $\hat\mu \in \mathcal{D}$.

5.1.3. Problem data. In subsection 5.3, we compare the performance of our trust region algorithm to that of the FE-only and RB-only fmincon interior point approaches for the 2D and 6D elliptic and parabolic optimizations. Optimization trials in each case are run on ten different least-squares cost functionals, corresponding to ten randomly selected values of $\mu$ within the parameter domain. For each randomly selected $\mu$-value, we obtain $\hat{T}_{root}$ or $\hat{T}_{avg}^k$ from the high-fidelity FE model. The $\mu$ value is then used as the regularization function parameter $\hat\mu$. Table 2 specifies the data used to generate the numerical results in subsequent sections.

Table 2. Problem data used for generation of numerical results. The $X$-inner product was defined as $(\cdot,\cdot)_X = a(\cdot,\cdot;\bar\mu)$ for $\bar\mu = (1, 1, 1, 1, 1, 0.1)^T$.
Parameter | Symbol | Elliptic | Parabolic
FE dimension | $\mathcal{N}$ | |
$d$-continuity constant | $\gamma_d$ | |
Number of timesteps | $K$ | | 100
Regularization scaling factor | $\lambda$ | |

5.2. Global reduced basis approximation quality. In order to compare the performance of the proposed trust region RB approach to the performance of a traditional RB approach, we generate a reduced basis offline. We introduce a training set $\mathcal{D}_{train} \subset \mathcal{D}$ of size $n_{train}$ and an initial parameter $\mu^{(1)}$, and employ a cost-based greedy algorithm to build the reduced basis based on the a posteriori error bounds on the cost and cost gradient.

The exact procedure employed is given in Algorithm 2, in which the tolerance values $\tau$ and $\tau_{\nabla J}$ are those required for the optimization (see Table 1). This ensures that the error everywhere on the training grid is low enough to meet the convergence tolerances. However, we note that this does not guarantee low error over the entire parameter domain.

Algorithm 2. Generate global reduced basis. (A code sketch of this loop is given after Table 3 below.)
1: Choose $\mathcal{D}_{train} \subset \mathcal{D}$, $\tau > 0$, and $\tau_{\nabla J} > 0$.
2: Initialize primal and dual reduced bases at $\mu^{(1)} \in \mathcal{D}_{train}$.
3: while $\max_{\mu \in \mathcal{D}_{train}} \frac{\Delta_N^J(\mu)}{|J_N(\mu)|} > \tau$ or $\max_{\mu \in \mathcal{D}_{train}} \frac{\Delta_N^{\nabla J}(\mu)}{\|\nabla J_N(\mu)\|} > \tau_{\nabla J}$ do
4:   if $\max_{\mu \in \mathcal{D}_{train}} \frac{\Delta_N^J(\mu)}{|J_N(\mu)|} > \tau$ then
5:     $\mu^* \leftarrow \arg\max_{\mu \in \mathcal{D}_{train}} \frac{\Delta_N^J(\mu)}{|J_N(\mu)|}$
6:   else
7:     $\mu^* \leftarrow \arg\max_{\mu \in \mathcal{D}_{train}} \frac{\Delta_N^{\nabla J}(\mu)}{\|\nabla J_N(\mu)\|}$
8:   end if
9:   Update the reduced basis at $\mu^*$.
10: end while

We now present the standard convergence results for both the two- and the six-parameter cases. Specifically, Tables 3-6 present, as a function of $N$, the maximum relative error bounds $\Delta^{pr}_{rel,max}$, $\Delta^{du}_{rel,max}$, $\Delta^{J}_{rel,max}$, $\Delta^{\nabla J}_{rel,max}$, as well as the average effectivities $\bar\eta^{pr}$, $\bar\eta^{du}$, $\bar\eta^{J}$, and $\bar\eta^{\nabla J}$, over a randomly generated test set $\Xi \subset \mathcal{D}$ of size 100; i.e., for the elliptic case we have

$\Delta^{pr}_{rel,max} = \max_{\mu \in \Xi} \frac{\Delta_N^{pr}(\mu)}{\|u(\mu)\|_X}$, $\Delta^{du}_{rel,max} = \max_{\mu \in \Xi} \frac{\Delta_N^{du}(\mu)}{\|p(\mu)\|_X}$, $\Delta^{J}_{rel,max} = \max_{\mu \in \Xi} \frac{\Delta_N^{J}(\mu)}{|J(\mu)|}$, $\Delta^{\nabla J}_{rel,max} = \max_{\mu \in \Xi} \frac{\Delta_N^{\nabla J}(\mu)}{\|\nabla J(\mu)\|}$,

and

$\eta^{pr}(\mu) = \frac{\Delta_N^{pr}(\mu)}{\|e^{pr}(\mu)\|_X}$, $\eta^{du}(\mu) = \frac{\Delta_N^{du}(\mu)}{\|e^{du}(\mu)\|_X}$, $\eta^{J}(\mu) = \frac{\Delta_N^{J}(\mu)}{|e^{J}(\mu)|}$, $\eta^{\nabla J}(\mu) = \frac{\Delta_N^{\nabla J}(\mu)}{\|e^{\nabla J}(\mu)\|}$.

The corresponding definitions for the parabolic case are similar and thus omitted. We observe that the effectivities of the primal bounds are close to 1 for all cases considered, thus indicating very sharp bounds. Dual effectivities are considerably larger, due to the propagation of the primal error to the dual problem and its entering into the dual error bound formulation. The error bounds for the cost functional converge quickly, enabling us to achieve the required error tolerance for the trust region approach. Except for small $N$, the cost effectivities have a range of O(10-100) for the elliptic case and are larger for the parabolic case, which is acceptable given the fast convergence of the RB approximation. As anticipated, the bounds for the cost gradients have the highest effectivities. However, as discussed in subsection 3.1.2, even fairly large relative errors in the gradient are permissible in the trust region approach, and this result thus poses no impediment for our approach.

Table 3. 2D elliptic thermal fin problem: convergence rate and effectivities of the traditional reduced basis. (Columns: $N$, $\Delta^{pr}_{rel,max}$, $\bar\eta^{pr}$, $\Delta^{du}_{rel,max}$, $\bar\eta^{du}$, $\Delta^{J}_{rel,max}$, $\bar\eta^{J}$, $\Delta^{\nabla J}_{rel,max}$, $\bar\eta^{\nabla J}$; the numerical entries were not preserved in this transcription.)
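For completeness, here is a minimal sketch of the greedy loop in Algorithm 2, reusing the hypothetical `model`/`enrich` interfaces from the trust region sketch in subsection 4.2; this is an illustration, not the authors' code.

```python
import numpy as np

def build_global_basis(model, enrich, D_train, tau, tau_grad):
    enrich(model, D_train[0])             # initialize bases at mu^(1)
    while True:
        # Relative cost and gradient error bounds over the training set.
        errJ = [model.DJ(mu) / abs(model.JN(mu)) for mu in D_train]
        errG = [model.DgJ(mu) / np.linalg.norm(model.gN(mu)) for mu in D_train]
        if max(errJ) <= tau and max(errG) <= tau_grad:
            break                         # both tolerances met on the grid
        # Enrich where the currently violated bound is largest.
        idx = int(np.argmax(errJ)) if max(errJ) > tau else int(np.argmax(errG))
        enrich(model, D_train[idx])
```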


More information

A derivative-free nonmonotone line search and its application to the spectral residual method

A derivative-free nonmonotone line search and its application to the spectral residual method IMA Journal of Numerical Analysis (2009) 29, 814 825 doi:10.1093/imanum/drn019 Advance Access publication on November 14, 2008 A derivative-free nonmonotone line search and its application to the spectral

More information

Model Reduction Methods

Model Reduction Methods Martin A. Grepl 1, Gianluigi Rozza 2 1 IGPM, RWTH Aachen University, Germany 2 MATHICSE - CMCS, Ecole Polytechnique Fédérale de Lausanne, Switzerland Summer School "Optimal Control of PDEs" Cortona (Italy),

More information

Iterative methods for positive definite linear systems with a complex shift

Iterative methods for positive definite linear systems with a complex shift Iterative methods for positive definite linear systems with a complex shift William McLean, University of New South Wales Vidar Thomée, Chalmers University November 4, 2011 Outline 1. Numerical solution

More information

Douglas-Rachford splitting for nonconvex feasibility problems

Douglas-Rachford splitting for nonconvex feasibility problems Douglas-Rachford splitting for nonconvex feasibility problems Guoyin Li Ting Kei Pong Jan 3, 015 Abstract We adapt the Douglas-Rachford DR) splitting method to solve nonconvex feasibility problems by studying

More information

5 Handling Constraints

5 Handling Constraints 5 Handling Constraints Engineering design optimization problems are very rarely unconstrained. Moreover, the constraints that appear in these problems are typically nonlinear. This motivates our interest

More information

An optimal control problem for a parabolic PDE with control constraints

An optimal control problem for a parabolic PDE with control constraints An optimal control problem for a parabolic PDE with control constraints PhD Summer School on Reduced Basis Methods, Ulm Martin Gubisch University of Konstanz October 7 Martin Gubisch (University of Konstanz)

More information

10.6 ITERATIVE METHODS FOR DISCRETIZED LINEAR EQUATIONS

10.6 ITERATIVE METHODS FOR DISCRETIZED LINEAR EQUATIONS 10.6 ITERATIVE METHODS FOR DISCRETIZED LINEAR EQUATIONS 769 EXERCISES 10.5.1 Use Taylor expansion (Theorem 10.1.2) to give a proof of Theorem 10.5.3. 10.5.2 Give an alternative to Theorem 10.5.3 when F

More information

Hessian-based model reduction for large-scale systems with initial condition inputs

Hessian-based model reduction for large-scale systems with initial condition inputs Hessian-based model reduction for large-scale systems with initial condition inputs O. Bashir 1, K. Willcox 1,, O. Ghattas 2, B. van Bloemen Waanders 3, J. Hill 3 1 Massachusetts Institute of Technology,

More information

1. Introduction. We consider nonlinear optimization problems of the form. f(x) ce (x) = 0 c I (x) 0,

1. Introduction. We consider nonlinear optimization problems of the form. f(x) ce (x) = 0 c I (x) 0, AN INTERIOR-POINT ALGORITHM FOR LARGE-SCALE NONLINEAR OPTIMIZATION WITH INEXACT STEP COMPUTATIONS FRANK E. CURTIS, OLAF SCHENK, AND ANDREAS WÄCHTER Abstract. We present a line-search algorithm for large-scale

More information

Combining multiple surrogate models to accelerate failure probability estimation with expensive high-fidelity models

Combining multiple surrogate models to accelerate failure probability estimation with expensive high-fidelity models Combining multiple surrogate models to accelerate failure probability estimation with expensive high-fidelity models Benjamin Peherstorfer a,, Boris Kramer a, Karen Willcox a a Department of Aeronautics

More information

8 Numerical methods for unconstrained problems

8 Numerical methods for unconstrained problems 8 Numerical methods for unconstrained problems Optimization is one of the important fields in numerical computation, beside solving differential equations and linear systems. We can see that these fields

More information

Kaisa Joki Adil M. Bagirov Napsu Karmitsa Marko M. Mäkelä. New Proximal Bundle Method for Nonsmooth DC Optimization

Kaisa Joki Adil M. Bagirov Napsu Karmitsa Marko M. Mäkelä. New Proximal Bundle Method for Nonsmooth DC Optimization Kaisa Joki Adil M. Bagirov Napsu Karmitsa Marko M. Mäkelä New Proximal Bundle Method for Nonsmooth DC Optimization TUCS Technical Report No 1130, February 2015 New Proximal Bundle Method for Nonsmooth

More information

Comparison of the Reduced-Basis and POD a-posteriori Error Estimators for an Elliptic Linear-Quadratic Optimal Control

Comparison of the Reduced-Basis and POD a-posteriori Error Estimators for an Elliptic Linear-Quadratic Optimal Control SpezialForschungsBereich F 3 Karl Franzens Universität Graz Technische Universität Graz Medizinische Universität Graz Comparison of the Reduced-Basis and POD a-posteriori Error Estimators for an Elliptic

More information

Multiple integrals: Sufficient conditions for a local minimum, Jacobi and Weierstrass-type conditions

Multiple integrals: Sufficient conditions for a local minimum, Jacobi and Weierstrass-type conditions Multiple integrals: Sufficient conditions for a local minimum, Jacobi and Weierstrass-type conditions March 6, 2013 Contents 1 Wea second variation 2 1.1 Formulas for variation........................

More information

Part 3: Trust-region methods for unconstrained optimization. Nick Gould (RAL)

Part 3: Trust-region methods for unconstrained optimization. Nick Gould (RAL) Part 3: Trust-region methods for unconstrained optimization Nick Gould (RAL) minimize x IR n f(x) MSc course on nonlinear optimization UNCONSTRAINED MINIMIZATION minimize x IR n f(x) where the objective

More information

Iterative Methods for Solving A x = b

Iterative Methods for Solving A x = b Iterative Methods for Solving A x = b A good (free) online source for iterative methods for solving A x = b is given in the description of a set of iterative solvers called templates found at netlib: http

More information

A Sobolev trust-region method for numerical solution of the Ginz

A Sobolev trust-region method for numerical solution of the Ginz A Sobolev trust-region method for numerical solution of the Ginzburg-Landau equations Robert J. Renka Parimah Kazemi Department of Computer Science & Engineering University of North Texas June 6, 2012

More information

SECTION: CONTINUOUS OPTIMISATION LECTURE 4: QUASI-NEWTON METHODS

SECTION: CONTINUOUS OPTIMISATION LECTURE 4: QUASI-NEWTON METHODS SECTION: CONTINUOUS OPTIMISATION LECTURE 4: QUASI-NEWTON METHODS HONOUR SCHOOL OF MATHEMATICS, OXFORD UNIVERSITY HILARY TERM 2005, DR RAPHAEL HAUSER 1. The Quasi-Newton Idea. In this lecture we will discuss

More information

A globally and R-linearly convergent hybrid HS and PRP method and its inexact version with applications

A globally and R-linearly convergent hybrid HS and PRP method and its inexact version with applications A globally and R-linearly convergent hybrid HS and PRP method and its inexact version with applications Weijun Zhou 28 October 20 Abstract A hybrid HS and PRP type conjugate gradient method for smooth

More information

Key words. preconditioned conjugate gradient method, saddle point problems, optimal control of PDEs, control and state constraints, multigrid method

Key words. preconditioned conjugate gradient method, saddle point problems, optimal control of PDEs, control and state constraints, multigrid method PRECONDITIONED CONJUGATE GRADIENT METHOD FOR OPTIMAL CONTROL PROBLEMS WITH CONTROL AND STATE CONSTRAINTS ROLAND HERZOG AND EKKEHARD SACHS Abstract. Optimality systems and their linearizations arising in

More information

Stabilization and Acceleration of Algebraic Multigrid Method

Stabilization and Acceleration of Algebraic Multigrid Method Stabilization and Acceleration of Algebraic Multigrid Method Recursive Projection Algorithm A. Jemcov J.P. Maruszewski Fluent Inc. October 24, 2006 Outline 1 Need for Algorithm Stabilization and Acceleration

More information

AN AUGMENTED LAGRANGIAN AFFINE SCALING METHOD FOR NONLINEAR PROGRAMMING

AN AUGMENTED LAGRANGIAN AFFINE SCALING METHOD FOR NONLINEAR PROGRAMMING AN AUGMENTED LAGRANGIAN AFFINE SCALING METHOD FOR NONLINEAR PROGRAMMING XIAO WANG AND HONGCHAO ZHANG Abstract. In this paper, we propose an Augmented Lagrangian Affine Scaling (ALAS) algorithm for general

More information

Chapter 1 Foundations of Elliptic Boundary Value Problems 1.1 Euler equations of variational problems

Chapter 1 Foundations of Elliptic Boundary Value Problems 1.1 Euler equations of variational problems Chapter 1 Foundations of Elliptic Boundary Value Problems 1.1 Euler equations of variational problems Elliptic boundary value problems often occur as the Euler equations of variational problems the latter

More information

Inexact Newton Methods Applied to Under Determined Systems. Joseph P. Simonis. A Dissertation. Submitted to the Faculty

Inexact Newton Methods Applied to Under Determined Systems. Joseph P. Simonis. A Dissertation. Submitted to the Faculty Inexact Newton Methods Applied to Under Determined Systems by Joseph P. Simonis A Dissertation Submitted to the Faculty of WORCESTER POLYTECHNIC INSTITUTE in Partial Fulfillment of the Requirements for

More information

Line Search Methods for Unconstrained Optimisation

Line Search Methods for Unconstrained Optimisation Line Search Methods for Unconstrained Optimisation Lecture 8, Numerical Linear Algebra and Optimisation Oxford University Computing Laboratory, MT 2007 Dr Raphael Hauser (hauser@comlab.ox.ac.uk) The Generic

More information

Multigrid and Domain Decomposition Methods for Electrostatics Problems

Multigrid and Domain Decomposition Methods for Electrostatics Problems Multigrid and Domain Decomposition Methods for Electrostatics Problems Michael Holst and Faisal Saied Abstract. We consider multigrid and domain decomposition methods for the numerical solution of electrostatics

More information

Iterative Reweighted Minimization Methods for l p Regularized Unconstrained Nonlinear Programming

Iterative Reweighted Minimization Methods for l p Regularized Unconstrained Nonlinear Programming Iterative Reweighted Minimization Methods for l p Regularized Unconstrained Nonlinear Programming Zhaosong Lu October 5, 2012 (Revised: June 3, 2013; September 17, 2013) Abstract In this paper we study

More information

Reduced Basis Methods for Parametrized PDEs A Tutorial Introduction for Stationary and Instationary Problems

Reduced Basis Methods for Parametrized PDEs A Tutorial Introduction for Stationary and Instationary Problems B. Haasdonk Reduced Basis Methods for Parametrized PDEs A Tutorial Introduction for Stationary and Instationary Problems Stuttgart, March 2014, revised version March 2016 Institute of Applied Analysis

More information

Basic Concepts of Adaptive Finite Element Methods for Elliptic Boundary Value Problems

Basic Concepts of Adaptive Finite Element Methods for Elliptic Boundary Value Problems Basic Concepts of Adaptive Finite lement Methods for lliptic Boundary Value Problems Ronald H.W. Hoppe 1,2 1 Department of Mathematics, University of Houston 2 Institute of Mathematics, University of Augsburg

More information

Multiobjective PDE-constrained optimization using the reduced basis method

Multiobjective PDE-constrained optimization using the reduced basis method Multiobjective PDE-constrained optimization using the reduced basis method Laura Iapichino1, Stefan Ulbrich2, Stefan Volkwein1 1 Universita t 2 Konstanz - Fachbereich Mathematik und Statistik Technischen

More information

Greedy Control. Enrique Zuazua 1 2

Greedy Control. Enrique Zuazua 1 2 Greedy Control Enrique Zuazua 1 2 DeustoTech - Bilbao, Basque Country, Spain Universidad Autónoma de Madrid, Spain Visiting Fellow of LJLL-UPMC, Paris enrique.zuazua@deusto.es http://enzuazua.net X ENAMA,

More information

c 2016 Society for Industrial and Applied Mathematics

c 2016 Society for Industrial and Applied Mathematics SIAM J. SCI. COMPUT. Vol. 38, No. 5, pp. A363 A394 c 206 Society for Industrial and Applied Mathematics OPTIMAL MODEL MANAGEMENT FOR MULTIFIDELITY MONTE CARLO ESTIMATION BENJAMIN PEHERSTORFER, KAREN WILLCOX,

More information

Toni Lassila 1, Andrea Manzoni 1 and Gianluigi Rozza 1

Toni Lassila 1, Andrea Manzoni 1 and Gianluigi Rozza 1 Mathematical Modelling and Numerical Analysis Modélisation Mathématique et Analyse Numérique Will be set by the publisher ON THE APPROIMATION OF STABILITY FACTORS FOR GENERAL PARAMETRIZED PARTIAL DIFFERENTIAL

More information

A Primal-Dual Interior-Point Method for Nonlinear Programming with Strong Global and Local Convergence Properties

A Primal-Dual Interior-Point Method for Nonlinear Programming with Strong Global and Local Convergence Properties A Primal-Dual Interior-Point Method for Nonlinear Programming with Strong Global and Local Convergence Properties André L. Tits Andreas Wächter Sasan Bahtiari Thomas J. Urban Craig T. Lawrence ISR Technical

More information

Complexity analysis of second-order algorithms based on line search for smooth nonconvex optimization

Complexity analysis of second-order algorithms based on line search for smooth nonconvex optimization Complexity analysis of second-order algorithms based on line search for smooth nonconvex optimization Clément Royer - University of Wisconsin-Madison Joint work with Stephen J. Wright MOPTA, Bethlehem,

More information

Sequential Minimal Optimization (SMO)

Sequential Minimal Optimization (SMO) Data Science and Machine Intelligence Lab National Chiao Tung University May, 07 The SMO algorithm was proposed by John C. Platt in 998 and became the fastest quadratic programming optimization algorithm,

More information

Submitted to SISC; revised August A SPACE-TIME PETROV-GALERKIN CERTIFIED REDUCED BASIS METHOD: APPLICATION TO THE BOUSSINESQ EQUATIONS

Submitted to SISC; revised August A SPACE-TIME PETROV-GALERKIN CERTIFIED REDUCED BASIS METHOD: APPLICATION TO THE BOUSSINESQ EQUATIONS Submitted to SISC; revised August 2013. A SPACE-TIME PETROV-GALERKI CERTIFIED REDUCED BASIS METHOD: APPLICATIO TO THE BOUSSIESQ EQUATIOS MASAYUKI YAO Abstract. We present a space-time certified reduced

More information

PIECEWISE LINEAR FINITE ELEMENT METHODS ARE NOT LOCALIZED

PIECEWISE LINEAR FINITE ELEMENT METHODS ARE NOT LOCALIZED PIECEWISE LINEAR FINITE ELEMENT METHODS ARE NOT LOCALIZED ALAN DEMLOW Abstract. Recent results of Schatz show that standard Galerkin finite element methods employing piecewise polynomial elements of degree

More information

Reduced Basis Methods for Parameterized Partial Differential Equations with Stochastic Influences Using the Karhunen-Loève Expansion

Reduced Basis Methods for Parameterized Partial Differential Equations with Stochastic Influences Using the Karhunen-Loève Expansion Reduced Basis Methods for Parameterized Partial Differential Equations with Stochastic Influences Using the Karhunen-Loève Expansion Bernard Haasdonk, Karsten Urban und Bernhard Wieland Preprint Series:

More information

ITERATIVE METHODS FOR NONLINEAR ELLIPTIC EQUATIONS

ITERATIVE METHODS FOR NONLINEAR ELLIPTIC EQUATIONS ITERATIVE METHODS FOR NONLINEAR ELLIPTIC EQUATIONS LONG CHEN In this chapter we discuss iterative methods for solving the finite element discretization of semi-linear elliptic equations of the form: find

More information

Acceleration of a Domain Decomposition Method for Advection-Diffusion Problems

Acceleration of a Domain Decomposition Method for Advection-Diffusion Problems Acceleration of a Domain Decomposition Method for Advection-Diffusion Problems Gert Lube 1, Tobias Knopp 2, and Gerd Rapin 2 1 University of Göttingen, Institute of Numerical and Applied Mathematics (http://www.num.math.uni-goettingen.de/lube/)

More information

A SHORT NOTE COMPARING MULTIGRID AND DOMAIN DECOMPOSITION FOR PROTEIN MODELING EQUATIONS

A SHORT NOTE COMPARING MULTIGRID AND DOMAIN DECOMPOSITION FOR PROTEIN MODELING EQUATIONS A SHORT NOTE COMPARING MULTIGRID AND DOMAIN DECOMPOSITION FOR PROTEIN MODELING EQUATIONS MICHAEL HOLST AND FAISAL SAIED Abstract. We consider multigrid and domain decomposition methods for the numerical

More information

Certified Reduced Basis Approximation for Parametrized Partial Differential Equations and Applications

Certified Reduced Basis Approximation for Parametrized Partial Differential Equations and Applications Certified Reduced Basis Approximation for Parametrized Partial Differential Equations and Applications Alfio Quarteroni,, Gianluigi Rozza and Andrea Manzoni Modelling and Scientific Computing, CMCS, Mathematics

More information

CLASSICAL ITERATIVE METHODS

CLASSICAL ITERATIVE METHODS CLASSICAL ITERATIVE METHODS LONG CHEN In this notes we discuss classic iterative methods on solving the linear operator equation (1) Au = f, posed on a finite dimensional Hilbert space V = R N equipped

More information

Adjoint-Fueled Advances in Error Estimation for Multiscale, Multiphysics Systems

Adjoint-Fueled Advances in Error Estimation for Multiscale, Multiphysics Systems Adjoint-Fueled Advances in Error Estimation for Multiscale, Multiphysics Systems Donald Estep University Interdisciplinary Research Scholar Department of Statistics Department of Mathematics Center for

More information

Trust-Region SQP Methods with Inexact Linear System Solves for Large-Scale Optimization

Trust-Region SQP Methods with Inexact Linear System Solves for Large-Scale Optimization Trust-Region SQP Methods with Inexact Linear System Solves for Large-Scale Optimization Denis Ridzal Department of Computational and Applied Mathematics Rice University, Houston, Texas dridzal@caam.rice.edu

More information

How to Characterize the Worst-Case Performance of Algorithms for Nonconvex Optimization

How to Characterize the Worst-Case Performance of Algorithms for Nonconvex Optimization How to Characterize the Worst-Case Performance of Algorithms for Nonconvex Optimization Frank E. Curtis Department of Industrial and Systems Engineering, Lehigh University Daniel P. Robinson Department

More information

A line-search algorithm inspired by the adaptive cubic regularization framework, with a worst-case complexity O(ɛ 3/2 ).

A line-search algorithm inspired by the adaptive cubic regularization framework, with a worst-case complexity O(ɛ 3/2 ). A line-search algorithm inspired by the adaptive cubic regularization framewor, with a worst-case complexity Oɛ 3/. E. Bergou Y. Diouane S. Gratton June 16, 017 Abstract Adaptive regularized framewor using

More information

Structural and Multidisciplinary Optimization. P. Duysinx and P. Tossings

Structural and Multidisciplinary Optimization. P. Duysinx and P. Tossings Structural and Multidisciplinary Optimization P. Duysinx and P. Tossings 2018-2019 CONTACTS Pierre Duysinx Institut de Mécanique et du Génie Civil (B52/3) Phone number: 04/366.91.94 Email: P.Duysinx@uliege.be

More information

Robust error estimates for regularization and discretization of bang-bang control problems

Robust error estimates for regularization and discretization of bang-bang control problems Robust error estimates for regularization and discretization of bang-bang control problems Daniel Wachsmuth September 2, 205 Abstract We investigate the simultaneous regularization and discretization of

More information

A posteriori error control for the binary Mumford Shah model

A posteriori error control for the binary Mumford Shah model A posteriori error control for the binary Mumford Shah model Benjamin Berkels 1, Alexander Effland 2, Martin Rumpf 2 1 AICES Graduate School, RWTH Aachen University, Germany 2 Institute for Numerical Simulation,

More information

Efficient Quasi-Newton Proximal Method for Large Scale Sparse Optimization

Efficient Quasi-Newton Proximal Method for Large Scale Sparse Optimization Efficient Quasi-Newton Proximal Method for Large Scale Sparse Optimization Xiaocheng Tang Department of Industrial and Systems Engineering Lehigh University Bethlehem, PA 18015 xct@lehigh.edu Katya Scheinberg

More information

Machine Learning. Support Vector Machines. Fabio Vandin November 20, 2017

Machine Learning. Support Vector Machines. Fabio Vandin November 20, 2017 Machine Learning Support Vector Machines Fabio Vandin November 20, 2017 1 Classification and Margin Consider a classification problem with two classes: instance set X = R d label set Y = { 1, 1}. Training

More information

1. Introduction. The problem of interest is a system of nonlinear equations

1. Introduction. The problem of interest is a system of nonlinear equations SIAM J. NUMER. ANAL. Vol. 46, No. 4, pp. 2112 2132 c 2008 Society for Industrial and Applied Mathematics INEXACT NEWTON DOGLEG METHODS ROGER P. PAWLOWSKI, JOSEPH P. SIMONIS, HOMER F. WALKER, AND JOHN N.

More information

Numerical approximation for optimal control problems via MPC and HJB. Giulia Fabrini

Numerical approximation for optimal control problems via MPC and HJB. Giulia Fabrini Numerical approximation for optimal control problems via MPC and HJB Giulia Fabrini Konstanz Women In Mathematics 15 May, 2018 G. Fabrini (University of Konstanz) Numerical approximation for OCP 1 / 33

More information

Unconstrained optimization

Unconstrained optimization Chapter 4 Unconstrained optimization An unconstrained optimization problem takes the form min x Rnf(x) (4.1) for a target functional (also called objective function) f : R n R. In this chapter and throughout

More information

Geometric Multigrid Methods

Geometric Multigrid Methods Geometric Multigrid Methods Susanne C. Brenner Department of Mathematics and Center for Computation & Technology Louisiana State University IMA Tutorial: Fast Solution Techniques November 28, 2010 Ideas

More information

Iterative Methods for Linear Systems

Iterative Methods for Linear Systems Iterative Methods for Linear Systems 1. Introduction: Direct solvers versus iterative solvers In many applications we have to solve a linear system Ax = b with A R n n and b R n given. If n is large the

More information

Higher-Order Methods

Higher-Order Methods Higher-Order Methods Stephen J. Wright 1 2 Computer Sciences Department, University of Wisconsin-Madison. PCMI, July 2016 Stephen Wright (UW-Madison) Higher-Order Methods PCMI, July 2016 1 / 25 Smooth

More information

arxiv: v1 [math.oc] 23 Oct 2017

arxiv: v1 [math.oc] 23 Oct 2017 Stability Analysis of Optimal Adaptive Control using Value Iteration Approximation Errors Ali Heydari arxiv:1710.08530v1 [math.oc] 23 Oct 2017 Abstract Adaptive optimal control using value iteration initiated

More information

arxiv: v1 [math.na] 9 Oct 2018

arxiv: v1 [math.na] 9 Oct 2018 PRIMAL-DUAL REDUCED BASIS METHODS FOR CONVEX MINIMIZATION VARIATIONAL PROBLEMS: ROBUST TRUE SOLUTION A POSTERIORI ERROR CERTIFICATION AND ADAPTIVE GREEDY ALGORITHMS arxiv:1810.04073v1 [math.na] 9 Oct 2018

More information

Model Reduction Methods

Model Reduction Methods Martin A. Grepl 1, Gianluigi Rozza 2 1 IGPM, RWTH Aachen University, Germany 2 MATHICSE - CMCS, Ecole Polytechnique Fédérale de Lausanne, Switzerland Summer School "Optimal Control of PDEs" Cortona (Italy),

More information

Applied Mathematics 205. Unit V: Eigenvalue Problems. Lecturer: Dr. David Knezevic

Applied Mathematics 205. Unit V: Eigenvalue Problems. Lecturer: Dr. David Knezevic Applied Mathematics 205 Unit V: Eigenvalue Problems Lecturer: Dr. David Knezevic Unit V: Eigenvalue Problems Chapter V.4: Krylov Subspace Methods 2 / 51 Krylov Subspace Methods In this chapter we give

More information

Research Article A Two-Grid Method for Finite Element Solutions of Nonlinear Parabolic Equations

Research Article A Two-Grid Method for Finite Element Solutions of Nonlinear Parabolic Equations Abstract and Applied Analysis Volume 212, Article ID 391918, 11 pages doi:1.1155/212/391918 Research Article A Two-Grid Method for Finite Element Solutions of Nonlinear Parabolic Equations Chuanjun Chen

More information

Konstantinos Chrysafinos 1 and L. Steven Hou Introduction

Konstantinos Chrysafinos 1 and L. Steven Hou Introduction Mathematical Modelling and Numerical Analysis Modélisation Mathématique et Analyse Numérique Will be set by the publisher ANALYSIS AND APPROXIMATIONS OF THE EVOLUTIONARY STOKES EQUATIONS WITH INHOMOGENEOUS

More information

Parameterized Partial Differential Equations and the Proper Orthogonal D

Parameterized Partial Differential Equations and the Proper Orthogonal D Parameterized Partial Differential Equations and the Proper Orthogonal Decomposition Stanford University February 04, 2014 Outline Parameterized PDEs The steady case Dimensionality reduction Proper orthogonal

More information

R T (u H )v + (2.1) J S (u H )v v V, T (2.2) (2.3) H S J S (u H ) 2 L 2 (S). S T

R T (u H )v + (2.1) J S (u H )v v V, T (2.2) (2.3) H S J S (u H ) 2 L 2 (S). S T 2 R.H. NOCHETTO 2. Lecture 2. Adaptivity I: Design and Convergence of AFEM tarting with a conforming mesh T H, the adaptive procedure AFEM consists of loops of the form OLVE ETIMATE MARK REFINE to produce

More information

Space-time sparse discretization of linear parabolic equations

Space-time sparse discretization of linear parabolic equations Space-time sparse discretization of linear parabolic equations Roman Andreev August 2, 200 Seminar for Applied Mathematics, ETH Zürich, Switzerland Support by SNF Grant No. PDFMP2-27034/ Part of PhD thesis

More information

A posteriori error estimation for elliptic problems

A posteriori error estimation for elliptic problems A posteriori error estimation for elliptic problems Praveen. C praveen@math.tifrbng.res.in Tata Institute of Fundamental Research Center for Applicable Mathematics Bangalore 560065 http://math.tifrbng.res.in

More information

Multipoint secant and interpolation methods with nonmonotone line search for solving systems of nonlinear equations

Multipoint secant and interpolation methods with nonmonotone line search for solving systems of nonlinear equations Multipoint secant and interpolation methods with nonmonotone line search for solving systems of nonlinear equations Oleg Burdakov a,, Ahmad Kamandi b a Department of Mathematics, Linköping University,

More information

Numerical algorithms for one and two target optimal controls

Numerical algorithms for one and two target optimal controls Numerical algorithms for one and two target optimal controls Sung-Sik Kwon Department of Mathematics and Computer Science, North Carolina Central University 80 Fayetteville St. Durham, NC 27707 Email:

More information

An Accelerated Hybrid Proximal Extragradient Method for Convex Optimization and its Implications to Second-Order Methods

An Accelerated Hybrid Proximal Extragradient Method for Convex Optimization and its Implications to Second-Order Methods An Accelerated Hybrid Proximal Extragradient Method for Convex Optimization and its Implications to Second-Order Methods Renato D.C. Monteiro B. F. Svaiter May 10, 011 Revised: May 4, 01) Abstract This

More information

Scientific Computing WS 2018/2019. Lecture 15. Jürgen Fuhrmann Lecture 15 Slide 1

Scientific Computing WS 2018/2019. Lecture 15. Jürgen Fuhrmann Lecture 15 Slide 1 Scientific Computing WS 2018/2019 Lecture 15 Jürgen Fuhrmann juergen.fuhrmann@wias-berlin.de Lecture 15 Slide 1 Lecture 15 Slide 2 Problems with strong formulation Writing the PDE with divergence and gradient

More information

c 2007 Society for Industrial and Applied Mathematics

c 2007 Society for Industrial and Applied Mathematics SIAM J. OPTIM. Vol. 18, No. 1, pp. 106 13 c 007 Society for Industrial and Applied Mathematics APPROXIMATE GAUSS NEWTON METHODS FOR NONLINEAR LEAST SQUARES PROBLEMS S. GRATTON, A. S. LAWLESS, AND N. K.

More information

and finally, any second order divergence form elliptic operator

and finally, any second order divergence form elliptic operator Supporting Information: Mathematical proofs Preliminaries Let be an arbitrary bounded open set in R n and let L be any elliptic differential operator associated to a symmetric positive bilinear form B

More information

Trust Regions. Charles J. Geyer. March 27, 2013

Trust Regions. Charles J. Geyer. March 27, 2013 Trust Regions Charles J. Geyer March 27, 2013 1 Trust Region Theory We follow Nocedal and Wright (1999, Chapter 4), using their notation. Fletcher (1987, Section 5.1) discusses the same algorithm, but

More information

Some recent developments on ROMs in computational fluid dynamics

Some recent developments on ROMs in computational fluid dynamics Some recent developments on ROMs in computational fluid dynamics F. Ballarin 1, S. Ali 1, E. Delgado 2, D. Torlo 1,3, T. Chacón 2, M. Gómez 2, G. Rozza 1 1 mathlab, Mathematics Area, SISSA, Trieste, Italy

More information