Trust-Region SQP Methods with Inexact Linear System Solves for Large-Scale Optimization


1 Trust-Region SQP Methods with Inexact Linear System Solves for Large-Scale Optimization
Denis Ridzal
Department of Computational and Applied Mathematics, Rice University, Houston, Texas
March 24, 2006
Rice University CAAM 699 Seminar / University of Houston

2 Outline
- Motivation
  - Large-scale problems in PDE-constrained optimization
  - Inexactness in linear system solves arising in an SQP algorithm
- Trust-Region SQP Algorithm with Inexact Linear System Solves
  - Existing work on inexactness in optimization algorithms
  - Review of the SQP methodology
  - Mechanisms of inexactness control
- Numerical Results
- Conclusion

3 Motivation: A PDE-Constrained Optimization Problem
[Figures: velocity field u; computed concentration c]
minimize  ½ ∫_Ω c² dω + (α²/2) ∫_{∂Ω_c} ((∇v)² + v²) dγ
subject to
  ρ(u·∇)u − μ∇·(∇u + ∇uᵀ) + ∇p = 0 in Ω,
  ∇·u = 0 in Ω,
  −p n + μ(∇u + ∇uᵀ) n = 0 on ∂Ω_o,
  u = 0 on ∂Ω \ (∂Ω_c ∪ ∂Ω_o),
  u = v n on ∂Ω_c,
  −∇·(ε∇c) + u·∇c = f in Ω,
  c = 0 on ∂Ω \ ∂Ω_c,
  n·(ε∇c) = g on ∂Ω_c.

4 Large-Scale Optimization Problems: Common Features
Other applications: optimal design / shape optimization, parameter estimation, inverse problems.
Common features:
- can be solved as constrained nonlinear programming problems (NLPs) using all-at-once techniques
- the number of variables can easily be in the millions in 3D
- the discretized linear operators are often not available in matrix form
- even if available explicitly, the resulting linear systems usually require specialized solvers, such as multigrid or domain decomposition
- regardless of which optimization algorithm is used, linear systems must be solved iteratively!

5 Use of Sequential Quadratic Programming Methods
SQP methods have been used successfully for the solution of smooth NLPs in R^n. Most available SQP codes (NPSOL, SNOPT, KNITRO, LOQO) are based on direct (dense or sparse) linear algebra:
- impossible to apply to many large-scale optimization problems, in particular PDE-constrained optimization problems
- not suitable for parallel computing environments
Contribution: incorporated iterative linear algebra in an SQP framework:
- iterative linear system solvers are inherently inexact!
- rigorous theoretical analysis of inexactness within an SQP algorithm
- practical approaches to inexactness control

6 Outline
- Motivation
  - Large-scale problems in PDE-constrained optimization
  - Inexactness in linear system solves arising in an SQP algorithm
- Trust-Region SQP Algorithm with Inexact Linear System Solves
  - Existing work on inexactness in optimization algorithms
  - Review of the SQP methodology
  - Mechanisms of inexactness control
- Numerical Results
- Conclusion

7 Inexactness in Optimization Algorithms: Existing Work
- Early results for inexact Newton methods in optimization: e.g. Dembo, Eisenstat, Steihaug, Dennis, Walker (1980s)
- Connection with inexact SQP methods: Dembo and Tulowitzki (1985) and Fontecilla (1985); limited to local convergence analysis!
- Global results for inexact Newton methods for nonlinear equations: e.g. Brown and Saad (1990, 1994), Eisenstat and Walker (1994)
- Jäger and Sachs (1997), line-search reduced-space SQP: first global convergence result; dependence on Lipschitz constants and derivative bounds
- Biros and Ghattas (2002), quasi-Newton reduced-space SQP: dependence on derivative bounds
- Heinkenschloss and Vicente (2001), reduced-space TR-SQP: established a theoretical convergence framework that does not rely on Lipschitz constants or derivative bounds; limited to the reduced-space SQP approach

8 Review of Trust-Region SQP
Solve the NLP:  min f(x)  s.t.  c(x) = 0,
where f: X → R and c: X → Y for some Hilbert spaces X and Y, and f and c are twice continuously Fréchet differentiable.
- Define the Lagrangian functional L: X × Y → R:
    L(x, λ) = f(x) + ⟨λ, c(x)⟩_Y
- If a regular point x* is a local solution of the NLP, then there exists a λ* ∈ Y satisfying the 1st-order necessary optimality conditions:
    ∇_x f(x*) + c_x(x*)* λ* = 0,
    c(x*) = 0.
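
To make the optimality test concrete, here is a minimal finite-dimensional sketch of evaluating the first-order (KKT) residual norms; the callables grad_f, jac_c_adj, and c are hypothetical stand-ins for problem-supplied operators, not part of the original slides.

```python
import numpy as np

def kkt_residuals(grad_f, jac_c_adj, c, x, lam):
    """Return (||grad f(x) + c_x(x)* lam||, ||c(x)||), the stationarity and
    feasibility residuals of the 1st-order necessary optimality conditions."""
    stationarity = grad_f(x) + jac_c_adj(x, lam)  # = grad_x L(x, lam)
    return np.linalg.norm(stationarity), np.linalg.norm(c(x))
```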

9 Newton's method applied to the 1st-order optimality conditions:
  [ ∇_xx L(x_k, λ_k)   c_x(x_k)* ] [ s_k^x ]      [ ∇_x f(x_k) + c_x(x_k)* λ_k ]
  [ c_x(x_k)           0         ] [ s_k^λ ]  = − [ c(x_k)                     ]
If ∇_xx L(x_k, λ_k) is positive definite on the null space of c_x(x_k), the above KKT system is necessary and sufficient for the solution of the quadratic programming problem (QP):
  min  ½⟨∇_xx L(x_k, λ_k) s_k^x, s_k^x⟩_X + ⟨∇_x L(x_k, λ_k), s_k^x⟩_X
  s.t. c_x(x_k) s_k^x + c(x_k) = 0
To globalize the convergence, we add a trust-region constraint:
  min  ½⟨H_k s_k^x, s_k^x⟩_X + ⟨∇_x L_k, s_k^x⟩_X
  s.t. c_x(x_k) s_k^x + c(x_k) = 0,
       ‖s_k^x‖_X ≤ Δ_k.
Possible incompatibility of constraints: composite-step approach.
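
A dense toy version of this Newton/KKT step follows, for illustration only; on the problems targeted here the operators are matrix-free and the system must be solved iteratively.

```python
import numpy as np

def newton_kkt_step(H, A, grad_L, c):
    """Solve [H A^T; A 0][s_x; s_lam] = -[grad_L; c] for the Newton step on
    the 1st-order optimality conditions (H = Hessian of the Lagrangian,
    A = constraint Jacobian c_x(x_k), both given as dense arrays)."""
    n, m = H.shape[0], A.shape[0]
    K = np.block([[H, A.T], [A, np.zeros((m, m))]])
    rhs = -np.concatenate([grad_L, c])
    s = np.linalg.solve(K, rhs)
    return s[:n], s[n:]  # (s_x, s_lambda)
```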

10 Composite-Step Approach for the Solution of the Quadratic Subproblem
TR-SQP step: s_k = n_k + t_k
- quasi-normal step n_k: moves toward feasibility
- tangential step t_k: moves toward optimality while staying in the null space of the linearized constraints
[Figure: trust region of radius Δ_k with contracted radius ζΔ_k; the steps n_k and t_k; the linearized feasibility manifold c_x(x_k)s^x + c(x_k) = 0 and the null space c_x(x_k)t = 0]
e.g. Omojokun [1989]; Byrd, Hribar, Nocedal [1997]; Dennis, El-Alem, Maciel [1997]; Dennis, Heinkenschloss, Vicente [1998]; Conn, Gould, Toint [2000]

11 Acceptance of the Step
Merit function:
  φ(x, λ; ρ) = f(x) + ⟨λ, c(x)⟩_Y + ρ‖c(x)‖²_Y = L(x, λ) + ρ‖c(x)‖²_Y.
Actual reduction at step k:
  ared(s_k; ρ_k) = φ(x_k, λ_k; ρ_k) − φ(x_k + s_k, λ_{k+1}; ρ_k)
Predicted reduction at step k:
  pred(s_k; ρ_k) = φ(x_k, λ_k; ρ_k) − [ L(x_k, λ_k) + ⟨g_k, s_k⟩_X + ½⟨H_k s_k^x, s_k^x⟩_X
                   + ⟨λ_{k+1} − λ_k, c_x(x_k)s_k^x + c(x_k)⟩_Y + ρ_k ‖c_x(x_k)s_k^x + c(x_k)‖²_Y ].
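
In finite dimensions the merit function and the actual reduction can be sketched as follows; f and c are hypothetical problem callables, and pred would be evaluated separately from the quadratic model above.

```python
import numpy as np

def merit(f, c, x, lam, rho):
    # phi(x, lam; rho) = f(x) + <lam, c(x)> + rho * ||c(x)||^2
    cx = c(x)
    return f(x) + lam @ cx + rho * (cx @ cx)

def actual_reduction(f, c, x, s, lam, lam_new, rho):
    # ared(s; rho) = phi(x_k, lam_k; rho_k) - phi(x_k + s_k, lam_{k+1}; rho_k)
    return merit(f, c, x, lam, rho) - merit(f, c, x + s, lam_new, rho)
```

The ratio ared/pred then drives step acceptance and the trust-region update, as in the algorithm on the next slides.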

12 Composite-Step Trust-Region SQP Algorithm
1. Compute quasi-normal step n_k.
2. Compute tangential step t_k.
3. Compute new Lagrange multiplier estimate λ_{k+1}.
4. Update penalty parameter ρ_k.
5. Compute ared_k, pred_k.
6. Decide whether to accept the new iterate x_{k+1} = x_k + n_k + t_k, and update Δ_{k+1} from Δ_k, based on ared_k/pred_k.

13 Composite-Step Trust-Region SQP Algorithm
1. Compute quasi-normal step n_k. (One linear system involving c_x(x_k). Possible inexactness!)
2. Compute tangential step t_k.
3. Compute new Lagrange multiplier estimate λ_{k+1}. (One linear system involving c_x(x_k). Possible inexactness!)
4. Update penalty parameter ρ_k.
5. Compute ared_k, pred_k.
6. Decide whether to accept the new iterate x_{k+1} = x_k + n_k + t_k, and update Δ_{k+1} from Δ_k, based on ared_k/pred_k.

14 Composite-Step Trust-Region SQP Algorithm
1. Compute quasi-normal step n_k. (One linear system involving c_x(x_k). Possible inexactness!)
2. Compute tangential step t_k. (Multiple linear systems involving c_x(x_k). Possible inexactness! Depends on the already (inexactly) computed quantities n_k and λ_k.)
3. Compute new Lagrange multiplier estimate λ_{k+1}. (One linear system involving c_x(x_k). Possible inexactness!)
4. Update penalty parameter ρ_k.
5. Compute ared_k, pred_k.
6. Decide whether to accept the new iterate x_{k+1} = x_k + n_k + t_k, and update Δ_{k+1} from Δ_k, based on ared_k/pred_k.

15 Composite-Step Trust-Region SQP Algorithm
1. Compute quasi-normal step n_k. (One linear system involving c_x(x_k). Possible inexactness!)
2. Compute tangential step t_k. (Multiple linear systems involving c_x(x_k). Possible inexactness! Depends on the already (inexactly) computed quantities n_k and λ_k.)
3. Compute new Lagrange multiplier estimate λ_{k+1}. (One linear system involving c_x(x_k). Possible inexactness!)
4. Update penalty parameter ρ_k. (Need to modify the penalty parameter update!)
5. Compute ared_k, pred_k. (The definition of pred_k must be modified!)
6. Decide whether to accept the new iterate x_{k+1} = x_k + n_k + t_k, and update Δ_{k+1} from Δ_k, based on ared_k/pred_k.
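
Putting the six steps together, a schematic of the outer loop might look like the following sketch; every name on the hypothetical `steps` object stands in for the problem-specific computations listed above, and the acceptance and radius-update constants are illustrative, not the algorithm's actual parameters.

```python
def composite_step_sqp(steps, x, lam, Delta, tol=1e-6, max_iter=100, eta=1e-2):
    """Schematic outer loop of the composite-step TR-SQP algorithm above;
    `steps` is a hypothetical bundle of problem-specific callbacks."""
    rho = 1.0
    for _ in range(max_iter):
        if steps.converged(x, lam, tol):
            break
        n = steps.quasi_normal(x, Delta)             # 1. one inexact solve with c_x(x_k)
        t = steps.tangential(x, lam, n, Delta)       # 2. inexact CG; depends on n_k, lam_k
        lam_new = steps.multipliers(x, n + t)        # 3. one inexact solve with c_x(x_k)
        rho = steps.penalty(x, n + t, lam_new, rho)  # 4. modified penalty update
        ared, pred = steps.reductions(x, n + t, lam, lam_new, rho)  # 5. modified pred
        if ared >= eta * pred:                       # 6. accept step, enlarge radius
            x, lam = x + n + t, lam_new
            Delta = 2.0 * Delta
        else:                                        #    reject step, shrink radius
            Delta = 0.5 * Delta
    return x, lam
```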

16 Balancing Inexactness in the Quasi-Normal and the Tangential Step
[Figure: trust region of radius Δ_k with contracted radius ζ_kΔ_k; the linearized feasibility manifold c_x(x_k)s_k^x + c(x_k) = 0 and the null space c_x(x_k)t_k = 0]

18 Inexactness in TR-SQP: Summary of My Contributions
Iterative linear system solves arise in the computation of: (1) Lagrange multipliers, (2) quasi-normal step, (3) tangential step.
Global convergence theory for TR-SQP methods gives a rather generic treatment of the issue of inexactness. My work ties these generic requirements to inexactness specific to linear system solves, for each of the above.
The devised stopping criteria for iterative linear system solves:
- are dynamically adjusted by the SQP algorithm, based on its current progress toward a KKT point,
- trade gains in feasibility for gains in optimality and vice versa,
- can be easily implemented and are sufficient to guarantee first-order global convergence of the algorithm,
- allow for a rigorous integration of preconditioners for KKT systems.

19 Tangential Step
The exact model requires that t_k approximately solve the problem:
  min  ½⟨H_k(t + n_k), t + n_k⟩_X + ⟨∇_x L_k, t + n_k⟩_X
  s.t. c_x(x_k)t = 0,
       ‖t + n_k‖_X ≤ Δ_k.
Assume that there exists a bounded linear operator W_k: Z → X, where Z is a Hilbert space, such that Range(W_k) = Null(c_x(x_k)). This covers all existing implementations for handling c_x(x_k)t = 0.
Drop the constant term from the QP, ignore n_k in the trust-region constraint, set g_k = H_k n_k + ∇_x L_k, and let t = W_k w. We obtain the equivalent reduced QP:
  min  q_k(w) := ½⟨W_k* H_k W_k w, w⟩_Z + ⟨W_k* g_k, w⟩_Z
  s.t. ‖W_k w‖_X ≤ Δ_k.

20 Tangential Step: Steihaug-Toint CG
0. Let w_0 = 0 ∈ Z. Let r_0 = −W_k* g_k, p_0 = r_0.
1. For i = 0, 1, 2, ...
   1.1 If ⟨p_i, W_k* H_k W_k p_i⟩_Z ≤ 0, extend w_i to the boundary of the TR and stop.
   1.2 α_i = ⟨r_i, r_i⟩_Z / ⟨p_i, W_k* H_k W_k p_i⟩_Z
   1.3 w_{i+1} = w_i + α_i p_i
   1.4 If ‖W_k w_{i+1}‖ ≥ Δ_k, extend w_i to the boundary of the TR and stop.
   1.5 r_{i+1} = r_i − α_i W_k* H_k W_k p_i
   1.6 β_i = ⟨r_{i+1}, r_{i+1}⟩_Z / ⟨r_i, r_i⟩_Z
   1.7 p_{i+1} = r_{i+1} + β_i p_i
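
A minimal sketch of this reduced-space iteration, assuming hypothetical callables: WHW(p) applies W_k* H_k W_k, Wg is the vector W_k* g_k, and norm_W(w) evaluates ||W_k w||_X.

```python
import numpy as np

def _to_boundary(w, p, norm_W, Delta):
    """Move from w along p to the trust-region boundary ||W(w + tau p)|| = Delta."""
    a = norm_W(p) ** 2
    b = 0.5 * (norm_W(w + p) ** 2 - norm_W(w - p) ** 2)  # = 2<Ww,Wp> by polarization
    c = norm_W(w) ** 2 - Delta ** 2                      # negative while w is interior
    tau = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)  # positive root
    return w + tau * p

def steihaug_toint_cg(WHW, Wg, norm_W, Delta, tol=1e-8, max_iter=200):
    """Truncated CG for  min 0.5<W*HW w, w> + <W*g, w>  s.t.  ||W w|| <= Delta."""
    w = np.zeros_like(Wg)
    r = -Wg                                  # negative reduced gradient at w = 0
    p = r.copy()
    for _ in range(max_iter):
        Hp = WHW(p)
        pHp = p @ Hp
        if pHp <= 0.0:                       # 1.1 negative curvature
            return _to_boundary(w, p, norm_W, Delta)
        alpha = (r @ r) / pHp                # 1.2
        w_next = w + alpha * p               # 1.3
        if norm_W(w_next) >= Delta:          # 1.4 step leaves the trust region
            return _to_boundary(w, p, norm_W, Delta)
        r_next = r - alpha * Hp              # 1.5
        if np.linalg.norm(r_next) < tol:
            return w_next
        beta = (r_next @ r_next) / (r @ r)   # 1.6
        w, r, p = w_next, r_next, r_next + beta * p  # 1.7
    return w
```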

21 Tangential Step: Linear Systems
The application of W_k, W_k* requires linear system solves. Example: W_k is an orthogonal projector onto Null(c_x(x_k)). Any computation z = W_k p can be performed by solving the augmented system
  [ I           c_x(x_k)* ] [ z ]   [ p ]
  [ c_x(x_k)    0         ] [ y ] = [ 0 ]
If I is replaced by G_k ≈ H_k, and W_k* G_k W_k is positive definite, this leads to the preconditioning of the reduced Hessian W_k* H_k W_k [Keller, Gould, Wathen 2000].
Attractive if we have a good preconditioner for KKT systems: [Heinkenschloss, Nguyen 2004], [Bartlett, Heinkenschloss, Ridzal, van Bloemen Waanders 2006].
We have the tools to efficiently solve large-scale KKT systems or the above augmented systems iteratively.
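
For a sparse Jacobian A = c_x(x_k), the projection could be sketched with SciPy as below (a minimal sketch, assuming SciPy >= 1.12 for the rtol keyword of gmres); the residual left over by the iterative solve is exactly the perturbation e_i discussed on the later slides.

```python
import numpy as np
from scipy.sparse import bmat, identity
from scipy.sparse.linalg import gmres

def apply_projector(A, p, tol=1e-10):
    """z = W_k p for the orthogonal projector W_k onto Null(A), obtained by
    solving the augmented system [I A^T; A 0][z; y] = [p; 0] iteratively."""
    m, n = A.shape
    K = bmat([[identity(n), A.T], [A, None]], format="csr")
    rhs = np.concatenate([p, np.zeros(m)])
    sol, info = gmres(K, rhs, rtol=tol)   # inexact solve: leftover residual = e_i
    if info != 0:
        raise RuntimeError("augmented system solve did not reach the tolerance")
    return sol[:n]
```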

22 Tangential Step with Inexactness (Projector Case)
Issues:
- Augmented systems are solved iteratively. Every CG iteration uses a different W_k.
- The CG operator W_k H_k W_k is nonsymmetric.
- The CG operator W_k H_k W_k is effectively nonlinear.
- Which quadratic functional are we minimizing?
Conventional proofs of global convergence for SQP methods require us to replace the reduced QP with the following inexact problem:
  min  ½⟨W̃_k* H_k W̃_k w, w⟩_Z + ⟨W̃_k* g_k, w⟩_Z
  s.t. ‖w‖_X ≤ Δ_k.
But W̃_k* H_k W̃_k = ?   W̃_k* g_k = ?

23 Tangential Step with Inexactness (Projector Case)
Outline of the solution:
- Use a full-space approach, in which the CG operator is H_k (exact), and the inexactness is moved into a preconditioner W_k (inexact):
    min  ½⟨H_k t, t⟩_X + ⟨g_k, t⟩_X
    s.t. t ∈ Range(W_k),
         ‖t‖_X ≤ Δ_k.
- Find a fixed (with respect to every CG iteration) linear representation W̃_k = W_k + E_k of the inexact null-space operator:
    W̃_k* H_k W̃_k,   W̃_k* g_k
- Establish bounds on E_k that can be controlled in practice.

24 Tangential Step: Inexact CG with Full Orthogonalization
0. Let t_0 = 0 ∈ X. Let r_0 = g_k. Set i_max, set i = 0.
1. While (W_k(r_i) ≠ 0 and i < i_max)
   1.1 z_i = W_k(r_i)
   1.2 p_i = −z_i + Σ_{j=0}^{i−1} (⟨z_i, H_k p_j⟩_X / ⟨p_j, H_k p_j⟩_X) p_j
   1.3 If ⟨p_i, H_k p_i⟩_X ≤ 0, extend t_i to the boundary of the TR and stop.
   1.4 α_i = −⟨r_i, p_i⟩_X / ⟨p_i, H_k p_i⟩_X
   1.5 t_{i+1} = t_i + α_i p_i
   1.6 If ‖t_{i+1}‖ ≥ Δ_k, extend t_i to the boundary of the TR and stop.
   1.7 r_{i+1} = r_i + α_i H_k p_i
   1.8 i ← i + 1
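
A sketch of this iteration, reusing the _to_boundary helper from the Steihaug-Toint sketch above (with norm_X in place of norm_W); H and W_apply are hypothetical callables, W_apply(r) returning the inexact projection W_k(r).

```python
import numpy as np

def inexact_cg(H, W_apply, g, norm_X, Delta, i_max=100, tol=1e-10):
    """Full-space truncated CG for  min 0.5<Ht,t> + <g,t>  over t in Range(W_k),
    ||t||_X <= Delta.  Full H-orthogonalization of the search directions guards
    against the iteration-dependent inexactness of the projector application."""
    t = np.zeros_like(g)
    r = g.copy()                              # model gradient at t = 0
    dirs, Hdirs = [], []                      # kept for full reorthogonalization
    for _ in range(i_max):
        z = W_apply(r)                        # 1.1 inexact projected residual
        if np.linalg.norm(z) <= tol:
            break
        p = -z
        for pj, Hpj in zip(dirs, Hdirs):      # 1.2 H-orthogonalize against all p_j
            p += ((z @ Hpj) / (pj @ Hpj)) * pj
        Hp = H(p)
        if p @ Hp <= 0.0:                     # 1.3 negative curvature
            return _to_boundary(t, p, norm_X, Delta)
        alpha = -(r @ p) / (p @ Hp)           # 1.4
        t_next = t + alpha * p                # 1.5
        if norm_X(t_next) >= Delta:           # 1.6
            return _to_boundary(t, p, norm_X, Delta)
        r = r + alpha * Hp                    # 1.7
        t = t_next
        dirs.append(p); Hdirs.append(Hp)
    return t
```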

25 Inexact CG with Full Orthogonalization: Theory
Theorem (1). If W_k(·) = W_k is a fixed (exact) linear operator, then the inexact CG algorithm in the full space is equivalent to a traditional Steihaug-Toint CG algorithm applied to the tangential subproblem in the reduced space.
Proof. Straightforward.
If linear system solves can be performed with high accuracy, we recover the convergence properties of traditional CG.

26 Inexact CG with Full Orthogonalization: Theory
Theorem (2). There exists a fixed linear operator W̃_k such that W_k(r_i) = W̃_k r_i for every iteration i of the inexact CG algorithm.
Proof. It can be shown that the residual vectors r_i, i = 0, 1, ..., m, are linearly independent, so the matrix R_m = [r_0, r_1, ..., r_m] has full column rank. Introduce the matrices
  Y_m = [W_k r_0, W_k r_1, ..., W_k r_m],   Ỹ_m = [W_k(r_0), W_k(r_1), ..., W_k(r_m)].
One possible choice of the inexact operator:
  W̃_k = W_k + (Ỹ_m − Y_m)(R_m* R_m)^{−1} R_m*,
since then W̃_k R_m = Ỹ_m.

27 Inexact CG with Full Orthogonalization: Theory
Theorem (2). There exists a fixed linear operator W̃_k such that W_k(r_i) = W̃_k r_i for every iteration i of the inexact CG algorithm.
Inexact CG effectively solves the inexact tangential subproblem:
  min  ½⟨W̃_k* H_k W̃_k w, w⟩_Z + ⟨W̃_k* g_k, w⟩_Z
  s.t. ‖W̃_k w‖_X ≤ Δ_k.
We can therefore use conventional theory for global convergence of SQP methods.
Remark: For analytical purposes, we use the inexact operator
  W̃_k = W_k + E_k = W_k + (Ỹ_m − Y_m)(Ỹ_m* R_m)^{−1} Ỹ_m*
(after establishing the conditions for the invertibility of Ỹ_m* R_m).

28 Tangential Step: Global Convergence Requirements
(C1) ‖W̃_k* g_k − W_k* g_k‖_X ≤ κ_1 min(‖W̃_k* g_k‖_X, Δ_k),
(C2) ⟨W̃_k* H_k W̃_k w_k, w_k⟩_X ≤ κ_2 ‖w_k‖²_X,
(C3) −½⟨W̃_k* H_k W̃_k w_k, w_k⟩_X − ⟨W̃_k* g_k, w_k⟩_X ≥ κ_3 ‖W̃_k* g_k‖_X min{κ_4 ‖W̃_k* g_k‖_X, κ_5 Δ_k},
for positive constants κ_1, ..., κ_5 independent of k.

29 Tangential Step: Global Convergence Requirements
(C1) ‖W̃_k* g_k − W_k* g_k‖_X ≤ κ_1 min(‖W̃_k* g_k‖_X, Δ_k),
(C2) ⟨W̃_k* H_k W̃_k w_k, w_k⟩_X ≤ κ_2 ‖w_k‖²_X,
(C3) −½⟨W̃_k* H_k W̃_k w_k, w_k⟩_X − ⟨W̃_k* g_k, w_k⟩_X ≥ κ_3 ‖W̃_k* g_k‖_X min{κ_4 ‖W̃_k* g_k‖_X, κ_5 Δ_k},
for positive constants κ_1, ..., κ_5 independent of k.
The true difficulty is in proving the global convergence condition (C1), related to the inexact reduced gradient.

30 Inexact CG with Full Orthogonalization: Theory
Theorem (3). If at every iteration i of the inexact CG algorithm
  ‖W_k(r_i) − W_k r_i‖ ≤ ξ min{ ‖W̃_k g_k‖/‖g_k‖, Δ_k/‖g_k‖, β } ‖W_k(r_i)‖,  ξ > 0,
and
  c_1 ‖W̃_k* g_k‖ ≤ ‖W_k* g_k‖ ≤ c_2 ‖W̃_k* g_k‖,  c_1, c_2 > 0,
then the convergence requirements (C1)-(C2) are satisfied.
Proof. Relies on a bound for the quantity E_k = (Ỹ_m − Y_m)(Ỹ_m* R_m)^{−1} Ỹ_m*.
Notes:
(1) Even though the inexact reduced gradient W̃_k g_k is computed in the very first CG iteration, in order to guarantee (C1) our theoretical framework puts restrictions on all subsequent applications of W_k.
(2) The theorem gives a sufficient condition that works extremely well in practice.

31 Application of the Inexact Operator W_k
Recall:
(i) At every iteration k of the SQP algorithm, inexact CG is called.
(ii) At every CG iteration i, we compute iteratively an inexact projected residual z_i = W_k(r_i) = W̃_k r_i such that
  [ I           c_x(x_k)* ] [ z_i ]   [ r_i ]   [ e_i¹ ]
  [ c_x(x_k)    0         ] [ y   ] = [ 0   ] + [ e_i² ]

32 Application of the Inexact Operator W_k
Recall:
(i) At every iteration k of the SQP algorithm, inexact CG is called.
(ii) At every CG iteration i, we compute iteratively an inexact projected residual z_i = W_k(r_i) = W̃_k r_i such that
  [ I           c_x(x_k)* ] [ z_i ]   [ r_i ]   [ e_i¹ ]
  [ c_x(x_k)    0         ] [ y   ] = [ 0   ] + [ e_i² ]
Control global SQP convergence by controlling e_i!

33 Application of the Inexact Operator W_k
Recall:
(i) At every iteration k of the SQP algorithm, inexact CG is called.
(ii) At every CG iteration i, we compute iteratively an inexact projected residual z_i = W_k(r_i) = W̃_k r_i such that
  [ I           c_x(x_k)* ] [ z_i ]   [ r_i ]   [ e_i¹ ]
  [ c_x(x_k)    0         ] [ y   ] = [ 0   ] + [ e_i² ]
Theory: If at every iteration i of the inexact CG algorithm
  ‖W_k(r_i) − W_k r_i‖ ≤ ξ min{ ‖W̃_k g_k‖/‖g_k‖, Δ_k/‖g_k‖, β } ‖W_k(r_i)‖,  ξ > 0,
and
  c_1 ‖W̃_k* g_k‖ ≤ ‖W_k* g_k‖ ≤ c_2 ‖W̃_k* g_k‖,  c_1, c_2 > 0,
then the convergence requirements (C1)-(C2) are satisfied.

34 Application of the Inexact Operator W_k
Recall:
(i) At every iteration k of the SQP algorithm, inexact CG is called.
(ii) At every CG iteration i, we compute iteratively an inexact projected residual z_i = W_k(r_i) = W̃_k r_i such that
  [ I           c_x(x_k)* ] [ z_i ]   [ r_i ]   [ e_i¹ ]
  [ c_x(x_k)    0         ] [ y   ] = [ 0   ] + [ e_i² ]
Practice: It is sufficient to require
  ‖e_i‖ ≤ min{ ‖W̃_k g_k‖/‖g_k‖, Δ_k/‖g_k‖, β } ‖z_i‖,
where the min term is denoted γ and β = 10⁻³ (a fixed small constant). Note W̃_k g_k = z_0.

35 Application of the Inexact Operator W_k
Recall:
(i) At every iteration k of the SQP algorithm, inexact CG is called.
(ii) At every CG iteration i, we compute iteratively an inexact projected residual z_i = W_k(r_i) = W̃_k r_i such that
  [ I           c_x(x_k)* ] [ z_i ]   [ r_i ]   [ e_i¹ ]
  [ c_x(x_k)    0         ] [ y   ] = [ 0   ] + [ e_i² ]
Implementation:
- First CG iteration: stop the linear system solver at iteration m if ‖e_0^(m)‖ ≤ γ‖z_0^(m)‖.
- Subsequent CG iterations: heuristic; reuse the size of the iterate returned by the previous solve, ‖e_i‖ ≤ γ‖z_{i−1}‖.
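
The tolerance selection itself is cheap; a minimal sketch under the same naming (z0_norm = ||z_0||, g_norm = ||g_k||; in the first CG iteration the test is enforced on the fly inside the inner solver, so this helper only illustrates the scaling).

```python
def inner_tolerance(z0_norm, g_norm, Delta, z_prev_norm=None, beta=1e-3):
    """Absolute stopping tolerance for the next augmented-system solve:
    gamma = min(||z_0||/||g_k||, Delta/||g_k||, beta), scaled by ||z_0|| in
    the first CG iteration and by ||z_{i-1}|| in subsequent iterations."""
    gamma = min(z0_norm / g_norm, Delta / g_norm, beta)
    scale = z0_norm if z_prev_norm is None else z_prev_norm
    return gamma * scale
```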

36 Outline
- Motivation
  - Large-scale problems in PDE-constrained optimization
  - Inexactness in linear system solves arising in an SQP algorithm
- Trust-Region SQP Algorithm with Inexact Linear System Solves
  - Existing work on inexactness in optimization algorithms
  - Review of the SQP methodology
  - Mechanisms of inexactness control
- Numerical Results
- Conclusion

37 Example 1: Burgers Equation in 1D
  min  ∫₀¹ (y(x) − y_d(x))² dx + α ∫₀¹ u²(x) dx
  subject to
  −ν y_xx(x) + y(x) y_x(x) = f(x) + u(x),  x ∈ (0, 1),
  y(0) = 0,  y(1) = 0.
Finite element discretization with linear elements. ν = 10⁻², α = 10⁻⁵, 100 equidistant subintervals.
SQP stopping criteria: ‖c(x_k)‖ < 10⁻⁶, ‖∇_x L(x_k, λ_k)‖ < 10⁻⁶.
For augmented system solves use GMRES with incomplete LU preconditioning.

38 Example 1: Inexactness Control in Tang. Step
[Figure: absolute inner solver stopping tolerance vs. CG iterations (over all SQP iterations)]
□ Controlled tolerance in the first CG iteration (one for every SQP iteration).
* Controlled tolerance in all other CG iterations.

39 Example 1: Inexactness Control in Tang. Step
[Figure: absolute inner solver stopping tolerance vs. CG iterations (over all SQP iterations)]
Total number of GMRES iterations: 2544. Runtime: 11 seconds.

40 Example 1: Inexactness Control in Tang. Step
[Figure: absolute inner solver stopping tolerance vs. CG iterations (over all SQP iterations)]
How do we pick a fixed tolerance for comparison?

41 Example 1: Inexactness Control in Tang. Step
[Figure: absolute inner solver stopping tolerance vs. CG iterations (over all SQP iterations)]
Pick the largest tolerance that recovers the same convergence profile (in terms of the number of SQP iterations and the quality of the solution).

42 Example 1: Inexactness Control in Tang. Step
[Figure: absolute inner solver stopping tolerance vs. CG iterations (over all SQP iterations)]
Fixed tolerance: total number of GMRES iterations: 5652 (was 2544). Runtime: 33 seconds (was 11).

43 Example 1: Inexactness Control in Tang. Step
[Figure: relative inner solver stopping tolerance vs. CG iterations (over all SQP iterations)]
With relative inner solver stopping tolerances, we never need to surpass the desired SQP stopping tolerances!

44 Example 2: Nonlinear Elliptic Problem in 2D
  minimize  ½ ∫_Ω (y(x) − y_0(x))² dx + ½ ∫_{∂Ω} u²(x) dx
  subject to
  −Δy(x) + y³(x) − y(x) = 0 in Ω,
  ∂y/∂n (x) = u(x) on ∂Ω.
The computational domain is the [0,1] × [0,1] square. Unstructured meshes generated by Triangle, partitioned using Metis.
Mesh sizes: 32K, 64K, 128K, 256K (≈ total number of variables). Partition sizes: 2, 4, 8, 16 (= number of processors).
For augmented system solves use GMRES with DD preconditioning.
Beowulf cluster (Mike Heroux, CSBSJU, MN and Sandia, NM): 16 Athlon 2.0 GHz nodes / 1 GB RAM / 100 Mbps Ethernet.

45 Example 2: Inexactness Control in Tang. Step
[Figure: absolute inner solver stopping tolerance vs. CG iterations]
□ Controlled tolerance in the first CG iteration (one for every SQP iteration).
* Controlled tolerance in all other CG iterations.

46 Example 2: Inexactness Control in Tang. Step
[Figure: absolute inner solver stopping tolerance vs. CG iterations]
How do we pick a fixed tolerance for comparison?

47 Example 2: Inexactness Control in Tang. Step
[Figure: absolute inner solver stopping tolerance vs. CG iterations]
Pick the largest tolerance that recovers the same convergence profile (in terms of the number of SQP iterations and the quality of the solution). Fixed tolerance:

48 Example 2: Inexactness Control in Tang. Step
Total number of GMRES iterations: fixed tol / controlled tol
  Mesh \ Part |    2    |    4    |    8    |   16
  32K         |  297/…  |   …/…   |   …/…   |  …/402
  64K         |  254/…  |   …/…   |   …/…   |   …/…
  128K        |  378/…  |   …/…   |   …/…   |   …/…
  256K        |  425/…  |   …/…   |   …/…   |  …/665
Savings 30% (tangential step computation only)
Wall time in seconds: fixed tol / controlled tol
  Mesh \ Part |   2   |   4   |   8   |  16
  32K         | 51/46 | 41/34 | 44/40 | 60/75
  64K         | 82/71 | 57/47 | 53/47 | 63/58
  128K        | 268/… |  …/…  |  …/…  |  …/…
  256K        | 661/… |  …/…  |  …/…  | …/182
Savings 15%

49 Example 3: Navier-Stokes Problem in 2D
Finite element discretization with the Taylor-Hood element pair. ν = …, α = 10⁻¹, δ = ….
SQP stopping criteria: ‖c(x_k)‖ < 10⁻⁶, ‖∇_x L(x_k, λ_k)‖ < 10⁻⁶.
For augmented system solves use GMRES with incomplete LU preconditioning (drop tolerance …).
Use full reorthogonalization for all tangential step computations.

50 Example 3: Inexactness Control in Tang. Step
[Figure: absolute inner solver stopping tolerance vs. CG iterations (over all SQP iterations)]
□ Controlled tolerance in the first CG iteration (one for every SQP iteration).
* Controlled tolerance in all other CG iterations.

51 Example 3: Inexactness Control in Tang. Step
[Figure: absolute inner solver stopping tolerance vs. CG iterations (over all SQP iterations)]
Total number of GMRES iterations: 2672.

52 Example 3: Inexactness Control in Tang. Step
[Figure: absolute inner solver stopping tolerance vs. CG iterations (over all SQP iterations)]
How do we pick a fixed tolerance for comparison?

53 Example 3: Inexactness Control in Tang. Step
[Figure: absolute inner solver stopping tolerance vs. CG iterations (over all SQP iterations)]
Pick the largest tolerance, by trial and error, that recovers the same convergence profile (in terms of the number of SQP iterations and the quality of the solution).

54 Example 3: Inexactness Control in Tang. Step
[Figure: absolute inner solver stopping tolerance vs. CG iterations (over all SQP iterations)]
Fixed tolerance: total number of GMRES iterations: 3404 (was 2672).

55 Example 3: Inexactness Control in Tang. Step: More Details
Stopping tolerances for the linear system solver:
               | inx. ctrl | 1e-12 | 1e-11 | 1e-10 | 1e-9   | 1e-8
  converges    | YES       | YES   | YES   | YES   | NO     | NO
  GMRES iter's | …         | …     | …     | …     | >10000 | >10000
  CG iter's    | …         | …     | …     | …     | >500   | >500
  SQP iter's   | …         | …     | …     | …     | >50    | >50
No theoretical justification.

56 Outline
- Motivation
  - Large-scale problems in PDE-constrained optimization
  - Inexactness in linear system solves arising in an SQP algorithm
- Trust-Region SQP Algorithm with Inexact Linear System Solves
  - Existing work on inexactness in optimization algorithms
  - Review of the SQP methodology
  - Mechanisms of inexactness control
- Numerical Results
- Conclusion

57 Conclusion
- Integrated iterative linear solvers in a trust-region SQP algorithm.
- Global convergence of the SQP algorithm is guaranteed through a mechanism of inexpensive and easily implementable stopping conditions for iterative linear system solvers.
- Eliminated the need to guess fixed solver tolerances, at the expense of a few vector norm computations and a full reorthogonalization in the tangential step computation: extra work < 1% of the cost of linear system solves (for a simple medium-scale problem).
- Numerical results indicate that the dynamic stopping conditions effectively reduce oversolves.
- Local convergence behavior of the algorithm must be investigated.
