A note on the iterative solution of weakly nonlinear elliptic control problems with control and state constraints


Quaderni del Dipartimento di Matematica, Università di Modena e Reggio Emilia, n. 79, October 2007

Emanuele Galligani
Dipartimento di Matematica Pura e Applicata "G. Vitali"
Università degli Studi di Modena e Reggio Emilia
Via Campi 213/b, 41100 Modena, Italy
galligani@unimo.it

Technical Report n. 79, October 2007, Dipartimento di Matematica, Università di Modena e Reggio Emilia

Abstract. This work concerns the solution of optimal control problems by means of nonlinear programming methods. The control problem is transcribed into a finite dimensional nonlinear programming problem by finite difference approximation. An iterative procedure for the solution of this nonlinear program is presented. An extensive numerical analysis of the behaviour of the method is reported for boundary control and distributed control problems with boundary conditions of Dirichlet, Neumann or mixed type.

1 Reaction-diffusion problems

When the physical system under consideration involves a reaction process accompanied by diffusion, the physical principle of conservation leads to a set of partial differential equations for the unknown quantities of the system. These quantities may be mass concentration in chemical reaction processes, temperature in heat conduction, neutron flux in nuclear reactors, population density in population dynamics, and many others. In certain problems of the physical and engineering sciences, a combination of these quantities is involved in a set of equations.

The reaction-diffusion equation with convection has the form

    ∂ϕ/∂t − div(σ grad ϕ) + v · grad ϕ + αϕ + g(x, t, ϕ) = 0    (1)

where ϕ = ϕ(x, t) is the density function at time t and position x in a diffusion medium R, σ = σ(x, t) is the diffusion coefficient, α = α(x, t) is the absorption term, v is the velocity vector and g(x, t, ϕ(x, t)) is the rate of change due to a reaction.
When the physical system involves two or more density functions ϕ_1, ϕ_2, ..., ϕ_r, the reaction-diffusion process is described by a set of equations of the form

    ∂ϕ_i/∂t − div(σ_i grad ϕ_i) + v_i · grad ϕ_i + α_i ϕ_i + g_i(x, t, ϕ_1, ϕ_2, ..., ϕ_r) = 0,    i = 1, ..., r    (2)

where σ_i(x, t), α_i(x, t) and v_i are the terms referred to ϕ_i(x, t).

This research was supported by the Italian Ministry of Education and Research (MIUR), PRIN 2006 project. This work was presented at the 22nd Biennial Conference on Numerical Analysis NA07, Dundee, June 26-29, 2007.
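For illustration only, here is a minimal 1D instance of a scalar reaction-diffusion equation of the kind above (no convection, homogeneous Dirichlet boundary, an enzyme-type rate term; all parameter choices are ours, not the report's), advanced in time by explicit Euler:

```python
import numpy as np

# 1D model problem: d(phi)/dt = sigma * phi_xx - alpha*phi - g(phi) on (0,1),
# with phi = 0 at both ends. Illustrative sketch only; parameters are our choice.
sigma, alpha = 0.1, 0.5
g = lambda p: p / (1.0 + p)                # enzyme-type reaction rate
n = 50
h = 1.0 / (n + 1)
dt = 0.4 * h**2 / sigma                    # explicit Euler stability: dt <= h^2/(2*sigma)
x = np.linspace(h, 1.0 - h, n)
phi = np.sin(np.pi * x)                    # initial density
for _ in range(2000):
    ext = np.concatenate(([0.0], phi, [0.0]))            # Dirichlet boundary values
    lap = (ext[2:] - 2.0 * ext[1:-1] + ext[:-2]) / h**2  # discrete Laplacian
    phi = phi + dt * (sigma * lap - alpha * phi - g(phi))
```

With positive absorption and a nonnegative reaction term the density decays toward the zero steady state while remaining nonnegative, which is a cheap sanity check on the signs in the equation.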

The multigroup diffusion approximation to Boltzmann's transport equation leads to the coupled system (2) with

    g_i(x, t, ϕ_1, ..., ϕ_r) = Σ_{j=1}^{r} a_ij(x, t) ϕ_j(x, t) − s_i(x, t, ϕ_1, ..., ϕ_r)    (3)

where the coefficients a_ii and a_ij, i ≠ j, are piecewise continuous functions on R and the s_i are source terms (i = 1, ..., r).

When the reaction-diffusion process reaches a steady state, the density function ϕ(x, t) and the coefficients are independent of the time t. This implies that ∂ϕ/∂t = 0 and therefore the governing equation with the effect of convection becomes

    −div(σ grad ϕ) + v · grad ϕ + αϕ + g(x, ϕ) = 0    (4)

In the case of r density functions, the corresponding equations become (i = 1, ..., r)

    −div(σ_i grad ϕ_i) + v_i · grad ϕ_i + α_i ϕ_i + g_i(x, ϕ_1, ϕ_2, ..., ϕ_r) = 0    (5)

In a variety of interesting applications to Ecology and Nuclear Physics, linear and nonlinear eigenvalue problems arise. For example, in the computation of the steady spatial distribution of a population of diffusing consumers (predators) in a special resource-consumer (prey-predator) ecological system, we must consider a diffusion model of the type ([10])

    −div(σ grad ϕ) + αϕ + g(x, ϕ) = (1/λ) θϕ,    λ > 0    (6)

with σ = σ(x) > 0, θ = θ(x) > 0, α = α(x) piecewise continuous functions on R; g(ϕ) = γ(ϕ)ϕ, where the trophic function γ(ϕ) is a continuously differentiable, positive and monotone nondecreasing function of ϕ for all ϕ ≥ 0. The constant λ (eigenvalue) is positive. On the boundary surface Γ of R we have the homogeneous condition ϕ(x) = 0, x ∈ Γ.

When the diffusion medium R is a bounded domain, the reaction-diffusion-convection equations are supplemented by suitable boundary conditions on the boundary surface Γ. The appropriate condition on the boundary depends on the physical mechanism surrounding the diffusion medium. Besides the three basic types of boundary conditions (Dirichlet, Neumann or Robin boundary conditions), in certain cases it is necessary to impose a nonlinear boundary condition.
For time dependent problems there is also an initial condition at time t = 0.

Typical expressions of g(x, t, ϕ) are:

1. Enzyme-substrate reaction model ([29], [40]):

    g(ϕ) = aϕ / (1 + bϕ),    a, b > 0    (7)

We remark that

    g′(ϕ) = a / (1 + bϕ)^2;    g″(ϕ) = −2ab / (1 + bϕ)^3

Here, g > 0 and g′ > 0 for ϕ > 0.

2. Chemical reaction model ([1]):

    g(ϕ) = a(c − ϕ) e^(−b/(1+ϕ)),    a, b, c > 0    (8)

(e.g. a = 5·10^4, b = 5, c = .4).

We remark that

    g′(ϕ) = −a e^(−b/(1+ϕ)) (ϕ^2 + (b + 2)ϕ + (1 − bc)) / (1 + ϕ)^2

    g″(ϕ) = −(ab / (1 + ϕ)^4) e^(−b/(1+ϕ)) (b(ϕ − c) + 2(1 + c)(1 + ϕ))

Here, g(ϕ) > 0 for 0 ≤ ϕ < c and g(ϕ) < 0 for ϕ > c.

3. Ginzburg-Landau oscillating BZ reaction model ([3], [3], []):

    g(ϕ) = aϕ(1 − ϕ^2),    a > 0    (9)

4. Spatially distributed communities model ([10]):

    g(ϕ) = aϕ^2 / (b + ϕ),    a > 0, b > 0    (10)

or

    g(ϕ) = aϕ log(1 + ϕ),    a > 0    (11)

(e.g. σ = .5, α = .7, v = 0 and R = [0, 1] × [0, 1], with a = ., b = 3. for (10) and a = 5·10^(−3) for (11), for the equation (4) subject to Dirichlet boundary conditions; and σ = .5, α = .85, θ = ., λ = .5 and R = [0, 1] × [0, 1], with a = ., b = 3. for (10) and a = .5 for (11), for the eigenvalue problem (6) with homogeneous Dirichlet boundary conditions). We remark that for (10) we have

    g′(ϕ) = (aϕ^2 + 2abϕ) / (b + ϕ)^2;    g″(ϕ) = 2ab^2 / (b + ϕ)^3

and for (11) we have

    g′(ϕ) = a log(1 + ϕ) + aϕ/(1 + ϕ);    g″(ϕ) = a(ϕ + 2) / (1 + ϕ)^2

then, in both cases, g > 0, g′ > 0 and g″ > 0 for ϕ > 0.

5. Fischer's population growth model ([40], [45]):

    g(ϕ) = aϕ(b − cϕ),    a, b, c > 0    (12)

or

    g(ϕ) = aϕ(ϕ − θ)(1 − ϕ),    a > 0, 0 < θ < 1    (13)

We remark that, for (13),

    g′(ϕ) = a(ϕ − θ)(1 − ϕ) + aϕ((1 + θ) − 2ϕ)

Here, for (13), g(ϕ) ≥ 0 for θ ≤ ϕ ≤ 1.

6. Budworm population dynamics model ([40]):

    g(ϕ) = −ϕ^2/(1 + ϕ^2) + rϕ(1 − ϕ/q),    r, q > 0    (14)

7. Radiation model:

    g(ϕ) = βe^(αϕ),    β ∈ R, α > 0    (15)

In thermal ignition and combustion problems we have β < 0 and α > 0, and then, in that case,

    g′(ϕ) = βαe^(αϕ) < 0

See [45] and [46] for an analysis of the existence of the solution of thermal ignition and combustion problems described by

    ∂ϕ/∂t − Δϕ + g(ϕ) = 0

In the Bratu eigenvalue problem (e.g. [9], [39]) we have g(ϕ) = −λe^ϕ in

    −Δϕ + α ∂ϕ/∂x_1 + g(ϕ) = 0

The solution exists for λ ≤ λ_c [28] and in the unit square domain the branch of solutions has a limit point at (λ_c, u_c).

8. Molecular interaction model ([47]):

    g(ϕ) = ϕ^2    (16)

9. A nonlinear oscillator ([49]) in an eigenvalue problem

    −Δϕ + g(ϕ) = 0,    g(ϕ) = λ sin ϕ    (17)

In [] a nonlinearity in several variables is g_i(ϕ) = (ϕ_i + sin ϕ_i cos ϕ_i), i = 1, ..., r. Another interesting case of this type is the complex Bratu problem ([39]). Taking ϕ = ϕ_1 + iϕ_2, the complex Bratu problem is equivalent to the following system

    −Δϕ_1 = λe^(ϕ_1) cos ϕ_2
    −Δϕ_2 = λe^(ϕ_1) sin ϕ_2

In [27], the authors show that the critical point (λ_c, u_c) of the real valued Bratu problem is a bifurcation point for the complex valued problem if we allow complex valued solutions.

There is a huge amount of literature concerning the numerical solution of these initial and boundary value problems. When R is a two or three dimensional domain, these problems belong to the class of very large scientific computing problems for which the use of computers with vector-parallel architecture is appropriate. Much of the work which has been done on the use of parallel and vector computers for solving the partial differential equations (1)-(6) with finite difference or finite element discretization methods has treated the theme of the decomposition of a problem into independent portions and of the reordering of the unknowns of the discretized equations in order to enhance such a decomposition. Special attention has been received by the Multigrid and Domain Decomposition methods (see e.g. [26] and [50]).
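As a quick numerical sanity check of the derivative formulas catalogued above, the enzyme-substrate nonlinearity g(ϕ) = aϕ/(1 + bϕ) of model (7) can be tested against a central difference (the parameter values a, b are arbitrary choices for the test, not values from the report):

```python
# Check of the enzyme-substrate model g(phi) = a*phi/(1+b*phi) and its
# derivative g'(phi) = a/(1+b*phi)^2 against a central difference.
a, b = 2.0, 3.0
g  = lambda p: a * p / (1.0 + b * p)
dg = lambda p: a / (1.0 + b * p) ** 2
phi, h = 1.0, 1e-6
fd = (g(phi + h) - g(phi - h)) / (2.0 * h)   # central difference approximation
```

For a = 2, b = 3 and ϕ = 1 the analytic derivative is a/(1+b)^2 = 0.125, and the finite difference agrees to roughly the square of the step size, confirming g′ > 0 for ϕ > 0.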
Also the Operator Splitting and Alternating Direction methods have been extensively considered ([35]). In the large class of the Operator Splitting methods we have developed and analyzed the Additive Operator Splitting (AOS) methods. These methods are characterized by a high degree of decomposition. Generally, this property increases the degree of multiprogramming (DOM), i.e., the number of active processes of a multiprocessor system (DOM is a measure of the performance of a multiprocessor system). This makes the Additive Operator Splitting methods ideally suited for implementation on parallel computers.

We discretize, with respect to the spatial variable, the partial differential equations (1)-(6) with the box integration method ([52, Chap. 6]). This makes it possible to apply the theory of monotone operators to the study of the stability of the approximate solution. The large sets of nonlinear algebraic equations generated by the discretization of the equations (1)-(6) are solved by using Newton's method. This method is one of the most used and powerful tools for solving nonlinear equations in several variables. Newton's method is attractive for its local convergence properties, even if it may fail to converge when not properly initialized: initialization may therefore be a complicated question when one applies this method to difference schemes generated by the discretization of partial differential equations, especially of elliptic type as (4) or (6). In a neighbourhood of the exact solution, Newton's method generates quadratically convergent approximations. To maintain a superlinear convergence and to enlarge the attraction basin of the exact solution, one may use various Newton-like methods. In order to solve efficiently and accurately the large systems of nonlinear difference equations generated by the discretization with the finite difference method of the equation (4) or (6), we have developed and analyzed a Modified Newton Iterative method and a Simplified Newton Iterative method (a two-stage iterative method) which are very well suited for implementation on vector-parallel computers ([2], [24], [19], [20]).

In the recent past, notable consideration has also been given to distributed and boundary optimal control problems for the time dependent and steady state reaction-diffusion equations. Important control problems for the stationary equation (4), subject to control and state inequality constraints, with a quadratic cost functional, have been investigated in [36], [37], [38], [30], [31], [32], [51], [11].
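As an illustration of this Newton approach, here is a minimal 1D analogue (our own toy setup: standard central differences instead of the box scheme, a cubic nonlinearity and a forcing term of our own choosing):

```python
import numpy as np

# Newton's method for the 1D analogue of a steady semilinear equation:
# -(sigma u')' + alpha*u + g(u) = f on (0,1), u(0) = u(1) = 0,
# discretized by central finite differences on a uniform grid. Sketch only.
def solve_semilinear(n=50, sigma=1.0, alpha=1.0, tol=1e-10, maxit=20):
    h = 1.0 / (n + 1)
    g  = lambda u: u**3            # illustrative cubic nonlinearity
    dg = lambda u: 3.0 * u**2
    x = np.linspace(h, 1.0 - h, n)
    f = np.sin(np.pi * x)          # forcing, so the solution is nontrivial
    u = np.zeros(n)                # initial guess
    # constant part of the Jacobian: tridiagonal diffusion + absorption
    A = (sigma / h**2) * (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) \
        + alpha * np.eye(n)
    for _ in range(maxit):
        F = A @ u + g(u) - f                 # nonlinear residual
        if np.linalg.norm(F) < tol:
            break
        J = A + np.diag(dg(u))               # Jacobian of the difference system
        u -= np.linalg.solve(J, F)           # Newton step
    return u, np.linalg.norm(A @ u + g(u) - f)

u, res = solve_semilinear()
```

Starting from the zero guess, the weakly nonlinear system converges in a handful of quadratically convergent iterations; a poor initial guess or a stiffer nonlinearity is exactly what motivates the damped Newton-like variants discussed in the text.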
By introducing a suitable finite difference discretization scheme developed in [38], [36], [37], these elliptic control problems are transcribed into a large finite dimensional nonlinear constrained optimization problem with a sparse structure (see [23, sections 2, 3, 5]). In particular, for the nonlinear elliptic equation with g(ϕ) given by 3., 5. and 7., elliptic boundary control problems with Neumann boundary condition and elliptic distributed control problems with Neumann and Dirichlet boundary conditions, with control and state constraints of simple type, have been considered; even bang-bang controls have been included.

2 Newton-like approximation approach

The transcription of control problems governed by equation (4) leads to a nonlinear mathematical programming problem of the form

    minimize f(z)    (18)

subject to

    g_1(z) = 0,    g_2(z) ≤ 0    (19)

where z ∈ R^n, f : R^n → R, g_1 : R^n → R^neq and g_2 : R^n → R^m (neq < n). We assume that f(z), g_1(z), g_2(z) are twice continuously differentiable and that the first and second derivatives of the objective function and of the constraints are available. The Karush-Kuhn-Tucker (KKT) optimality conditions for problem (18)-(19), written in slack variable form, are

    E_1 = ∇f(z) + ∇g_1(z)λ_1 + ∇g_2(z)λ_2 = 0
    E_2 = g_1(z) = 0
    E_3 = g_2(z) + s = 0    (20)
    E_4 = Λ_2 S e_m = 0
    s ≥ 0;  λ_2 ≥ 0

where s, λ_2 ∈ R^m, Λ_2 = diag(λ_2), S = diag(s). The vector e_m indicates the vector of m components whose values are all equal to 1. The system (20) can be written as

    H(v) = 0,    s ≥ 0;  λ_2 ≥ 0    (21)

where

    v = (z, λ_1, λ_2, s)^T,    H(v) = (E_1, E_2, E_3, E_4)^T

The Jacobian matrix of H is the matrix of order n + neq + 2m

    H′(v) = [ Q    B    C    0   ]
            [ B^T  0    0    0   ]
            [ C^T  0    0    I   ]
            [ 0    0    S    Λ_2 ]

where Q is the Hessian matrix of the Lagrangian function of the problem (18)-(19),

    Q = ∇^2 L(z) = ∇^2 f(z) + Σ_{i=1}^{neq} λ_{1,i} ∇^2 g_{1,i}(z) + Σ_{i=1}^{m} λ_{2,i} ∇^2 g_{2,i}(z)

and B = ∇g_1(z), C = ∇g_2(z). Here, ∇^2 f(z), ∇^2 g_{1,i}(z), ∇^2 g_{2,i}(z) are the Hessian matrices of the function f(z) and of the i-th components of the constraints g_1(z) and g_2(z), respectively; λ_{1,i} and λ_{2,i} are the i-th components of λ_1 and λ_2, respectively.

Under the classic assumptions on problem (18)-(19) of existence, smoothness, regularity, second order sufficiency and strict complementarity (see e.g. [34]), the Jacobian matrix H′(z*, λ_1*, λ_2*, s*) of H(v) in (21) is nonsingular: here (z*, λ_1*, λ_2*) is the solution of problem (18)-(19) and s* = −g_2(z*). This result motivates the use of Newton's method for solving the nonlinear system (21) with an initial guess v^(0), satisfying λ_2^(0) > 0, s^(0) > 0, in a neighbourhood of v* = (z*, λ_1*, λ_2*, s*). In order to avoid that the iterates of Newton's method, when they reach the boundary of the nonnegative orthant (s ≥ 0, λ_2 ≥ 0), are forced to stick to the boundary, we replace the complementarity equation E_4 = 0 in (21) with a perturbed complementarity equation Λ_2 S e_m = ρe_m, ρ > 0. Thus, at each stage k (k = 0, 1, 2, ...) of Newton's method we must solve the perturbed Newton equation

    H′(v^(k)) Δv + H(v^(k)) = ρ_k ẽ    (22)

where ẽ = (0^T_{n+neq+m}, e^T_m)^T and the parameter ρ_k goes to 0 when k diverges. Then, we must update the current iterate by

    v^(k+1) = v^(k) + α_k Δv^(k)    (23)

where Δv^(k) is the solution of the linear system (22). This method determines a sequence {v^(k)} that strictly satisfies the constraints in the perturbed KKT system

    H(v) = ρ_k ẽ,    s > 0;  λ_2 > 0    (24)

and the KKT optimality conditions (21) only in the limit.
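The damped perturbed-Newton (interior-point) iteration just described can be sketched on a one-variable toy problem. Everything below (the problem, the sign conventions, the centering and damping parameters) is our own illustrative choice, not the report's setup: minimize (z − 2)^2 subject to z ≤ 1, written with c(z) = z − 1 ≤ 0, slack form c(z) + s = 0, s ≥ 0, multiplier λ ≥ 0:

```python
import numpy as np

# Toy primal-dual interior-point loop: Newton on the perturbed KKT residual,
# with a fraction-to-boundary damping that keeps s and lambda strictly positive.
# Exact solution: z* = 1, lambda* = 2, s* = 0.
def toy_interior_point(z=0.0, lam=1.0, s=1.0, sigma=0.2, iters=40):
    for _ in range(iters):
        rho = sigma * s * lam                  # perturbation (centering) parameter
        H = np.array([2.0 * (z - 2.0) + lam,   # stationarity: f'(z) + lam*c'(z)
                      (z - 1.0) + s,           # slacked inequality c(z) + s
                      lam * s - rho])          # perturbed complementarity
        J = np.array([[2.0, 1.0, 0.0],
                      [1.0, 0.0, 1.0],
                      [0.0, s,   lam]])        # Jacobian of H w.r.t. (z, lam, s)
        dz, dlam, ds = np.linalg.solve(J, -H)
        alpha = 1.0                            # damping: stay in the interior
        if ds   < 0.0: alpha = min(alpha, -0.995 * s   / ds)
        if dlam < 0.0: alpha = min(alpha, -0.995 * lam / dlam)
        z, lam, s = z + alpha * dz, lam + alpha * dlam, s + alpha * ds
    return z, lam, s

z, lam, s = toy_interior_point()
```

The iterates approach (z*, λ*) from the interior of the nonnegative orthant, with λs driven to zero through the shrinking perturbation rather than by hitting the boundary prematurely.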
Since the iterates v^(k) reach the solution from the interior of the nonnegative orthant (feasible region), such a method is also referred to as an Interior Point method [53]. The crucial points in the analysis of this iterative procedure are the choices of the parameters ρ_k and α_k and the solution of the linear system (22).

We define ρ_k = σ_k μ_k, where 0 < σ_min ≤ σ_k ≤ σ_max < 1 and μ_k satisfies the condition

    μ_k ‖ẽ‖ ≤ ‖H(v^(k))‖    (25)

(‖·‖ denotes the Euclidean norm). Thus, equation (22) may be interpreted as the k-th iteration of the Inexact Newton method for solving the KKT system (21) with the forcing term σ_k ∈ (0, 1) [13]. Therefore, the vector σ_k μ_k ẽ has the meaning of a residual. In the Inexact Newton method with forcing term σ_k we have

    ‖H′(v^(k)) Δv^(k) + H(v^(k))‖ ≤ σ_k ‖H(v^(k))‖    (26)

We can prove ([14]) that for values of μ_k that satisfy

    μ_k^(1) = s^(k)T λ_2^(k) / m ≤ μ_k ≤ ‖H(v^(k))‖ / √m = μ_k^(2)    (27)

the vector Δv^(k) that satisfies (26) is a descent direction at v^(k) for the merit function Φ(v) = ‖H(v)‖^2, i.e. ∇Φ(v^(k))^T Δv^(k) ≤ 0. Computation of the exact solution Δv^(k) of the equation (22) can be expensive and may not be justified when v^(k) is relatively far from the solution of (21). Therefore, one might prefer to compute some approximate solution of (22) by performing some iterations of an inner linear iterative solver for the equation (22) with an adaptive stopping criterion of the form

    ‖r^(k)‖ ≤ δ_k ‖H(v^(k))‖    (28)

where

    r^(k) = H′(v^(k)) Δv^(k) + H(v^(k)) − σ_k μ_k ẽ    (29)

We indicate the approximate solution of (22) again by Δv^(k). The choice δ_k = 0 implies that the equation (22) must be solved exactly; solving the equation (22) only approximately amounts to a relaxation of the forcing term in the Inexact Newton method. The damping parameter α_k in (23) must be selected in such a way as to guarantee that the merit function Φ(v) is reduced at each iteration and that the components of the vectors s^(k) and λ_2^(k) remain positive for all k = 0, 1, 2, ... In order to generate a sequence of iterates v^(k) with (s^(k), λ_2^(k)) > 0 that converges to a solution v* of the system (21) under standard assumptions on KKT systems, it is necessary to define a path to be followed by the iterates v^(k) which keeps them from coming too close to the boundary of the nonnegative orthant (s ≥ 0, λ_2 ≥ 0).
For the generation of this trajectory, we use the following two functions of α defined in [17] (see also [14]), k = 0, 1, 2, ...:

    ϕ^(k)(α) = min_{i=1,...,m} ( S^(k)(α) Λ_2^(k)(α) e_m )_i − γ_k τ_1 ( s^(k)(α)^T λ_2^(k)(α) / m ) ≥ 0

    ψ^(k)(α) = s^(k)(α)^T λ_2^(k)(α) − γ_k τ_2 ‖H(v^(k)(α))‖ ≥ 0

where

    τ_1 = min_{i=1,...,m} ( S^(0) Λ_2^(0) e_m )_i / ( s^(0)T λ_2^(0) / m );    τ_2 = s^(0)T λ_2^(0) / ‖H(v^(0))‖;    γ_k ∈ [0, 1)

and

    v^(k)(α) = ( z^(k)(α), λ_1^(k)(α), λ_2^(k)(α), s^(k)(α) )^T = v^(k) + α Δv^(k)

with s^(0) > 0 and λ_2^(0) > 0. Thus τ_1 > 0 and τ_2 > 0. Clearly v^(k) = v^(k)(0) and v^(k+1) = v^(k)(α_k). The vector Δv^(k) is the approximate solution of (22) with residual (28)-(29). We can prove [22] that there exist two positive numbers α_k^(1) and α_k^(2) in (0, 1] such that

    ϕ^(k)(α) ≥ 0    for all α ∈ (0, α_k^(1)]
    ψ^(k)(α) ≥ 0    for all α ∈ (0, α_k^(2)]

when ϕ^(k)(0) ≥ 0 and ψ^(k)(0) ≥ 0. If we define

    ᾱ_k = min{α_k^(1), α_k^(2), 1} ∈ (0, 1]

and

    η_k = 1 − ᾱ_k (1 − (σ_k + δ_k)),    with σ_k + δ_k ≤ σ_max + δ_max < 1

we have η_k < 1 and

    ‖H′(v^(k)) ᾱ_k Δv^(k) + H(v^(k))‖ ≤ η_k ‖H(v^(k))‖    (30)

That is, the vector ᾱ_k Δv^(k) satisfies the condition on the residual of the Inexact Newton method with forcing term η_k. We select the step length α_k in (23) by performing a reduction of ᾱ_k with the Inexact Newton Backtracking algorithm described in [16], until an acceptable α_k = θ^t ᾱ_k is found, where t is the smallest nonnegative integer such that α_k satisfies

    ‖H(v^(k) + α_k Δv^(k))‖ ≤ (1 − βα_k (1 − (σ_k + δ_k))) ‖H(v^(k))‖    (31)

with θ, β ∈ (0, 1). Since (1 − βα_k(1 − (σ_k + δ_k))) < 1, this inequality asserts that

    ‖H(v^(k+1))‖ ≤ ξ_k ‖H(v^(k))‖,    ξ_k ≤ ξ < 1    (32)

Inequalities (30) and (31) are the two basic conditions for the convergence of the Inexact Newton method [48, Sect. 6.4]. Besides, if H(v^(k)) ≠ 0 for all k = 0, 1, 2, ..., we can prove [22] that the inner products s^(k)T λ_2^(k) are bounded above and bounded away from zero, and all components of S^(k) Λ_2^(k) e_m are bounded above and bounded away from zero. Specifically, (s^(k), λ_2^(k)) > 0 for all k = 0, 1, 2, ...; the sequence {Φ(v^(k))} is monotone nonincreasing; therefore, Φ(v^(k)) ≤ Φ(v^(0)) for all k = 0, 1, ... In [22] and [15] the convergence properties of the method (22)-(23) are stated under standard assumptions on the Karush-Kuhn-Tucker systems considered in [17] and [15].
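A minimal sketch of such a residual-based backtracking rule on a generic nonlinear system; the toy residual map H, the parameters θ and β, and the scalar t standing in for σ_k + δ_k are all our own illustrative choices:

```python
import numpy as np

# Backtracking: shrink alpha by theta until the residual-decrease condition
# ||H(v + alpha*dv)|| <= (1 - beta*alpha*(1 - t)) * ||H(v)|| holds,
# where t plays the role of the combined forcing term sigma_k + delta_k.
def backtrack(H, v, dv, t, theta=0.5, beta=1e-4, alpha=1.0, max_back=30):
    nrm = np.linalg.norm(H(v))
    for _ in range(max_back):
        if np.linalg.norm(H(v + alpha * dv)) <= (1.0 - beta * alpha * (1.0 - t)) * nrm:
            return alpha
        alpha *= theta
    return alpha

# Example: toy residual H(v) = (v0^2 - 1, v1 - v0), exact Newton direction at (2, 0)
H = lambda v: np.array([v[0]**2 - 1.0, v[1] - v[0]])
J = lambda v: np.array([[2.0 * v[0], 0.0], [-1.0, 1.0]])
v = np.array([2.0, 0.0])
dv = np.linalg.solve(J(v), -H(v))
a = backtrack(H, v, dv, t=0.0)
```

Here the full Newton step already satisfies the decrease condition, so the loop accepts α = 1; closer to the boundary of the feasible region, the fraction-to-boundary bound ᾱ_k would cap the starting step before backtracking even begins.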
By omitting the iteration index k, the linear perturbed Newton equation (22) can be written as

    Q Δz + B Δλ_1 + C Δλ_2 = −E_1
    B^T Δz = −E_2
    C^T Δz + Δs = −E_3
    S Δλ_2 + Λ_2 Δs = −E_4 + ρe_m

From the last equation we can deduce

    Δs = Λ_2^(-1) [ −S Δλ_2 − E_4 + ρe_m ]

and then the system (22) can be rewritten in the reduced form

    [ Q     B    C            ] [ Δz   ]   [ −E_1 ]
    [ B^T   0    0            ] [ Δλ_1 ] = [ −E_2 ]    (33)
    [ C^T   0    −Λ_2^(-1) S  ] [ Δλ_2 ]   [ −g_2(z) − ρΛ_2^(-1) e_m ]

The coefficient matrix of this system is symmetric and indefinite. System (22) is not necessarily an ill conditioned system of equations; in actual computations, however, care must be taken during the reduction of the system (22) to the form (33), to avoid cancellation due to very small elements in Λ_2^(-1) S. By a further substitution, from the third block equation in (33), we obtain

    Δλ_2 = S^(-1) [ Λ_2 C^T Δz + Λ_2 E_3 − E_4 + ρe_m ]

Thus, the system (22) can be written in the condensed form

    [ A     B ] [ Δz   ]   [ c    ]
    [ B^T   0 ] [ Δλ_1 ] = [ −E_2 ]    (34)

with

    A = Q + C S^(-1) Λ_2 C^T,    c = −E_1 − C S^(-1) [ Λ_2 E_3 − E_4 + ρe_m ]

In actual computations, as the variables s_i and λ_{2,i} approach the bounds, the elements of S^(-1) Λ_2 may become very large. In this case it is necessary to introduce a preconditioning strategy for greatly improving the performance of the method. It is well known that, if the symmetric matrix Q is positive definite on the null space of B^T and B^T is a full row rank matrix, the condensed system (34) may be solved with the Hestenes method of multipliers ([34, p. 46], [25]). The standard assumptions on the Karush-Kuhn-Tucker systems considered in [17] and [15] assure that this property on Q and B^T holds. However, numerical experience on the optimal control of physical systems described by elliptic reaction-diffusion equations with box constraints of simple type on the control and the state ([6], [8]) has shown that it is more efficient to solve system (34) with the preconditioned conjugate gradient method with an indefinite preconditioner ([33]), together with a sparse Choleski-like factorization of the preconditioner matrix. The Fortran 90 package that implements the sparse Choleski-like factorization of sparse matrices, called BLKFCLT and introduced in [7], is downloadable from the website dm.unife.it/blkfclt.

3 Numerical examples
3.1 Elliptic control problems

In the following we state the elliptic control problems with control and state constraints. We consider R ⊂ R^2 a bounded domain with piecewise smooth boundary Γ; the point x = (x_1, x_2)^T belongs to R̄ = R ∪ Γ. We consider the elliptic boundary control problem with Neumann boundary condition (EBC) and elliptic distributed control problems with Neumann and Dirichlet boundary conditions (EDC I, EDC II, EDC III).

EBC: Determine a boundary control function u ∈ L^2(Γ) which minimizes the cost functional of tracking type (γ ≥ 0)

    F(y, u) = (1/2) ∫_R (y(x) − y_d(x))^2 dx + (γ/2) ∫_Γ (u(x) − u_d(x))^2 dx

with given functions y_d ∈ C(R̄) and u_d ∈ L^2(Γ), subject to the state equations

    on R:  −Δy(x) − y(x) + y(x)^3 = 0
    on Γ:  ∂_ν y(x) = u(x)

and the inequality constraints on control and state

    u_l(x) ≤ u(x) ≤ u_r(x)  on Γ;    y(x) ≤ y_r(x)  on R

EDC I: Determine a distributed control function u ∈ L^2(R) which minimizes the cost functional of tracking type (γ ≥ 0)

    F(y, u) = (1/2) ∫_R (y(x) − y_d(x))^2 dx + (γ/2) ∫_R (u(x) − u_d(x))^2 dx

with given functions y_d ∈ C(R̄) and u_d ∈ L^2(R), subject to the state equations

    on R:  −Δy(x) − y(x) + y(x)^3 = u(x)
    on Γ:  y(x) = 0

and the inequality constraints on control and state

    u_l(x) ≤ u(x) ≤ u_r(x)  on R;    y(x) ≤ y_r(x)  on R

EDC II: Determine a distributed control function u ∈ L^2(R) which minimizes the cost functional of tracking type (γ ≥ 0)

    F(y, u) = (1/2) ∫_R (y(x) − y_d(x))^2 dx + (γ/2) ∫_R (u(x) − u_d(x))^2 dx

with given functions y_d ∈ C(R̄) and u_d ∈ L^2(R), subject to the state equations

    on R:  −Δy(x) − e^(y(x)) = u(x)
    on Γ:  y(x) = 0    or    ∂_ν y(x) + y(x) = 0

and the inequality constraints on control and state

    u_l(x) ≤ u(x) ≤ u_r(x)  on R;    y(x) ≤ y_r(x)  on R

EDC III: Determine a distributed control function u ∈ L^2(R) which minimizes the cost functional

    F(y, u) = ∫_R (M u(x) − K u(x) y(x)) dx

with nonnegative constants M and K, subject to the state equations

    on R:  −Δy(x) − y(x)(a(x) − u(x) − b y(x)) = 0
    on Γ:  ∂_ν y(x) = 0

and the inequality constraints on control and state

    u_l(x) ≤ u(x) ≤ u_r(x)  on R;    y(x) ≤ y_r(x)  on R

In these problems, ∂_ν y(x) denotes the differentiation along the normal ν to Γ, directed away from R. An optimal solution of each of these problems is denoted by u* and y*. We assume that an optimal solution z* = (y*, u*)^T exists.
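To make the transcription concrete, here is a small 1D analogue of a distributed control problem of EDC I type without the box constraints, so that the first-order optimality (KKT) system is a square nonlinear system solvable by plain Newton's method. The grid size, the value of γ and the target y_d are arbitrary choices of ours:

```python
import numpy as np

# minimize 0.5*||y - yd||^2 + 0.5*gamma*||u||^2  subject to  A y + y^3 = u,
# where A is the 1D Dirichlet finite-difference Laplacian. Newton's method is
# applied to the Lagrange (KKT) system in state y, control u, adjoint p.
n, gamma = 30, 1e-2
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
yd = np.sin(np.pi * x)                        # target state
y, u, p = np.zeros(n), np.zeros(n), np.zeros(n)
I = np.eye(n)
for _ in range(30):
    G = np.diag(3.0 * y**2)                   # Jacobian of y -> y^3
    F = np.concatenate([y - yd + (A.T + G) @ p,   # stationarity in y
                        gamma * u - p,            # stationarity in u
                        A @ y + y**3 - u])        # state equation
    if np.linalg.norm(F) < 1e-10:
        break
    J = np.block([[I + np.diag(6.0 * y * p), np.zeros((n, n)), A.T + G],
                  [np.zeros((n, n)), gamma * I, -I],
                  [A + G, -I, np.zeros((n, n))]])
    step = np.linalg.solve(J, -F)
    y, u, p = y + step[:n], u + step[n:2*n], p + step[2*n:]
```

Adding the box constraints u_l ≤ u ≤ u_r and y ≤ y_r turns this square system into a KKT system with inequalities, which is exactly what the interior-point machinery described earlier is designed to handle.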

[Figure 1: problem EBC; optimal state and optimal control.]

3.2 Description of the numerical experiments

The Newton-like approximation approach described in Section 2, referred to in the tables below as INM-PCG (Inexact Newton Method - Preconditioned Conjugate Gradient), for the solution of the optimization problem (18)-(19), has been implemented in a Fortran 90 code on a workstation HP zx6000 with an Intel Itanium 2 processor (1.3 GHz) and Gb of RAM. The code has been compiled with the +O3 optimization option of the HP Fortran compiler.

Figures 1-8 show the computed state and control variables, for different values of the parameters, for the control problems with the number N of grid points for each axis x_1, x_2 equal to 99. In all the problems the domain R̄ = R ∪ Γ is the unit square in the Cartesian x_1 x_2 plane. We list below the values of the parameters:

EBC (Figures 1 and 2): u_l(x) = .8, u_r(x) = .5, y_r(x) = .7, y_d(x) = x_1(1 − x_1) + x_2(1 − x_2), u_d(x) = 0; γ = . (for Figure 1); γ = (for Figure 2);

EDC I (Figures 3 and 4): u_l(x) = .5, u_r(x) = 4.5, y_r(x) = .85, y_d(x) = 1 + x_1(1 − x_1) + x_2(1 − x_2), u_d(x) = 0; γ = . (for Figure 3); γ = (for Figure 4);

EDC II (Figures 5 and 6): y_d(x) = sin(πx_1) sin(πx_2), u_d(x) = 0; for the Dirichlet condition (Figure 5): u_l(x) = −5, u_r(x) = 5, y_r(x) = ., γ = .; for the Neumann condition (Figure 6): u_l(x) = −8, u_r(x) = 9, y_r(x) = .37, γ = ;

EDC III (Figures 7 and 8): u_l(x) = .7, u_r(x) = , y_r(x) = 7., a(x) = sin(πx_1 x_2), b = ; M = and K = .8 (for Figure 7); M = and K = (for Figure 8).

[Figure 2: problem EBC; optimal state and optimal control.]

[Figure 3: problem EDC I; optimal state and optimal control.]

[Figure 4: problem EDC I; optimal state and optimal control.]

[Figure 5: problem EDC II, Dirichlet condition; optimal state and optimal control.]

Figures 9 and 10 show the structures of the Jacobian matrix of the equality constraints and of the Hessian matrix of the Lagrangian function when N = 5. Furthermore, in Tables 1-4 (from [6]) we report a comparison of the INM-PCG method with the KNITRO codes, version 4. Both direct ([3], [4]) and iterative ([4]) inner solvers for KNITRO are considered. Here, the term e-6 indicates 10^(-6); opttol is a tolerance parameter used by the KNITRO codes, and the INM-PCG method stops when the Euclidean norm of H(v^(k)) is less than 10^(-8); the choice of the values for the starting points is described in [6]. In the experiments, the INM-PCG method uses the same input AMPL models ([18]) of the discretized PDE problems as KNITRO.

References

[1] Aris R.: The Mathematical Theory of Diffusion and Reaction in Permeable Catalysts, vol. I, II, Clarendon Press, Oxford, 1975.

[Figure 6: problem EDC II, Neumann condition; optimal state and optimal control.]

[Figure 7: problem EDC III; optimal state and optimal control.]

[Figure 8: problem EDC III; optimal state and optimal control.]

[Figure 9: problem EBC; structure of (∇g_1(z))^T (nz = 85) and structure of Q (nz = 45).]

[Figure 10: problem EDC III; structure of (∇g_1(z))^T and structure of Q.]

[Table 1: Comparison of Inexact Newton Method + Preconditioned Conjugate Gradient with KNITRO v4 on test problem EBC (γ =.); columns: N, Solver, opttol, f(x^(it)), ‖H(v^(it))‖, time, it (inn).]

[Table 2: Comparison of Inexact Newton Method + Preconditioned Conjugate Gradient with KNITRO v4 on test problem EDC I (γ =.); columns as in Table 1.]

[Table 3: Comparison of Inexact Newton Method + Preconditioned Conjugate Gradient with KNITRO v4 on test problem EDC III (M =, K =.8); columns as in Table 1. Footnote a: current solution estimate cannot be improved by the code.]

[2] Bai Z.Z.: A class of two-stage iterative methods for systems of weakly nonlinear equations, Numer. Algorithms 14 (1997).
[3] Byrd R.H., Gilbert J.C., Nocedal J.: A trust region method based on interior point techniques for nonlinear programming, Math. Program. 89 (2000).
[4] Byrd R.H., Hribar M.E., Nocedal J.: An interior point algorithm for large scale nonlinear programming, SIAM J. Optim. 9 (1999).
[5] Bonettini S., Galligani E., Ruggiero V.: An inexact Newton method combined with Hestenes multipliers scheme for the solution of Karush-Kuhn-Tucker systems, Appl. Math. Comput. (2005).
[6] Bonettini S., Galligani E., Ruggiero V.: Inner solvers for interior point methods for large scale nonlinear programming, Comput. Optim. Appl. 37 (2007).
[7] Bonettini S., Ruggiero V.: Some iterative methods for the solution of a symmetric indefinite KKT system, Comput. Optim. Appl. 38 (2007).
[8] Bonettini S., Ruggiero V., Tinti F.: On the solution of indefinite systems arising in nonlinear programming problems, Numer. Linear Algebra Appl. (2007), to appear.
[9] Brown P.N., Saad Y.: Hybrid Krylov methods for nonlinear systems of equations, SIAM J. Sci. Stat. Comput. 11 (1990).
[10] Buffoni G., Griffa A., Li Z., de Mottoni P.: Spatially distributed communities: the resource-consumer system, J. Math. Biol. 33 (1995).
[11] Cañada A., Gámes J.L., Montero J.A.: Study of an optimal control problem for diffusive nonlinear elliptic equations of logistic type, SIAM J. Control Optim. 36 (1998).

[Table 4: Comparison of Inexact Newton Method + Preconditioned Conjugate Gradient with KNITRO v4 on test problem EDC III (M =, K =); columns as in Table 1.]

[12] Chawla M.M., Al-Zanaidi M.A.: An extended trapezoidal formula for the diffusion equation in two space dimensions, Computers Math. Applic. (2001).
[13] Dembo R.S., Eisenstat S.C., Steihaug T.: Inexact Newton methods, SIAM J. Numer. Anal. 19 (1982).
[14] Durazzi C.: On the Newton interior-point method for nonlinear programming problems, J. Optim. Theory Appl. 104 (2000).
[15] Durazzi C., Ruggiero V.: Global convergence of the Newton interior-point method for nonlinear programming, J. Optim. Theory Appl. 120 (2004).
[16] Eisenstat S.C., Walker H.F.: Globally convergent inexact Newton methods, SIAM J. Optim. 4 (1994).
[17] El-Bakry A.S., Tapia R.A., Tsuchiya T., Zhang Y.: On the formulation and theory of the Newton interior-point method for nonlinear programming, J. Optim. Theory Appl. 89 (1996).
[18] Fourer R., Gay D.M., Kernighan B.W.: AMPL, A Modelling Language for Mathematical Programming, Second Edition, Brooks/Cole Thomson Learning, Pacific Grove, 2003.
[19] Galligani E.: A two-stage iterative method for solving a weakly nonlinear parametrized system, Intern. J. Computer Math. 79 (2002).
[20] Galligani E.: A two-stage iterative method for solving weakly nonlinear systems, Atti Sem. Mat. Fis. Univ. Modena L (2002).
[21] Galligani E.: The Newton arithmetic mean method for the solution of systems of nonlinear equations, Appl. Math. Comput. 134 (2003).
[22] Galligani E.: Analysis of the convergence of an inexact Newton method for solving Karush-Kuhn-Tucker systems, Atti Sem. Mat. Fis. Univ. Modena e Reggio Emilia LII (2004).
[23] Galligani E.: Iterative solution of elliptic control problems with control and state constraints, Atti Sem. Mat. Fis. Univ. Modena e Reggio Emilia LIII (2005).
[24] Galligani E.: On solving a special class of weakly nonlinear finite-difference systems, Intern. J. Computer Math. (2007), to appear.
[25] Galligani I., Ruggiero V.: Numerical solution of equality constrained quadratic programming problems on vector-parallel computers, Optim. Meth. Software (1993).
[26] Greenbaum A.: Iterative Methods for Solving Linear Systems, SIAM, Philadelphia, 1997.
[27] Henderson M.E., Keller H.B.: Complex bifurcation from real paths, SIAM J. Appl. Math. 50 (1990).
[28] Keller H.B., Cohen D.S.: Some positive problems suggested by nonlinear heat generation, J. Math. Mech. 16 (1967).
[29] Kernevez J.P.: Enzyme Mathematics, North Holland, Amsterdam, 1980.
[30] Ito K., Kunisch K.: Augmented Lagrangian-SQP methods for nonlinear optimal control problems of tracking type, SIAM J. Optimization 6 (1996).
[31] Kunisch K., Volkwein S.: Augmented Lagrangian-SQP techniques and their approximations, Contemporary Math. (1997).
[32] Leung A., Stojanovic S.: Optimal control for elliptic Volterra-Lotka type equations, J. Math. Anal. Appl. 173 (1993).

21 [33] Lukšan L., Vlček J.; Indefinitely preconditioned inexact Newton method for large sparse equality constrained non linear programming problems, Numer. Linear Algebra Appl. 5 (998), [34] Luenberger D.G.; Linear and Nonlinear Programming, Second Edition, Addison Wesley Publ., Reading MA, 984. [35] G.I. Marchuk: Splitting and Alternating Direction Methods, Handbook of Numerical Analysis (P.G. Ciarlet, J.L. Lions eds.), vol. I, North Holland, Amsterdam, 99, [36] Maurer H., Mittelmann H.D.; Optimization techniques for solving elliptic control problems with control and state constraints: Part. Boundary control, Comput. Optim. Appl. 6 (), [37] Maurer H., Mittelmann H.D.; Optimization techniques for solving elliptic control problems with control and state constraints: Part. Distributed control, Comput. Optim. Appl. 8 (), 4 6. [38] Mittelmann H.D., Maurer H.: Solving elliptic control problems with interior point and SQP methods: control and state constraints, J. Comput. Appl. Math. (), [39] Moré J.J.: A collection of nonlinear model problems, in: Computational Solution of Nonlinear Systems of Equations (E.L. Allgower, K. Georg eds.), Lectures in Applied Mathematics, vol. 6, American Mathematical Society, Providence RI, 99, [4] Murray J.D.: Mathematical Biology, vol. I, II, Springer Verlag, Berlin, 3. [4] Nocedal J., Hribar M.E., Gould N.I.M.; On the solution of equality constarined quadratic programming problems arising in optimization, SIAM J. Sci. Comput. 3 (), [4] Nocedal J., Wright S.J.: Numerical Optimization, Springer, New York, 999. [43] Ortega J.M.: Numerical Analysis: A Second Course, Academic Press, New York, 97. [44] Ortega J.M., Rheinboldt W.C.: Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, 97. [45] Pao C.V.: Monotone iterative methods for finite difference system of reaction diffusion equations, Numer. Math. 
46 (985), [46] Pao C.V.: Nonexistence of global solutions and bifurcation analysis for a boundary value problem of parabolic type, Proc. Am. Math. Soc. 65 (977), [47] Pohozaev S.T.: The Dirichlet problem for the equation u = u, Soviet Math. (96), [48] Rheinboldt W.C.; Methods for Solving Systems of Nonlinear Equations, Second Edition, SIAM, Philadelphia, 998. [49] Sawami H., Niki H.: On iterative methods for solving a semi linear eigenvalue problem, Intern. J. Computer Math. 6 (996), [5] Smith B., Bjorstad P., Gropp W.: Domain Decomposition, Parallel Multilevel Methods for Elliptic Partial Differential Equations, Cambridge University Press, Cambridge, 996. [5] Stojanovic S.; Optimal damping control and nonlinear elliptic systems, SIAM J. Control Optim. 9 (99), [5] Varga R.S.: Matrix Iterative Analysis, Second Edition, Springer, Berlin,. [53] Wright S.J.: Primal Dual Interior Point Methods, SIAM, Philadelphia, 997.

Interior-Point Methods as Inexact Newton Methods. Silvia Bonettini Università di Modena e Reggio Emilia Italy

Interior-Point Methods as Inexact Newton Methods. Silvia Bonettini Università di Modena e Reggio Emilia Italy InteriorPoint Methods as Inexact Newton Methods Silvia Bonettini Università di Modena e Reggio Emilia Italy Valeria Ruggiero Università di Ferrara Emanuele Galligani Università di Modena e Reggio Emilia

More information

IP-PCG An interior point algorithm for nonlinear constrained optimization

IP-PCG An interior point algorithm for nonlinear constrained optimization IP-PCG An interior point algorithm for nonlinear constrained optimization Silvia Bonettini (bntslv@unife.it), Valeria Ruggiero (rgv@unife.it) Dipartimento di Matematica, Università di Ferrara December

More information

A perturbed damped Newton method for large scale constrained optimization

A perturbed damped Newton method for large scale constrained optimization Quaderni del Dipartimento di Matematica, Università di Modena e Reggio Emilia, n. 46, July 00. A perturbed damped Newton method for large scale constrained optimization Emanuele Galligani Dipartimento

More information

MS&E 318 (CME 338) Large-Scale Numerical Optimization

MS&E 318 (CME 338) Large-Scale Numerical Optimization Stanford University, Management Science & Engineering (and ICME) MS&E 318 (CME 338) Large-Scale Numerical Optimization 1 Origins Instructor: Michael Saunders Spring 2015 Notes 9: Augmented Lagrangian Methods

More information

Trust-Region SQP Methods with Inexact Linear System Solves for Large-Scale Optimization

Trust-Region SQP Methods with Inexact Linear System Solves for Large-Scale Optimization Trust-Region SQP Methods with Inexact Linear System Solves for Large-Scale Optimization Denis Ridzal Department of Computational and Applied Mathematics Rice University, Houston, Texas dridzal@caam.rice.edu

More information

Inexact Newton Methods and Nonlinear Constrained Optimization

Inexact Newton Methods and Nonlinear Constrained Optimization Inexact Newton Methods and Nonlinear Constrained Optimization Frank E. Curtis EPSRC Symposium Capstone Conference Warwick Mathematics Institute July 2, 2009 Outline PDE-Constrained Optimization Newton

More information

An Inexact Newton Method for Nonlinear Constrained Optimization

An Inexact Newton Method for Nonlinear Constrained Optimization An Inexact Newton Method for Nonlinear Constrained Optimization Frank E. Curtis Numerical Analysis Seminar, January 23, 2009 Outline Motivation and background Algorithm development and theoretical results

More information

A Robust Preconditioner for the Hessian System in Elliptic Optimal Control Problems

A Robust Preconditioner for the Hessian System in Elliptic Optimal Control Problems A Robust Preconditioner for the Hessian System in Elliptic Optimal Control Problems Etereldes Gonçalves 1, Tarek P. Mathew 1, Markus Sarkis 1,2, and Christian E. Schaerer 1 1 Instituto de Matemática Pura

More information

An Inexact Newton Method for Optimization

An Inexact Newton Method for Optimization New York University Brown Applied Mathematics Seminar, February 10, 2009 Brief biography New York State College of William and Mary (B.S.) Northwestern University (M.S. & Ph.D.) Courant Institute (Postdoc)

More information

THE solution of the absolute value equation (AVE) of

THE solution of the absolute value equation (AVE) of The nonlinear HSS-like iterative method for absolute value equations Mu-Zheng Zhu Member, IAENG, and Ya-E Qi arxiv:1403.7013v4 [math.na] 2 Jan 2018 Abstract Salkuyeh proposed the Picard-HSS iteration method

More information

A SUFFICIENTLY EXACT INEXACT NEWTON STEP BASED ON REUSING MATRIX INFORMATION

A SUFFICIENTLY EXACT INEXACT NEWTON STEP BASED ON REUSING MATRIX INFORMATION A SUFFICIENTLY EXACT INEXACT NEWTON STEP BASED ON REUSING MATRIX INFORMATION Anders FORSGREN Technical Report TRITA-MAT-2009-OS7 Department of Mathematics Royal Institute of Technology November 2009 Abstract

More information

Convergence Analysis of Inexact Infeasible Interior Point Method. for Linear Optimization

Convergence Analysis of Inexact Infeasible Interior Point Method. for Linear Optimization Convergence Analysis of Inexact Infeasible Interior Point Method for Linear Optimization Ghussoun Al-Jeiroudi Jacek Gondzio School of Mathematics The University of Edinburgh Mayfield Road, Edinburgh EH9

More information

On Lagrange multipliers of trust-region subproblems

On Lagrange multipliers of trust-region subproblems On Lagrange multipliers of trust-region subproblems Ladislav Lukšan, Ctirad Matonoha, Jan Vlček Institute of Computer Science AS CR, Prague Programy a algoritmy numerické matematiky 14 1.- 6. června 2008

More information

On Lagrange multipliers of trust region subproblems

On Lagrange multipliers of trust region subproblems On Lagrange multipliers of trust region subproblems Ladislav Lukšan, Ctirad Matonoha, Jan Vlček Institute of Computer Science AS CR, Prague Applied Linear Algebra April 28-30, 2008 Novi Sad, Serbia Outline

More information

Numerical Methods for Large-Scale Nonlinear Equations

Numerical Methods for Large-Scale Nonlinear Equations Slide 1 Numerical Methods for Large-Scale Nonlinear Equations Homer Walker MA 512 April 28, 2005 Inexact Newton and Newton Krylov Methods a. Newton-iterative and inexact Newton methods. Slide 2 i. Formulation

More information

Algorithms for Constrained Optimization

Algorithms for Constrained Optimization 1 / 42 Algorithms for Constrained Optimization ME598/494 Lecture Max Yi Ren Department of Mechanical Engineering, Arizona State University April 19, 2015 2 / 42 Outline 1. Convergence 2. Sequential quadratic

More information

A nonmonotone semismooth inexact Newton method

A nonmonotone semismooth inexact Newton method A nonmonotone semismooth inexact Newton method SILVIA BONETTINI, FEDERICA TINTI Dipartimento di Matematica, Università di Modena e Reggio Emilia, Italy Abstract In this work we propose a variant of the

More information

5 Handling Constraints

5 Handling Constraints 5 Handling Constraints Engineering design optimization problems are very rarely unconstrained. Moreover, the constraints that appear in these problems are typically nonlinear. This motivates our interest

More information

Spectral gradient projection method for solving nonlinear monotone equations

Spectral gradient projection method for solving nonlinear monotone equations Journal of Computational and Applied Mathematics 196 (2006) 478 484 www.elsevier.com/locate/cam Spectral gradient projection method for solving nonlinear monotone equations Li Zhang, Weijun Zhou Department

More information

A Trust-region-based Sequential Quadratic Programming Algorithm

A Trust-region-based Sequential Quadratic Programming Algorithm Downloaded from orbit.dtu.dk on: Oct 19, 2018 A Trust-region-based Sequential Quadratic Programming Algorithm Henriksen, Lars Christian; Poulsen, Niels Kjølstad Publication date: 2010 Document Version

More information

1.The anisotropic plate model

1.The anisotropic plate model Pré-Publicações do Departamento de Matemática Universidade de Coimbra Preprint Number 6 OPTIMAL CONTROL OF PIEZOELECTRIC ANISOTROPIC PLATES ISABEL NARRA FIGUEIREDO AND GEORG STADLER ABSTRACT: This paper

More information

Linear algebra issues in Interior Point methods for bound-constrained least-squares problems

Linear algebra issues in Interior Point methods for bound-constrained least-squares problems Linear algebra issues in Interior Point methods for bound-constrained least-squares problems Stefania Bellavia Dipartimento di Energetica S. Stecco Università degli Studi di Firenze Joint work with Jacek

More information

Numerical Methods for PDE-Constrained Optimization

Numerical Methods for PDE-Constrained Optimization Numerical Methods for PDE-Constrained Optimization Richard H. Byrd 1 Frank E. Curtis 2 Jorge Nocedal 2 1 University of Colorado at Boulder 2 Northwestern University Courant Institute of Mathematical Sciences,

More information

Infeasibility Detection and an Inexact Active-Set Method for Large-Scale Nonlinear Optimization

Infeasibility Detection and an Inexact Active-Set Method for Large-Scale Nonlinear Optimization Infeasibility Detection and an Inexact Active-Set Method for Large-Scale Nonlinear Optimization Frank E. Curtis, Lehigh University involving joint work with James V. Burke, University of Washington Daniel

More information

Constrained Optimization

Constrained Optimization 1 / 22 Constrained Optimization ME598/494 Lecture Max Yi Ren Department of Mechanical Engineering, Arizona State University March 30, 2015 2 / 22 1. Equality constraints only 1.1 Reduced gradient 1.2 Lagrange

More information

AM 205: lecture 19. Last time: Conditions for optimality Today: Newton s method for optimization, survey of optimization methods

AM 205: lecture 19. Last time: Conditions for optimality Today: Newton s method for optimization, survey of optimization methods AM 205: lecture 19 Last time: Conditions for optimality Today: Newton s method for optimization, survey of optimization methods Optimality Conditions: Equality Constrained Case As another example of equality

More information

Integration of Sequential Quadratic Programming and Domain Decomposition Methods for Nonlinear Optimal Control Problems

Integration of Sequential Quadratic Programming and Domain Decomposition Methods for Nonlinear Optimal Control Problems Integration of Sequential Quadratic Programming and Domain Decomposition Methods for Nonlinear Optimal Control Problems Matthias Heinkenschloss 1 and Denis Ridzal 2 1 Department of Computational and Applied

More information

INCOMPLETE FACTORIZATION CONSTRAINT PRECONDITIONERS FOR SADDLE-POINT MATRICES

INCOMPLETE FACTORIZATION CONSTRAINT PRECONDITIONERS FOR SADDLE-POINT MATRICES INCOMPLEE FACORIZAION CONSRAIN PRECONDIIONERS FOR SADDLE-POIN MARICES H. S. DOLLAR AND A. J. WAHEN Abstract. We consider the application of the conjugate gradient method to the solution of large symmetric,

More information

Constrained Optimization and Lagrangian Duality

Constrained Optimization and Lagrangian Duality CIS 520: Machine Learning Oct 02, 2017 Constrained Optimization and Lagrangian Duality Lecturer: Shivani Agarwal Disclaimer: These notes are designed to be a supplement to the lecture. They may or may

More information

Some Inexact Hybrid Proximal Augmented Lagrangian Algorithms

Some Inexact Hybrid Proximal Augmented Lagrangian Algorithms Some Inexact Hybrid Proximal Augmented Lagrangian Algorithms Carlos Humes Jr. a, Benar F. Svaiter b, Paulo J. S. Silva a, a Dept. of Computer Science, University of São Paulo, Brazil Email: {humes,rsilva}@ime.usp.br

More information

Primal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization

Primal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization Primal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization Roger Behling a, Clovis Gonzaga b and Gabriel Haeser c March 21, 2013 a Department

More information

Hestenes Method for Symmetric Indefinite Systems in Interior Point Method

Hestenes Method for Symmetric Indefinite Systems in Interior Point Method Quaderni del Dipartimento di Matematica, Università di Modena e Reggio Emilia, n., April 23. Hestenes Method for Symmetric Indefinite Systems in Interior Point Method Silvia Bonettini, Emanuele Galligani,

More information

PDE-Constrained and Nonsmooth Optimization

PDE-Constrained and Nonsmooth Optimization Frank E. Curtis October 1, 2009 Outline PDE-Constrained Optimization Introduction Newton s method Inexactness Results Summary and future work Nonsmooth Optimization Sequential quadratic programming (SQP)

More information

An Accelerated Block-Parallel Newton Method via Overlapped Partitioning

An Accelerated Block-Parallel Newton Method via Overlapped Partitioning An Accelerated Block-Parallel Newton Method via Overlapped Partitioning Yurong Chen Lab. of Parallel Computing, Institute of Software, CAS (http://www.rdcps.ac.cn/~ychen/english.htm) Summary. This paper

More information

Numerical optimization

Numerical optimization Numerical optimization Lecture 4 Alexander & Michael Bronstein tosca.cs.technion.ac.il/book Numerical geometry of non-rigid shapes Stanford University, Winter 2009 2 Longest Slowest Shortest Minimal Maximal

More information

Part 5: Penalty and augmented Lagrangian methods for equality constrained optimization. Nick Gould (RAL)

Part 5: Penalty and augmented Lagrangian methods for equality constrained optimization. Nick Gould (RAL) Part 5: Penalty and augmented Lagrangian methods for equality constrained optimization Nick Gould (RAL) x IR n f(x) subject to c(x) = Part C course on continuoue optimization CONSTRAINED MINIMIZATION x

More information

SF2822 Applied Nonlinear Optimization. Preparatory question. Lecture 9: Sequential quadratic programming. Anders Forsgren

SF2822 Applied Nonlinear Optimization. Preparatory question. Lecture 9: Sequential quadratic programming. Anders Forsgren SF2822 Applied Nonlinear Optimization Lecture 9: Sequential quadratic programming Anders Forsgren SF2822 Applied Nonlinear Optimization, KTH / 24 Lecture 9, 207/208 Preparatory question. Try to solve theory

More information

DELFT UNIVERSITY OF TECHNOLOGY

DELFT UNIVERSITY OF TECHNOLOGY DELFT UNIVERSITY OF TECHNOLOGY REPORT 16-02 The Induced Dimension Reduction method applied to convection-diffusion-reaction problems R. Astudillo and M. B. van Gijzen ISSN 1389-6520 Reports of the Delft

More information

Lecture 11 and 12: Penalty methods and augmented Lagrangian methods for nonlinear programming

Lecture 11 and 12: Penalty methods and augmented Lagrangian methods for nonlinear programming Lecture 11 and 12: Penalty methods and augmented Lagrangian methods for nonlinear programming Coralia Cartis, Mathematical Institute, University of Oxford C6.2/B2: Continuous Optimization Lecture 11 and

More information

An Inexact Sequential Quadratic Optimization Method for Nonlinear Optimization

An Inexact Sequential Quadratic Optimization Method for Nonlinear Optimization An Inexact Sequential Quadratic Optimization Method for Nonlinear Optimization Frank E. Curtis, Lehigh University involving joint work with Travis Johnson, Northwestern University Daniel P. Robinson, Johns

More information

Structural and Multidisciplinary Optimization. P. Duysinx and P. Tossings

Structural and Multidisciplinary Optimization. P. Duysinx and P. Tossings Structural and Multidisciplinary Optimization P. Duysinx and P. Tossings 2018-2019 CONTACTS Pierre Duysinx Institut de Mécanique et du Génie Civil (B52/3) Phone number: 04/366.91.94 Email: P.Duysinx@uliege.be

More information

Research Note. A New Infeasible Interior-Point Algorithm with Full Nesterov-Todd Step for Semi-Definite Optimization

Research Note. A New Infeasible Interior-Point Algorithm with Full Nesterov-Todd Step for Semi-Definite Optimization Iranian Journal of Operations Research Vol. 4, No. 1, 2013, pp. 88-107 Research Note A New Infeasible Interior-Point Algorithm with Full Nesterov-Todd Step for Semi-Definite Optimization B. Kheirfam We

More information

What s New in Active-Set Methods for Nonlinear Optimization?

What s New in Active-Set Methods for Nonlinear Optimization? What s New in Active-Set Methods for Nonlinear Optimization? Philip E. Gill Advances in Numerical Computation, Manchester University, July 5, 2011 A Workshop in Honor of Sven Hammarling UCSD Center for

More information

Optimization Tutorial 1. Basic Gradient Descent

Optimization Tutorial 1. Basic Gradient Descent E0 270 Machine Learning Jan 16, 2015 Optimization Tutorial 1 Basic Gradient Descent Lecture by Harikrishna Narasimhan Note: This tutorial shall assume background in elementary calculus and linear algebra.

More information

RESEARCH ARTICLE. A strategy of finding an initial active set for inequality constrained quadratic programming problems

RESEARCH ARTICLE. A strategy of finding an initial active set for inequality constrained quadratic programming problems Optimization Methods and Software Vol. 00, No. 00, July 200, 8 RESEARCH ARTICLE A strategy of finding an initial active set for inequality constrained quadratic programming problems Jungho Lee Computer

More information

AM 205: lecture 19. Last time: Conditions for optimality, Newton s method for optimization Today: survey of optimization methods

AM 205: lecture 19. Last time: Conditions for optimality, Newton s method for optimization Today: survey of optimization methods AM 205: lecture 19 Last time: Conditions for optimality, Newton s method for optimization Today: survey of optimization methods Quasi-Newton Methods General form of quasi-newton methods: x k+1 = x k α

More information

A derivative-free nonmonotone line search and its application to the spectral residual method

A derivative-free nonmonotone line search and its application to the spectral residual method IMA Journal of Numerical Analysis (2009) 29, 814 825 doi:10.1093/imanum/drn019 Advance Access publication on November 14, 2008 A derivative-free nonmonotone line search and its application to the spectral

More information

QUADERNI. del. Dipartimento di Matematica. Ezio Venturino, Peter R. Graves-Moris & Alessandra De Rossi

QUADERNI. del. Dipartimento di Matematica. Ezio Venturino, Peter R. Graves-Moris & Alessandra De Rossi UNIVERSITÀ DI TORINO QUADERNI del Dipartimento di Matematica Ezio Venturino, Peter R. Graves-Moris & Alessandra De Rossi AN ADAPTATION OF BICGSTAB FOR NONLINEAR BIOLOGICAL SYSTEMS QUADERNO N. 17/2005 Here

More information

Part 4: Active-set methods for linearly constrained optimization. Nick Gould (RAL)

Part 4: Active-set methods for linearly constrained optimization. Nick Gould (RAL) Part 4: Active-set methods for linearly constrained optimization Nick Gould RAL fx subject to Ax b Part C course on continuoue optimization LINEARLY CONSTRAINED MINIMIZATION fx subject to Ax { } b where

More information

Penalty and Barrier Methods General classical constrained minimization problem minimize f(x) subject to g(x) 0 h(x) =0 Penalty methods are motivated by the desire to use unconstrained optimization techniques

More information

Numerical optimization. Numerical optimization. Longest Shortest where Maximal Minimal. Fastest. Largest. Optimization problems

Numerical optimization. Numerical optimization. Longest Shortest where Maximal Minimal. Fastest. Largest. Optimization problems 1 Numerical optimization Alexander & Michael Bronstein, 2006-2009 Michael Bronstein, 2010 tosca.cs.technion.ac.il/book Numerical optimization 048921 Advanced topics in vision Processing and Analysis of

More information

Fast Iterative Solution of Saddle Point Problems

Fast Iterative Solution of Saddle Point Problems Michele Benzi Department of Mathematics and Computer Science Emory University Atlanta, GA Acknowledgments NSF (Computational Mathematics) Maxim Olshanskii (Mech-Math, Moscow State U.) Zhen Wang (PhD student,

More information

CS-E4830 Kernel Methods in Machine Learning

CS-E4830 Kernel Methods in Machine Learning CS-E4830 Kernel Methods in Machine Learning Lecture 3: Convex optimization and duality Juho Rousu 27. September, 2017 Juho Rousu 27. September, 2017 1 / 45 Convex optimization Convex optimisation This

More information

An Active Set Strategy for Solving Optimization Problems with up to 200,000,000 Nonlinear Constraints

An Active Set Strategy for Solving Optimization Problems with up to 200,000,000 Nonlinear Constraints An Active Set Strategy for Solving Optimization Problems with up to 200,000,000 Nonlinear Constraints Klaus Schittkowski Department of Computer Science, University of Bayreuth 95440 Bayreuth, Germany e-mail:

More information

Improved Damped Quasi-Newton Methods for Unconstrained Optimization

Improved Damped Quasi-Newton Methods for Unconstrained Optimization Improved Damped Quasi-Newton Methods for Unconstrained Optimization Mehiddin Al-Baali and Lucio Grandinetti August 2015 Abstract Recently, Al-Baali (2014) has extended the damped-technique in the modified

More information

Large-Scale Nonlinear Optimization with Inexact Step Computations

Large-Scale Nonlinear Optimization with Inexact Step Computations Large-Scale Nonlinear Optimization with Inexact Step Computations Andreas Wächter IBM T.J. Watson Research Center Yorktown Heights, New York andreasw@us.ibm.com IPAM Workshop on Numerical Methods for Continuous

More information

A PROJECTED HESSIAN GAUSS-NEWTON ALGORITHM FOR SOLVING SYSTEMS OF NONLINEAR EQUATIONS AND INEQUALITIES

A PROJECTED HESSIAN GAUSS-NEWTON ALGORITHM FOR SOLVING SYSTEMS OF NONLINEAR EQUATIONS AND INEQUALITIES IJMMS 25:6 2001) 397 409 PII. S0161171201002290 http://ijmms.hindawi.com Hindawi Publishing Corp. A PROJECTED HESSIAN GAUSS-NEWTON ALGORITHM FOR SOLVING SYSTEMS OF NONLINEAR EQUATIONS AND INEQUALITIES

More information

The Squared Slacks Transformation in Nonlinear Programming

The Squared Slacks Transformation in Nonlinear Programming Technical Report No. n + P. Armand D. Orban The Squared Slacks Transformation in Nonlinear Programming August 29, 2007 Abstract. We recall the use of squared slacks used to transform inequality constraints

More information

IPAM Summer School Optimization methods for machine learning. Jorge Nocedal

IPAM Summer School Optimization methods for machine learning. Jorge Nocedal IPAM Summer School 2012 Tutorial on Optimization methods for machine learning Jorge Nocedal Northwestern University Overview 1. We discuss some characteristics of optimization problems arising in deep

More information

Incomplete LU Preconditioning and Error Compensation Strategies for Sparse Matrices

Incomplete LU Preconditioning and Error Compensation Strategies for Sparse Matrices Incomplete LU Preconditioning and Error Compensation Strategies for Sparse Matrices Eun-Joo Lee Department of Computer Science, East Stroudsburg University of Pennsylvania, 327 Science and Technology Center,

More information

Lecture Note 7: Iterative methods for solving linear systems. Xiaoqun Zhang Shanghai Jiao Tong University

Lecture Note 7: Iterative methods for solving linear systems. Xiaoqun Zhang Shanghai Jiao Tong University Lecture Note 7: Iterative methods for solving linear systems Xiaoqun Zhang Shanghai Jiao Tong University Last updated: December 24, 2014 1.1 Review on linear algebra Norms of vectors and matrices vector

More information

An Introduction to Algebraic Multigrid (AMG) Algorithms Derrick Cerwinsky and Craig C. Douglas 1/84

An Introduction to Algebraic Multigrid (AMG) Algorithms Derrick Cerwinsky and Craig C. Douglas 1/84 An Introduction to Algebraic Multigrid (AMG) Algorithms Derrick Cerwinsky and Craig C. Douglas 1/84 Introduction Almost all numerical methods for solving PDEs will at some point be reduced to solving A

More information

LINEAR AND NONLINEAR PROGRAMMING

LINEAR AND NONLINEAR PROGRAMMING LINEAR AND NONLINEAR PROGRAMMING Stephen G. Nash and Ariela Sofer George Mason University The McGraw-Hill Companies, Inc. New York St. Louis San Francisco Auckland Bogota Caracas Lisbon London Madrid Mexico

More information

Optimization Techniques for Solving Elliptic Control Problems with Control and State Constraints. Part 2: Distributed Control

Optimization Techniques for Solving Elliptic Control Problems with Control and State Constraints. Part 2: Distributed Control Computational Optimization and Applications, 18, 141 160, 2001 c 2001 Kluwer Academic Publishers. Manufactured in The Netherlands. Optimization Techniques for Solving Elliptic Control Problems with Control

More information

Pacific Journal of Optimization (Vol. 2, No. 3, September 2006) ABSTRACT

Pacific Journal of Optimization (Vol. 2, No. 3, September 2006) ABSTRACT Pacific Journal of Optimization Vol., No. 3, September 006) PRIMAL ERROR BOUNDS BASED ON THE AUGMENTED LAGRANGIAN AND LAGRANGIAN RELAXATION ALGORITHMS A. F. Izmailov and M. V. Solodov ABSTRACT For a given

More information

SOR- and Jacobi-type Iterative Methods for Solving l 1 -l 2 Problems by Way of Fenchel Duality 1

SOR- and Jacobi-type Iterative Methods for Solving l 1 -l 2 Problems by Way of Fenchel Duality 1 SOR- and Jacobi-type Iterative Methods for Solving l 1 -l 2 Problems by Way of Fenchel Duality 1 Masao Fukushima 2 July 17 2010; revised February 4 2011 Abstract We present an SOR-type algorithm and a

More information

Multigrid absolute value preconditioning

Multigrid absolute value preconditioning Multigrid absolute value preconditioning Eugene Vecharynski 1 Andrew Knyazev 2 (speaker) 1 Department of Computer Science and Engineering University of Minnesota 2 Department of Mathematical and Statistical

More information

Newton s Method and Efficient, Robust Variants

Newton s Method and Efficient, Robust Variants Newton s Method and Efficient, Robust Variants Philipp Birken University of Kassel (SFB/TRR 30) Soon: University of Lund October 7th 2013 Efficient solution of large systems of non-linear PDEs in science

More information

Numerisches Rechnen. (für Informatiker) M. Grepl P. Esser & G. Welper & L. Zhang. Institut für Geometrie und Praktische Mathematik RWTH Aachen

Numerisches Rechnen. (für Informatiker) M. Grepl P. Esser & G. Welper & L. Zhang. Institut für Geometrie und Praktische Mathematik RWTH Aachen Numerisches Rechnen (für Informatiker) M. Grepl P. Esser & G. Welper & L. Zhang Institut für Geometrie und Praktische Mathematik RWTH Aachen Wintersemester 2011/12 IGPM, RWTH Aachen Numerisches Rechnen

More information

A PRIMAL-DUAL TRUST REGION ALGORITHM FOR NONLINEAR OPTIMIZATION

A PRIMAL-DUAL TRUST REGION ALGORITHM FOR NONLINEAR OPTIMIZATION Optimization Technical Report 02-09, October 2002, UW-Madison Computer Sciences Department. E. Michael Gertz 1 Philip E. Gill 2 A PRIMAL-DUAL TRUST REGION ALGORITHM FOR NONLINEAR OPTIMIZATION 7 October

More information

THE restructuring of the power industry has lead to

THE restructuring of the power industry has lead to GLOBALLY CONVERGENT OPTIMAL POWER FLOW USING COMPLEMENTARITY FUNCTIONS AND TRUST REGION METHODS Geraldo L. Torres Universidade Federal de Pernambuco Recife, Brazil gltorres@ieee.org Abstract - As power

More information

Newton-type Methods for Solving the Nonsmooth Equations with Finitely Many Maximum Functions

Newton-type Methods for Solving the Nonsmooth Equations with Finitely Many Maximum Functions 260 Journal of Advances in Applied Mathematics, Vol. 1, No. 4, October 2016 https://dx.doi.org/10.22606/jaam.2016.14006 Newton-type Methods for Solving the Nonsmooth Equations with Finitely Many Maximum

More information

A new ane scaling interior point algorithm for nonlinear optimization subject to linear equality and inequality constraints

A new ane scaling interior point algorithm for nonlinear optimization subject to linear equality and inequality constraints Journal of Computational and Applied Mathematics 161 (003) 1 5 www.elsevier.com/locate/cam A new ane scaling interior point algorithm for nonlinear optimization subject to linear equality and inequality

More information

REPORTS IN INFORMATICS

REPORTS IN INFORMATICS REPORTS IN INFORMATICS ISSN 0333-3590 A class of Methods Combining L-BFGS and Truncated Newton Lennart Frimannslund Trond Steihaug REPORT NO 319 April 2006 Department of Informatics UNIVERSITY OF BERGEN

More information

Local Analysis of the Feasible Primal-Dual Interior-Point Method

Local Analysis of the Feasible Primal-Dual Interior-Point Method Local Analysis of the Feasible Primal-Dual Interior-Point Method R. Silva J. Soares L. N. Vicente Abstract In this paper we analyze the rate of local convergence of the Newton primal-dual interiorpoint

More information

1. Introduction. In this paper we discuss an algorithm for equality constrained optimization problems of the form. f(x) s.t.

1. Introduction. In this paper we discuss an algorithm for equality constrained optimization problems of the form. f(x) s.t. AN INEXACT SQP METHOD FOR EQUALITY CONSTRAINED OPTIMIZATION RICHARD H. BYRD, FRANK E. CURTIS, AND JORGE NOCEDAL Abstract. We present an algorithm for large-scale equality constrained optimization. The

More information

Termination criteria for inexact fixed point methods

Termination criteria for inexact fixed point methods Termination criteria for inexact fixed point methods Philipp Birken 1 October 1, 2013 1 Institute of Mathematics, University of Kassel, Heinrich-Plett-Str. 40, D-34132 Kassel, Germany Department of Mathematics/Computer

More information

On the Local Quadratic Convergence of the Primal-Dual Augmented Lagrangian Method. Roman A. Polyak, Department of SEOR and Mathematical Sciences. Optimization Methods and Software (preprint), pp. 1–11.

The Nearest Doubly Stochastic Matrix to a Real Matrix with the same First Moment. William Glunt (Department of Mathematics and Computer Science, Austin Peay State University), Thomas L. Hayden and Robert Reams.

Dynamics in 3-Species Predator-Prey Models with Time Delays. Wei Feng, Mathematics and Statistics Department. Discrete and Continuous Dynamical Systems, Supplement 2007 (www.aimsciences.org).

Lecture Notes on Elementary Numerical Methods. Eusebius Doedel. Contents include vector and matrix norms, the Banach lemma, the numerical solution of linear systems, Gauss elimination and operation counts.

Robust Optimal Power Flow Solution Using Trust Region and Interior-Point Methods. Andréa A. Sousa, Geraldo L. Torres and Claudio A. Cañizares. To appear in IEEE Transactions on Power Systems.

12. Interior-point methods. Convex Optimization, Boyd and Vandenberghe. Covers inequality constrained minimization, the logarithmic barrier function and central path, the barrier method, feasibility and phase I methods, and complexity.

Lecture 15: Newton Method and Self-Concordance. October 23, 2008. Covers the notion of self-concordance, self-concordant functions, operations preserving self-concordance, and properties and implications of self-concordant functions.

Adaptive Accuracy Control of Nonlinear Newton-Krylov Methods for Multiscale Integrated Hydrologic Models. XIX International Conference on Water Resources (CMWR 2012), University of Illinois at Urbana-Champaign, June 17-22, 2012.

Interior Point Algorithms for Constrained Convex Optimization. Chee Wei Tan. CS 8292: Advanced Topics in Convex Optimization and its Applications, Fall 2010. Covers inequality constrained minimization problems.

Computational Finance: Optimization Techniques [Lecture 2]. Michael Holst, Department of Mathematics, University of California, San Diego. January 9, 2017.

Written examination in Optimization. Division of Scientific Computing, Department of Information Technology, Uppsala University. Allowed tools: pocket calculator, one A4 sheet of notes.

Convex Optimization: Newton's method. ENSAE: Optimisation. Unconstrained minimization of a convex, twice continuously differentiable f (hence dom f open), with optimal value p* = inf_x f(x).

Solving Symmetric Indefinite Systems with Symmetric Positive Definite Preconditioners. Eugene Vecharynski (Department of Computer Science and Engineering, University of Minnesota) and Andrew Knyazev.

Computational Optimization: Constrained Optimization, Part 2. Covers optimality conditions (unconstrained case: global minima for convex f, local minima via second-order necessary and sufficient conditions) and the easiest problem class, linear equality constraints.

A short introduction to Krylov subspaces for linear systems, matrix functions and inexact Newton methods: plan and exercises. M. A. Botchev, September 5, 2014. Rome-Moscow School of Matrix Methods and Applied Linear Algebra 2014.

Multigrid method for solving convection-diffusion problems with dominant convection. Journal of Computational and Applied Mathematics 226 (2009) 77-83.

A Brief Review on Convex Optimization. A set S ⊆ R^n is convex if for all x, y ∈ S and λ, µ ≥ 0 with λ + µ = 1, λx + µy ∈ S; geometrically, the line segment through any x, y ∈ S lies in S.

Approximate Gauss-Newton Methods for Nonlinear Least Squares Problems. S. Gratton, A. S. Lawless and N. K. Nichols. SIAM J. Optim., Vol. 18, No. 1, pp. 106-132, © 2007 Society for Industrial and Applied Mathematics.

Line Search Methods for Unconstrained Optimisation. Lecture 8, Numerical Linear Algebra and Optimisation, Oxford University Computing Laboratory, MT 2007. Dr Raphael Hauser (hauser@comlab.ox.ac.uk).

Overlapping Schwarz preconditioners for Fekete spectral elements. R. Pasquetti, L. F. Pavarino, F. Rapetti and E. Zampieri. Laboratoire J.-A. Dieudonné, CNRS & Université de Nice et Sophia-Antipolis.

Numerical Solutions to Partial Differential Equations. Zhiping Li, LMAM and School of Mathematical Sciences, Peking University. Numerical methods for partial differential equations: finite difference methods.

ISM206 Lecture: Optimization of Nonlinear Objective with Linear Constraints. Instructor: Prof. Kevin Ross; scribe: Nitish John. October 18, 2011.

Structured Preconditioners for Saddle Point Problems. V. Simoncini, Dipartimento di Matematica, Università di Bologna (valeria@dm.unibo.it). Collaborators: Mario Arioli (RAL, UK) and Michele Benzi.