Optimal control problems with PDE constraints

1 Optimal control problems with PDE constraints Maya Neytcheva CIM, October 2017

2 General framework

3 Unconstrained optimization problems: $\min_{q \in \mathbb{R}^n} f(q)$, where $q \in \mathbb{R}^n$ (a real vector) and $f : \mathbb{R}^n \to \mathbb{R}$ is a smooth function. Definition: A point $q^*$ is a global minimizer of $f$ if $f(q^*) \le f(q)$ for all $q \in \mathbb{R}^n$. Definition: A point $q^*$ is a (strict) local minimizer of $f$ if there exists a neighbourhood $E$ of $q^*$ such that $f(q^*) \le f(q)$ for all $q \in E$ (with strict inequality $f(q^*) < f(q)$ for $q \ne q^*$ in the strict case).

4 Taylor's theorem: Let $f : \mathbb{R}^n \to \mathbb{R}$ be continuously differentiable and $p \in \mathbb{R}^n$. Then $f(q + p) = f(q) + \nabla f(q + \varepsilon p)^T p$ for some $\varepsilon \in (0, 1)$. If $f$ is twice continuously differentiable, then $\nabla f(q + p) = \nabla f(q) + \int_0^1 \nabla^2 f(q + \varepsilon p)\, p \, d\varepsilon$ and $f(q + p) = f(q) + \nabla f(q)^T p + \frac{1}{2}\, p^T \nabla^2 f(q + \varepsilon p)\, p$ for some $\varepsilon \in (0, 1)$.
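As a quick numerical sanity check of the last expansion (my own sketch; the test function and points are arbitrary choices, not from the slides), the error of the quadratic model with the Hessian frozen at $q$ should decay like $O(t^3)$ along a fixed direction $p$:

```python
import numpy as np

# Arbitrary smooth test function with hand-coded gradient and Hessian.
f = lambda q: np.exp(q[0]) + q[0] * q[1]**2
grad = lambda q: np.array([np.exp(q[0]) + q[1]**2, 2 * q[0] * q[1]])
hess = lambda q: np.array([[np.exp(q[0]), 2 * q[1]],
                           [2 * q[1],     2 * q[0]]])

q, p = np.array([0.3, -0.7]), np.array([1.0, 0.5])
for t in [1e-1, 1e-2, 1e-3]:
    # quadratic Taylor model with the Hessian evaluated at q itself
    model = f(q) + t * (grad(q) @ p) + 0.5 * t**2 * (p @ hess(q) @ p)
    print(t, abs(f(q + t * p) - model))  # error decays roughly like t**3
```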

5 Necessary and sufficient conditions for optimality. Theorem [First-order necessary condition]: If $q^*$ is a local minimizer and $f$ is continuously differentiable in an open neighbourhood of $q^*$, then $\nabla f(q^*) = 0$. Theorem [Second-order necessary condition]: If $q^*$ is a local minimizer and $\nabla^2 f$ is continuous in an open neighbourhood of $q^*$, then $\nabla f(q^*) = 0$ and $\nabla^2 f(q^*)$ is positive semidefinite.

6 Theorem [Second-order sufficient condition]: Let (i) $\nabla^2 f$ be continuous in an open neighbourhood of $q^*$, (ii) $\nabla f(q^*) = 0$ and (iii) $\nabla^2 f(q^*)$ be positive definite. Then $q^*$ is a strict local minimizer of $f$. Theorem: When $f$ is convex, any local minimizer is a global minimizer. If in addition $f$ is differentiable, then any stationary point $q^*$ is a global minimizer. (A stationary point is any point where the gradient is zero.)

7 Recall: Line search. We are after a direction vector $d$ along which $f$ decreases. Descent direction $d$: $d^T \nabla f(q) < 0$. Then seek a step length $\alpha > 0$ to construct $q_{\text{new}} = q + \alpha d$, where $\alpha$ solves $\min_{\alpha > 0} f(q + \alpha d)$. Iterate (put a subscript $k$).

8 Recall: Depending on the choice of the search direction $d_k$, the following methods are mostly used: Steepest descent: $d_k = -\nabla f(q_k)$ (linear convergence). Newton's method: $d_k = -\nabla^2 f(q_k)^{-1} \nabla f(q_k)$ (quadratic convergence). Quasi-Newton method: $d_k = -B_k^{-1} \nabla f(q_k)$ (superlinear convergence). Here $B_k$ is an approximation of the Hessian. A minimal steepest-descent sketch with a backtracking line search follows below.
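This is my own illustration (the slides only name the methods; the Armijo backtracking rule and the quadratic test problem are assumptions of this sketch):

```python
import numpy as np

def steepest_descent(f, grad, q0, rho=0.5, c=1e-4, tol=1e-8, max_iter=500):
    """Steepest descent d_k = -grad f(q_k) with Armijo backtracking."""
    q = np.asarray(q0, dtype=float)
    for _ in range(max_iter):
        g = grad(q)
        if np.linalg.norm(g) < tol:
            break
        d = -g                      # descent direction: d^T grad f < 0
        alpha = 1.0
        # backtrack until the sufficient-decrease (Armijo) condition holds
        while f(q + alpha * d) > f(q) + c * alpha * (g @ d):
            alpha *= rho
        q = q + alpha * d
    return q

# Example: quadratic f(q) = 0.5 q^T A q - b^T q; the minimizer solves A q = b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
qmin = steepest_descent(lambda q: 0.5 * (q @ A @ q) - b @ q,
                        lambda q: A @ q - b, np.zeros(2))
print(qmin, np.linalg.solve(A, b))  # the two should agree
```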

9 Example of an unconstrained problem from 1788, I. An excellent article: Martin Gander, Felix Kwok, Gerhard Wanner, Constrained optimization: From Lagrangian mechanics to optimal control and PDE constraints. In: R. Hoppe (ed.), Optimization with PDE Constraints, Lecture Notes in Computational Science and Engineering, vol. 101, Springer, 2014.

10 Example of an unconstrained problem from 1788, II. Example: Bernoulli's règle, discussed by Lagrange in 1788: a point mass, attached to three forces. The forces P, Q, R cause infinitesimally small displacements dp, dq, dr.

11 Example of an unconstrained problem from 1788, III. As a condition of equilibrium we get $P\,dp + Q\,dq + R\,dr = 0$. The terms are called energies by Bernoulli and moments by Lagrange. Important: this is an unconstrained problem and the displacements are independent of each other!

12 Example of an unconstrained problem from 1788, IV. With respect to a Cartesian coordinate system, introduce
$p = \sqrt{(x - x_1)^2 + (y - y_1)^2 + (z - z_1)^2}$,
$q = \sqrt{(x - x_2)^2 + (y - y_2)^2 + (z - z_2)^2}$,
$r = \sqrt{(x - x_3)^2 + (y - y_3)^2 + (z - z_3)^2}$,
so that
$dp = \frac{1}{p}\big((x - x_1)dx + (y - y_1)dy + (z - z_1)dz\big)$,
$dq = \frac{1}{q}\big((x - x_2)dx + (y - y_2)dy + (z - z_2)dz\big)$,
$dr = \frac{1}{r}\big((x - x_3)dx + (y - y_3)dy + (z - z_3)dz\big)$.
The equilibrium condition becomes $X\,dx + Y\,dy + Z\,dz = 0$ with $X = P\,\frac{x - x_1}{p} + Q\,\frac{x - x_2}{q} + R\,\frac{x - x_3}{r}$, and analogously for $Y$ and $Z$. As the point mass is free to take any position in space, $dx, dy, dz$ are independent. Thus, the condition of equilibrium becomes $X = 0$, $Y = 0$, $Z = 0$.
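A one-line symbolic check (a sympy sketch, not part of the original slides) confirms the coefficient of $dx$ in the differential $dp$:

```python
import sympy as sp

x, y, z, x1, y1, z1 = sp.symbols('x y z x1 y1 z1', real=True)
p = sp.sqrt((x - x1)**2 + (y - y1)**2 + (z - z1)**2)
# The total differential dp = p_x dx + p_y dy + p_z dz; its dx-coefficient
# should equal (x - x1)/p, as claimed on the slide.
print(sp.simplify(sp.diff(p, x) - (x - x1) / p))  # prints 0
```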

13 Constrained optimization problems I: $\min_{q,y} f(q, y)$ subject to $c(q, y) = 0$, where $q \in \mathbb{R}^m$, $y \in \mathbb{R}^n$, $f : \mathbb{R}^{m+n} \to \mathbb{R}$, $c : \mathbb{R}^{m+n} \to \mathbb{R}^n$. In general the constraint is a nonlinear equation. Many sources state at this point the following: the Lagrangian functional for the above problem is defined as $\mathcal{L}(q, y, \lambda) = f(q, y) + \lambda^T c(q, y)$, where $\lambda \in \mathbb{R}^n$ is the vector of Lagrange multipliers, or the adjoint variables.

14 Constrained optimization problems II. Next comes the statement: the necessary optimality condition leads to the following system of equations to be solved:
$\partial \mathcal{L}/\partial y = f_y + \lambda^T c_y = 0$,
$\partial \mathcal{L}/\partial q = f_q + \lambda^T c_q = 0$,
$\partial \mathcal{L}/\partial \lambda = c(q, y) = 0$.
But what is the Lagrange multiplier, and why is the Lagrangian constructed this way?!
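To make the system concrete, a tiny instance (with example data chosen here, not from the slides) can be solved symbolically from exactly these three stationarity conditions:

```python
import sympy as sp

# Illustrative instance: minimize f = q^2 + y^2 subject to c = q + y - 1 = 0.
q, y, lam = sp.symbols('q y lam', real=True)
f = q**2 + y**2
c = q + y - 1
L = f + lam * c                              # the Lagrangian
sol = sp.solve([sp.diff(L, q), sp.diff(L, y), sp.diff(L, lam)], [q, y, lam])
print(sol)                                   # {q: 1/2, y: 1/2, lam: -1}
```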

15 Example of a constrained problem from 1788, I. The discovery of the Lagrange multiplier method. Constraint: the point mass should not leave the surface! The displacements are NOT independent of each other.

16 Example of a constrained problem from 1788, II. The displacements are restricted to the tangent space of $L = 0$, i.e., $dL = \frac{\partial L}{\partial x}\,dx + \frac{\partial L}{\partial y}\,dy + \frac{\partial L}{\partial z}\,dz = 0$. Thus, the vectors $[\frac{\partial L}{\partial x}, \frac{\partial L}{\partial y}, \frac{\partial L}{\partial z}]$ and $[dx, dy, dz]$ are orthogonal. However, the balance equation must also be satisfied: $X\,dx + Y\,dy + Z\,dz = 0$. Therefore, the vectors $[X, Y, Z]$ and $[\frac{\partial L}{\partial x}, \frac{\partial L}{\partial y}, \frac{\partial L}{\partial z}]$ are parallel, and thus there exists a constant $\lambda$ such that $X + \lambda\frac{\partial L}{\partial x} = 0$, $Y + \lambda\frac{\partial L}{\partial y} = 0$, $Z + \lambda\frac{\partial L}{\partial z} = 0$. Hence $X\,dx + Y\,dy + Z\,dz + \lambda\,dL = 0$.

17 Example of a constrained problem from 1788, III. From this point on, adding another constraint becomes elementary: $P\,dp + Q\,dq + R\,dr + \cdots + \lambda\,dL + \mu\,dM + \cdots = 0$. This is the method of multipliers (Lagrange). OBS: the meaning of $\lambda$ is that it ties the point to the surface (ensures that the constraint is satisfied).

18 How is all this related to minima and maxima? Lagrange noted: if a function $F(x, y, z)$ is such that for some independent $dx, dy, dz$ we have $\frac{\partial F}{\partial x}dx + \frac{\partial F}{\partial y}dy + \frac{\partial F}{\partial z}dz = 0$, then we must have $\frac{\partial F}{\partial x} = \frac{\partial F}{\partial y} = \frac{\partial F}{\partial z} = 0$. Then $F$ must have a min or max at $(x, y, z)$ (in the unconstrained case). Analogously, the conditions in the constrained case mean that we are minimizing/maximizing the function $F(x, y, z) + \lambda L(x, y, z)$, which is the Lagrangian functional.

19 Variational problems I. Optimization problems where the unknown is not only a scalar value but a function: $\min_y J$, $J = \int_a^b F(x, y(x), y_x)\,dx$. Here $F$ is a given function and $y = y(x)$ is unknown.
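For completeness (a standard result, not spelled out on this slide), the first-order optimality condition for such a functional, obtained by requiring the first variation of $J$ to vanish, is the Euler-Lagrange equation:

```latex
\frac{\partial F}{\partial y} - \frac{d}{dx}\,\frac{\partial F}{\partial y_x} = 0,
\qquad \text{with } y(a),\, y(b) \text{ prescribed (or natural boundary conditions).}
```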

20 Variational problems II. An example of a constrained variational problem, given by Bernoulli to his brother in 1697 (the oldest known such problem): Given two points A and B, find a curve of a given length L such that the area ADEFGA is maximized. In addition, for any distance y, g(y) is given.

21 Variational problems III. In mathematical terms: $\int_A^F g(y(x))\,dx \to \max$ subject to $\int_A^F \sqrt{1 + p^2}\,dx = L$, where $p = y_x$. The corresponding Lagrangian for the case $A = 0$, $F = 1$ is $\mathcal{L} = \int_0^1 \big[ g(y) + \lambda\big(\sqrt{1 + p^2} - L\big) \big]\,dx$.

22 Solving optimal control problems with Lagrange multipliers I. Example: Consider $J = \int_a^b k(x, y(x), u(x))\,dx \to \min$ subject to $\frac{dy}{dx} = f(x, y, u)$, $y(a) = y_a$, $y(b) = y_b$. Now we need to determine two functions: $y$ (the state) and $u$ (the control). As $x$ varies, there is an infinite number of constraints that control the behaviour of $y$. Introduce $\lambda(x)$.

23 Solving optimal control problems with Lagrange multipliers II. Multiply the constraint $\frac{dy}{dx} - f(x, y, u) = 0$ by $\lambda$ and insert it in the integral: $\mathcal{L}(x, y, p, u, \lambda) = \int_a^b \big[ k(x, y(x), u(x)) + (p - f)^T \lambda(x) \big]\,dx$, where $p$ denotes the derivative of $y$. The necessary optimality conditions read:
$\partial\mathcal{L}/\partial\lambda = 0:\quad y' - f(x, y, u) = 0$,
$\partial\mathcal{L}/\partial y - \frac{d}{dx}\,\partial\mathcal{L}/\partial p = 0:\quad \lambda' = k_y - f_y^T \lambda$,
$\partial\mathcal{L}/\partial u = 0:\quad k_u - f_u^T \lambda = 0$.   (1)
Unknowns: $y, u, \lambda$, all of which are functions!
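To see such a problem solved numerically, here is a discretize-then-optimize sketch of a closely related model problem (free right endpoint instead of a prescribed $y(b)$; the data, the forward-Euler discretization, and the use of scipy are all assumptions of this illustration):

```python
import numpy as np
from scipy.optimize import minimize

# Model problem: min \int_0^1 (y^2 + u^2) dx  s.t.  dy/dx = u,  y(0) = 1.
n = 50
h = 1.0 / n

def state(u):
    # forward Euler for the state equation y' = u
    y = np.empty(n + 1)
    y[0] = 1.0
    for i in range(n):
        y[i + 1] = y[i] + h * u[i]
    return y

def J(u):
    y = state(u)
    return h * np.sum(y[:-1]**2 + u**2)      # rectangle rule

res = minimize(J, np.zeros(n), method='L-BFGS-B')
# For this particular problem the continuous optimal value is tanh(1) ~ 0.762,
# so res.fun should be close to that (up to discretization error).
print(res.fun)
```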

24 Lagrange multipliers in finite-dimensional problems I. Consider a somewhat simpler setting: $\min_x f(x)$ subject to $g(x) = 0$, where $f : \mathbb{R}^n \to \mathbb{R}$ (objective), $g : \mathbb{R}^n \to \mathbb{R}^m$ (constraint), $x \in \mathbb{R}^n$, $m < n$. We want to eliminate the constraints. To this end, partition $x = (y, u)$, $y \in \mathbb{R}^m$, $u \in \mathbb{R}^{n-m}$. Since $g(x) = 0$, applying the implicit function theorem we can eliminate $y$ by expressing $y = y(u)$. Then we have to minimize a problem of reduced dimension: $\min_u f(y(u), u)$.

25 Lagrange multipliers in finite-dimensional problems II. The necessary condition for a local minimum: $\frac{df}{du} = f_y\, y_u + f_u = \big(Y_u^T \nabla_y f + \nabla_u f\big)^T = 0$, where $Y_u \in \mathbb{R}^{m \times (n-m)}$ is the Jacobian of the implicit function $y(u)$. Thus, the necessary optimality condition is a small system of $n - m$ equations in the $n - m$ unknowns of the vector $u$ only. However, there are very limited cases when we can construct $y(u)$ explicitly.

26 Lagrange multipliers in finite-dimensional problems III. Implicit function theorem.

27 Lagrange multipliers in finite-dimensional problems IV. In practice we work with the complete optimality system:
$Y_u^T \nabla_y f + \nabla_u f = 0$   ($n - m$ eq.)
$Y_u^T G_y^T + G_u^T = 0$   ($m(n - m)$ eq.)
$g = 0$   ($m$ eq.)
Here $G_u \in \mathbb{R}^{m \times (n-m)}$ is the Jacobian of $g$ with respect to $u$ and $G_y \in \mathbb{R}^{m \times m}$ is the Jacobian of $g$ with respect to $y$. In total, $n + m(n - m)$ equations!

28 Apply Lagrange multipliers: I. A miraculous effect! Assume that $G_y$ is invertible and introduce $\lambda = -G_y^{-T} \nabla_y f$. Multiply the matrix equation from the right by $\lambda$: $Y_u^T G_y^T \lambda + G_u^T \lambda = -Y_u^T G_y^T G_y^{-T} \nabla_y f + G_u^T \lambda = -Y_u^T \nabla_y f + G_u^T \lambda = 0$. Add this to $Y_u^T \nabla_y f + \nabla_u f = 0$ and we obtain an equivalent system of first-order necessary conditions.
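The following toy computation (made-up linear-quadratic data, my own sketch) checks this equivalence numerically: the reduced gradient $Y_u^T \nabla_y f + \nabla_u f$ coincides with $\nabla_u f + G_u^T \lambda$ for $\lambda = -G_y^{-T} \nabla_y f$:

```python
import numpy as np

# Linear constraint g(y,u) = G_y y + G_u u - b = 0, so y(u) = G_y^{-1}(b - G_u u)
# and Y_u = -G_y^{-1} G_u. Take f(y,u) = 0.5||y||^2 + 0.5||u||^2 for simplicity.
rng = np.random.default_rng(0)
m, nm = 3, 2                                  # m state equations, n - m controls
G_y = rng.standard_normal((m, m)) + 3 * np.eye(m)   # made invertible by shifting
G_u = rng.standard_normal((m, nm))
b = rng.standard_normal(m)
u = rng.standard_normal(nm)

y = np.linalg.solve(G_y, b - G_u @ u)
Y_u = -np.linalg.solve(G_y, G_u)
grad_y_f, grad_u_f = y, u                     # gradients of f w.r.t. y and u

reduced_grad = Y_u.T @ grad_y_f + grad_u_f    # elimination form
lam = -np.linalg.solve(G_y.T, grad_y_f)       # multiplier form
print(np.allclose(reduced_grad, grad_u_f + G_u.T @ lam))  # True
```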

29 Apply Lagrange multipliers: II. The resulting first-order necessary conditions are much simpler:
$\nabla_y f + G_y^T \lambda = 0$   ($m$ eq.)
$\nabla_u f + G_u^T \lambda = 0$   ($n - m$ eq.)
$g = 0$   ($m$ eq.)
In total: $n + m$ equations! The key observation: the simpler necessary optimality system is obtained from the Lagrangian functional $\mathcal{L}(y, u, \lambda) = f(y, u) + g(y, u)^T \lambda$!
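For a quadratic objective with linear constraints, this Lagrangian system is linear in $(x, \lambda)$; the sketch below (with made-up data) assembles and solves it as a saddle-point (KKT) system:

```python
import numpy as np

# min 0.5 x^T H x - c^T x  s.t.  A x = b, with Lagrangian
# L = 0.5 x^T H x - c^T x + lambda^T (A x - b); stationarity gives
# H x + A^T lambda = c and A x = b.
H = np.array([[2.0, 0.0], [0.0, 4.0]])
c = np.array([1.0, 1.0])
A = np.array([[1.0, 1.0]])                    # one equality constraint (m = 1)
b = np.array([1.0])

n, m = H.shape[0], A.shape[0]
K = np.block([[H, A.T],
              [A, np.zeros((m, m))]])         # symmetric indefinite KKT matrix
rhs = np.concatenate([c, b])
sol = np.linalg.solve(K, rhs)
x, lam = sol[:n], sol[n:]
print(x, lam)                                 # x = [2/3, 1/3], lam = [-1/3]
```

Note the zero block in position (2,2): the system is symmetric but indefinite, which is exactly the saddle-point structure that reappears, at much larger scale, in the discretized OPT-PDE problems later in the lecture.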

30 Apply Lagrange multipliers: III. OBS: a new difficulty arises when the control cannot be arbitrarily large, i.e., $u$ does not vary freely but only in some closed set $U$.

31 Pontryagin's maximum principle. In the paper by Gander et al., a connection is made between the Lagrange multiplier framework and Pontryagin's maximum principle. (Interesting, but out of the scope of this course.)

32 An example of a variational problem (to get the flavour). Given a system with a state variable $y = y(t) \in \mathbb{R}$ and a control variable $u = u(t) \in \mathbb{R}$, described by $y' = u$, $y(0) = y_0$, subject to the box constraints $|u(t)| \le 1$ for each $t$. Find $u$ such that $y(1) = 1/2$ and the following cost functional is minimized: $J = \int_0^1 y^2\,dt$.
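A hedged discretize-then-optimize sketch of this example ($y_0 = 1$ is an assumed value, and the Euler discretization and SLSQP solver are my choices, not the slides'):

```python
import numpy as np
from scipy.optimize import minimize

# y' = u, y(0) = y0, |u| <= 1, terminal condition y(1) = 1/2, min J = int y^2 dt.
n, y0 = 50, 1.0
h = 1.0 / n

def state(u):
    # forward Euler: y[k] = y0 + h * sum(u[:k])
    return y0 + h * np.concatenate([[0.0], np.cumsum(u)])

J = lambda u: h * np.sum(state(u)[:-1] ** 2)          # rectangle rule
res = minimize(J, np.zeros(n), method='SLSQP',
               bounds=[(-1.0, 1.0)] * n,              # the box constraints
               constraints={'type': 'eq',
                            'fun': lambda u: state(u)[-1] - 0.5})
print(res.fun, state(res.x)[-1])                      # terminal state ~ 0.5
```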

33 PDE-constrained optimal control problems

34 PDE-constrained optimal control problems. Given a functional $J$ to minimize, deal with a constraint that is a PDE. The ultimate problems to solve: control of various processes (oil reservoir simulations, you name it), shape and topology optimization (airfoils), inverse problems (parameter estimation in all flavours: uncertainty, noisy data, ...).

35 OPT-PDE. The model problem: $\min_{y,u} J(y, u)$ subject to $L(y, u) = 0$, where $L$ is a differential operator. Many questions arise: How to define the cost functional so as to ensure a well-posed problem? Should we first optimize, then discretize, or vice versa? How to handle additional constraints? How to solve the arising linear or nonlinear algebraic systems?

36 OPT-PDE: variations. $\min_{y,u} J(y, u; m)$ subject to $L(y, u; m) = 0$, where $m$ is a vector of parameters. $\min_{y,u} J(Q(y), u)$ subject to $L(y, u) = 0$, where $Q(y)$ is some function of $y$; for instance, it takes only the values of $y$ on the boundary of the domain.

37 OPT-PDE: variations. How to handle time-dependent problems? We obtain very large algebraic problems, in particular for large time intervals. Examples: the non-stationary heat equation, eddy current simulations, the non-stationary Navier-Stokes equations.

38 OPT-PDE: Discretize Optimize? The problems mentioned so far are continuous, and the variable functions live in some Hilbert space. The important message is that, in general, discretize-then-optimize (D-O) and optimize-then-discretize (O-D) do not commute. O-D: we need to discretize the derivatives of the Lagrangian, which may not be true gradients of any objective function; thus, we may get wrong descent directions. D-O: we may need to differentiate computational facilitators, for instance, meshes. Some aspects, such as mesh-independent convergence, are only tractable via the continuous framework.

39 OPT-PDE: Discretize Optimize? Discrete PDEs are inherently large scale. Discrete OPT-PDEs (the KKT systems) are even larger! Discretization and solution of OPT-PDEs should not be considered independent; rather, we should see how to intertwine them in order to obtain efficient solution algorithms.

40 Optimization and regularization. Regularization can be seen as a reformulation needed to ensure well-posedness of the OPT-PDE problem. In other words, to guarantee a unique and stable solution, we add a regularization term to the cost functional:
$J = \frac{1}{2}\|y - y_d\|^2 + \frac{\beta}{2}\|u\|^2$,
$J = \frac{1}{2}\|Q(y) - d\|^2 + \frac{\beta}{2}R(m)$,
where $Q(y)$ observes $y$ on (part of) the domain, $d$ is the given data, and $R(m)$ is the regularization term; $\beta$ balances how well the data should be fitted against the regularity of the solution. This is Tikhonov regularization; the choice of the value of $\beta$ matters.
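As an illustration of the first, Tikhonov-regularized functional, here is a sketch of the discrete KKT system for a 1D Poisson control problem (the finite-difference discretization, the lumped mass matrix, and all data are assumptions of this example, not from the slides):

```python
import numpy as np

# min 0.5||y - y_d||^2 + (beta/2)||u||^2  s.t.  -y'' = u on (0,1), y(0)=y(1)=0,
# discretized by central finite differences on n interior points: A y = u.
n, beta = 100, 1e-4
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
y_d = np.sin(np.pi * x)                      # desired state (example data)

A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2   # FD Laplacian
M = h * np.eye(n)                            # lumped mass matrix
I = np.eye(n)
Z = np.zeros((n, n))

# Lagrangian L = 0.5 (y-y_d)^T M (y-y_d) + 0.5 beta u^T M u + lambda^T (A y - u);
# stationarity in (y, u, lambda) gives the 3x3 block saddle-point system:
K = np.block([[M,        Z,        A.T],
              [Z,        beta * M, -I ],
              [A,        -I,       Z  ]])
rhs = np.concatenate([M @ y_d, np.zeros(n), np.zeros(n)])
y, u, lam = np.split(np.linalg.solve(K, rhs), 3)
print(np.sqrt(h) * np.linalg.norm(y - y_d))  # misfit shrinks as beta -> 0
```

As $\beta \to 0$ the misfit decreases, but the (2,2) block $\beta M$ weakens and the algebraic system becomes increasingly ill-conditioned, which is one concrete reason the value of $\beta$ matters and why the next slide asks about the effect of the regularization on the conditioning.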

41 Regularization, cont. Which norms to use depends on the problem at hand: $L^1$-regularization and sparse controls, Moreau-Yosida regularization, Lavrentiev regularization. Important: what is the effect of the type of regularization on the (conditioning of the) resulting algebraic systems to be solved?
