Introduction to Optimization


Robert Stengel
Robotics and Intelligent Systems, MAE 345, Princeton University, 2017

- Optimization problems and criteria
- Cost functions
- Static optimality conditions
- Examples of static optimization

Copyright 2017 by Robert Stengel. All rights reserved. For educational use only. http://www.princeton.edu/~stengel/mae345.html

Typical Optimization Problems
- Minimize the probable error in an estimate of the dynamic state of a system
- Maximize the probability of making a correct decision
- Minimize the time or energy required to achieve an objective
- Minimize the regulation error in a controlled system
These problems divide into two classes: estimation and control.

Optimization Implies Choice
- Choice of best strategy
- Choice of best design parameters
- Choice of best control history
- Choice of best estimate
Optimization is provided by selection of the best control variable.

Criteria for Optimization
Names for criteria:
- Figure of merit
- Performance index
- Utility function
- Value function
- Fitness function
- Cost function, J
Optimal cost function = J*; optimal control = u*. Different criteria lead to different optimal solutions.

Types of Optimality Criteria
- Absolute
- Regulatory
- Feasible

Minimize Absolute Criteria
Achieve a specific objective, such as minimizing the required time, fuel, or financial cost to perform a task. What is the control variable?

Optimal System Regulation
Design a controller that minimizes the tracking error, Δx, in the presence of random disturbances (e.g., a passive damper vs. active control).

Feasible Control Logic
Find a feedback control structure that guarantees stability (i.e., that prevents divergence).
- Single Inverted Pendulum: http://www.youtube.com/watch?v=mi-tek7hvzs
- Double Inverted Pendulum: http://www.youtube.com/watch?v=8hddzkxnmey

Desirable Characteristics of a Cost Function
- Scalar
- Clearly defined (preferably unique) maximum or minimum, both local and global
- Preferably positive-definite (i.e., always a positive number)

Static vs. Dynamic Optimization
- Static: the optimal state, x*, and control, u*, are fixed, i.e., they do not change over time: J* = J(x*, u*)
  - Functional minimization (or maximization)
  - Parameter optimization
- Dynamic: the optimal state and control vary over time: J* = J[x*(t), u*(t)]
  - Optimal trajectory
  - Optimal feedback strategy
- The optimized cost function, J*, is a scalar, real number in both cases.

Deterministic vs. Stochastic Optimization
- Deterministic: the system model, parameters, initial conditions, and disturbances are known without error; the optimal control operates on the system with certainty; J* = J(x*, u*)
- Stochastic: there is uncertainty in the system model, parameters, initial conditions, disturbances, and the resulting cost function; the optimal control minimizes the expected value of the cost: optimal cost = E{J[x*, u*]}
- The cost function is a scalar, real number in both cases.

Cost Function with a Single Control Parameter
Tradeoff between two types of cost, e.g., finding the minimum-cost cruising speed:
- Fuel cost proportional to velocity squared
- Cost of time inversely proportional to velocity
- Control parameter: velocity

Tradeoff Between Time- and Fuel-Based Costs
Cases compared: nominal tradeoff, fuel cost doubled, and time cost doubled.
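A minimal numerical sketch of this tradeoff, assuming a cost of the form J(v) = a*v^2 + b/v with illustrative coefficients a and b (the coefficients and the use of scipy are assumptions, not values from the lecture):

```python
# Minimum-cost cruising speed sketch: fuel cost = a*v**2, time cost = b/v.
# Coefficients a and b are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize_scalar

a, b = 1.0, 250.0          # assumed fuel and time cost weights

def cost(v):
    """Total cost J(v) = fuel cost + time cost."""
    return a * v**2 + b / v

# Analytic minimizer: dJ/dv = 2*a*v - b/v**2 = 0  ->  v* = (b / (2*a))**(1/3)
v_analytic = (b / (2.0 * a)) ** (1.0 / 3.0)

# Numerical check over a bracket of plausible speeds
result = minimize_scalar(cost, bounds=(1.0, 100.0), method="bounded")
print(f"analytic v* = {v_analytic:.3f}, numerical v* = {result.x:.3f}")

# Doubling b (time cost) raises v*; doubling a (fuel cost) lowers it,
# matching the 'fuel cost doubled' / 'time cost doubled' cases on the slide.
```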

Cost Functions with Two Control Parameters
Minima and maxima can be visualized either on a 3-D plot of the cost surface with equal-cost contours (iso-contours) or on a 2-D plot of the iso-contours alone.

Real-World Topography
- Local vs. global maxima/minima
- Robustness of estimates

Cost Functions with Equality Constraints
"Stay on the trail": an equality constraint may allow the control dimension to be reduced:
$$c(u_1, u_2) = 0 \;\Rightarrow\; u_2 = \mathrm{fcn}(u_1)$$
$$J(u_1, u_2) = J[u_1, \mathrm{fcn}(u_1)] \triangleq J(u_1)$$

Cost Functions with Inequality Constraints
Person: stay outside the fence. Horse: stay inside the fence.

Necessary Condition for Static Optimality

Single control:
$$\left.\frac{dJ}{du}\right|_{u=u^*} = 0$$
i.e., the slope is zero at the optimum point.
Example: $J = (u - 4)^2$, $\dfrac{dJ}{du} = 2(u - 4) = 0$ when $u^* = 4$.

Multiple controls:
$$\left.\frac{\partial J}{\partial \mathbf{u}}\right|_{\mathbf{u}=\mathbf{u}^*} = \left[\begin{array}{cccc} \dfrac{\partial J}{\partial u_1} & \dfrac{\partial J}{\partial u_2} & \cdots & \dfrac{\partial J}{\partial u_m} \end{array}\right]_{\mathbf{u}=\mathbf{u}^*} = \mathbf{0}$$
This is the gradient; i.e., all slopes are concurrently zero at the optimum point.
Example: $J = (u_1 - 4)^2 + (u_2 - 8)^2$
$$\frac{\partial J}{\partial u_1} = 2(u_1 - 4) = 0 \text{ when } u_1^* = 4, \qquad \frac{\partial J}{\partial u_2} = 2(u_2 - 8) = 0 \text{ when } u_2^* = 8$$
$$\left.\frac{\partial J}{\partial \mathbf{u}}\right|_{\mathbf{u}=\mathbf{u}^*} = \left[\begin{array}{cc} \dfrac{\partial J}{\partial u_1} & \dfrac{\partial J}{\partial u_2} \end{array}\right]_{\mathbf{u}^*=\left[\begin{smallmatrix} 4 \\ 8 \end{smallmatrix}\right]} = \left[\begin{array}{cc} 0 & 0 \end{array}\right]$$
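A minimal numeric check of the necessary condition for the slide's two-control example, using a finite-difference gradient (the helper function is an illustrative assumption, not part of the lecture):

```python
# Verify dJ/du = 0 at u* for J = (u1 - 4)^2 + (u2 - 8)^2.
import numpy as np

def J(u):
    return (u[0] - 4.0) ** 2 + (u[1] - 8.0) ** 2

def gradient(f, u, h=1e-6):
    """Central-difference approximation of the gradient dJ/du."""
    u = np.asarray(u, dtype=float)
    g = np.zeros_like(u)
    for i in range(u.size):
        e = np.zeros_like(u)
        e[i] = h
        g[i] = (f(u + e) - f(u - e)) / (2.0 * h)
    return g

u_star = np.array([4.0, 8.0])
print("gradient at u*     :", gradient(J, u_star))      # ~ [0, 0]
print("gradient at [0, 0] :", gradient(J, [0.0, 0.0]))  # nonzero away from the optimum
```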

But the Slope Can Be Zero for More than One Reason
A zero slope can indicate a minimum, a maximum, or an inflection point.

Sufficient Condition for Static Optimum (single control)
Minimum: satisfy the necessary condition plus
$$\left.\frac{d^2 J}{du^2}\right|_{u=u^*} > 0$$
i.e., the curvature is positive at the optimum.
Example: $J = (u - 4)^2$, $\dfrac{dJ}{du} = 2(u - 4)$, $\dfrac{d^2 J}{du^2} = 2 > 0$.

Maximum: satisfy the necessary condition plus
$$\left.\frac{d^2 J}{du^2}\right|_{u=u^*} < 0$$
i.e., the curvature is negative at the optimum.
Example: $J = -(u - 4)^2$, $\dfrac{dJ}{du} = -2(u - 4)$, $\dfrac{d^2 J}{du^2} = -2 < 0$.

Sufficient Condition for Static Minimum (multiple controls)
Satisfy the necessary condition
$$\left.\frac{\partial J}{\partial \mathbf{u}}\right|_{\mathbf{u}=\mathbf{u}^*} = \left[\begin{array}{cccc} \dfrac{\partial J}{\partial u_1} & \dfrac{\partial J}{\partial u_2} & \cdots & \dfrac{\partial J}{\partial u_m} \end{array}\right]_{\mathbf{u}=\mathbf{u}^*} = \mathbf{0}$$
plus positive definiteness of the Hessian matrix:
$$\left.\frac{\partial^2 J}{\partial \mathbf{u}^2}\right|_{\mathbf{u}=\mathbf{u}^*} =
\begin{bmatrix}
\dfrac{\partial^2 J}{\partial u_1^2} & \dfrac{\partial^2 J}{\partial u_1 \partial u_2} & \cdots & \dfrac{\partial^2 J}{\partial u_1 \partial u_m} \\
\dfrac{\partial^2 J}{\partial u_2 \partial u_1} & \dfrac{\partial^2 J}{\partial u_2^2} & \cdots & \dfrac{\partial^2 J}{\partial u_2 \partial u_m} \\
\vdots & \vdots & \ddots & \vdots \\
\dfrac{\partial^2 J}{\partial u_m \partial u_1} & \dfrac{\partial^2 J}{\partial u_m \partial u_2} & \cdots & \dfrac{\partial^2 J}{\partial u_m^2}
\end{bmatrix}_{\mathbf{u}=\mathbf{u}^*} > 0$$
But what does it mean for a matrix to be "greater than zero"?

Q > 0 if Its Quadratic Form, x^T Q x, is Greater than Zero
$$\underset{(1 \times n)}{\mathbf{x}^T}\;\underset{(n \times n)}{Q}\;\underset{(n \times 1)}{\mathbf{x}} = \underset{(1 \times 1)}{\mathbf{x}^T Q \mathbf{x}}$$
- Quadratic form; Q is the defining matrix of the quadratic form
- dim(Q) = n x n
- Q is symmetric
- x^T Q x is a scalar

Quadratic Form of Q is Positive* if Q is Positive Definite
Q is positive-definite if:
- All leading principal minor determinants are positive
- All eigenvalues are real and positive

3 x 3 example:
$$Q = \begin{bmatrix} q_{11} & q_{12} & q_{13} \\ q_{21} & q_{22} & q_{23} \\ q_{31} & q_{32} & q_{33} \end{bmatrix}$$
$$q_{11} > 0, \qquad \begin{vmatrix} q_{11} & q_{12} \\ q_{21} & q_{22} \end{vmatrix} > 0, \qquad \begin{vmatrix} q_{11} & q_{12} & q_{13} \\ q_{21} & q_{22} & q_{23} \\ q_{31} & q_{32} & q_{33} \end{vmatrix} > 0$$
$$\det(sI - Q) = (s - \lambda_1)(s - \lambda_2)(s - \lambda_3), \qquad \lambda_1, \lambda_2, \lambda_3 > 0$$
(* except at x = 0)

Minimized Cost Function, J*
- The gradient is zero at the minimum
- The Hessian matrix is positive-definite at the minimum
Expand the cost in a Taylor series:
$$J(\mathbf{u}^* + \Delta\mathbf{u}) \approx J(\mathbf{u}^*) + \Delta J(\mathbf{u}^*) + \Delta^2 J(\mathbf{u}^*) + \dots$$
$$\Delta J(\mathbf{u}^*) = \Delta\mathbf{u}^T \left.\frac{\partial J}{\partial \mathbf{u}}\right|_{\mathbf{u}=\mathbf{u}^*}^T = 0$$
$$\Delta^2 J(\mathbf{u}^*) = \frac{1}{2}\,\Delta\mathbf{u}^T \left.\frac{\partial^2 J}{\partial \mathbf{u}^2}\right|_{\mathbf{u}=\mathbf{u}^*} \Delta\mathbf{u} \ge 0$$
- The first variation is zero at the minimum
- The second variation is positive at the minimum
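A minimal sketch of the two positive-definiteness tests from the slides (leading principal minors and eigenvalues), assuming numpy; the 3 x 3 example matrix is illustrative, not from the lecture:

```python
# Positive-definiteness check for a symmetric matrix Q.
import numpy as np

def is_positive_definite(Q, tol=1e-12):
    Q = np.asarray(Q, dtype=float)
    # Test 1: all leading principal minor determinants are positive
    minors_ok = all(np.linalg.det(Q[:k, :k]) > tol for k in range(1, Q.shape[0] + 1))
    # Test 2: all eigenvalues of the symmetric matrix are real and positive
    eigs_ok = bool(np.all(np.linalg.eigvalsh(Q) > tol))
    return minors_ok, eigs_ok

Q = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])   # assumed example matrix
print(is_positive_definite(Q))    # (True, True)  -> Q > 0
print(is_positive_definite(-Q))   # (False, False) -> not positive definite
```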

How Many Maxima/Minima Does the Mexican Hat Have?
$$z = \mathrm{sinc}(R) \triangleq \frac{\sin R}{R}, \qquad R = \sqrt{u_1^2 + u_2^2}$$
Apply the same stationarity and curvature tests:
$$\left.\frac{\partial J}{\partial \mathbf{u}}\right|_{\mathbf{u}=\mathbf{u}^*} = \mathbf{0}, \qquad \left.\frac{\partial^2 J}{\partial \mathbf{u}^2}\right|_{\mathbf{u}=\mathbf{u}^*} \gtrless 0$$
The prior example had one maximum. Wolfram Alpha: maximize(sinc(sqrt(x^2+y^2))). (A numeric sketch of the radial extrema appears below.)

Static Cost Functions with Equality Constraints
- Minimize J(ū), subject to c(ū) = 0
- dim(c) = [n x 1]
- dim(ū) = [(m + n) x 1]
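The sketch referenced above: along any radial slice, stationary points of z = sin(R)/R satisfy dz/dR = 0, i.e., tan R = R, and each nonzero root corresponds to a ring of local extrema in the (u1, u2) plane, in addition to the single global maximum at R = 0. This uses scipy's brentq root finder (an assumption, not a tool named in the lecture):

```python
# Locate the first few radial stationary points of sinc(R) = sin(R)/R.
import numpy as np
from scipy.optimize import brentq

def dz_dR(R):
    """Derivative of sin(R)/R, valid away from R = 0."""
    return (R * np.cos(R) - np.sin(R)) / R**2

# One root of dz/dR lies in each interval ((k-1/2)*pi, (k+1/2)*pi), k = 1, 2, ...
roots = [brentq(dz_dR, (k - 0.5) * np.pi + 1e-6, (k + 0.5) * np.pi - 1e-6)
         for k in range(1, 5)]
for R in roots:
    z = np.sin(R) / R
    kind = "ring maximum" if z > 0 else "ring minimum"
    print(f"R = {R:.4f}: z = {z:+.4f} ({kind})")
```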

Two Approaches to Static Optimization with a Constraint
$$\bar{\mathbf{u}} = \begin{bmatrix} u_1 \\ u_2 \end{bmatrix}$$
1. Use the constraint to reduce the control dimension:
$$\min_{u_1, u_2} J(u_1, u_2) \ \text{subject to} \ c(\bar{\mathbf{u}}) = c(u_1, u_2) = 0 \;\Rightarrow\; u_2 = \mathrm{fcn}(u_1)$$
$$J(\bar{\mathbf{u}}) = J(u_1, u_2) = J[u_1, \mathrm{fcn}(u_1)] \triangleq J(u_1)$$
2. Augment the cost function to recognize the constraint:
$$J_A(\bar{\mathbf{u}}) = J(\bar{\mathbf{u}}) + \boldsymbol{\lambda}^T c(\bar{\mathbf{u}})$$
where λ is an unknown constant with the same dimension as the constraint: dim(λ) = dim(c) = [n x 1].

Solution: First Approach
Cost function: $J = u_1^2 - 2 u_1 u_2 + 3 u_2^2 - 40$
Constraint: $c = u_2 - u_1 - 2 = 0 \;\Rightarrow\; u_2 = u_1 + 2$

Solution Example: Reduced Control Dimension
Cost function and gradient with the constraint substituted:
$$J = u_1^2 - 2 u_1 u_2 + 3 u_2^2 - 40 = u_1^2 - 2 u_1 (u_1 + 2) + 3 (u_1 + 2)^2 - 40 = 2 u_1^2 + 8 u_1 - 28$$
$$\frac{\partial J}{\partial u_1} = 4 u_1 + 8 = 0 \;\Rightarrow\; u_1 = -2$$
Optimal solution: $u_1^* = -2$, $u_2^* = 0$, $J^* = -36$. (A numerical cross-check of this result appears below.)

Solution: Second Approach
- Partition ū into a state, x, and a control, u, such that dim(x) = dim[c(x)] = [n x 1] and dim(u) = [m x 1]:
$$\bar{\mathbf{u}} = \begin{bmatrix} \mathbf{x} \\ \mathbf{u} \end{bmatrix}$$
- Add the constraint to the cost function, weighted by the Lagrange multiplier, λ, with dim(λ) = [n x 1]:
$$J_A(\bar{\mathbf{u}}) = J(\bar{\mathbf{u}}) + \boldsymbol{\lambda}^T c(\bar{\mathbf{u}}) = J(\mathbf{x}, \mathbf{u}) + \boldsymbol{\lambda}^T c(\mathbf{x}, \mathbf{u})$$
- c is required to be zero when J_A is a minimum: $c(\mathbf{x}, \mathbf{u}) = 0$
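The cross-check referenced above: a minimal sketch, assuming scipy, that substitutes the constraint and minimizes over u1 alone, reproducing the first-approach result:

```python
# Reduced-dimension check of min J = u1^2 - 2*u1*u2 + 3*u2^2 - 40, s.t. u2 - u1 - 2 = 0.
from scipy.optimize import minimize_scalar

def J_reduced(u1):
    u2 = u1 + 2.0                     # equality constraint u2 = u1 + 2
    return u1**2 - 2.0*u1*u2 + 3.0*u2**2 - 40.0

res = minimize_scalar(J_reduced)
u1 = res.x
print(f"u1* = {u1:.3f}, u2* = {u1 + 2.0:.3f}, J* = {res.fun:.3f}")
# Expected: u1* = -2, u2* = 0, J* = -36, as derived analytically above.
```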

Solution: Adjoin the Constraint with a Lagrange Multiplier
The gradient with respect to x, u, and λ is zero at the optimum point:
$$\frac{\partial J_A}{\partial \mathbf{x}} = \frac{\partial J}{\partial \mathbf{x}} + \boldsymbol{\lambda}^T \frac{\partial c}{\partial \mathbf{x}} = 0 \qquad (n \text{ equations})$$
$$\frac{\partial J_A}{\partial \mathbf{u}} = \frac{\partial J}{\partial \mathbf{u}} + \boldsymbol{\lambda}^T \frac{\partial c}{\partial \mathbf{u}} = 0 \qquad (m \text{ equations})$$
$$\frac{\partial J_A}{\partial \boldsymbol{\lambda}} = c = 0 \qquad (n \text{ equations})$$

Simultaneous Solutions for State and Control
- (2n + m) values must be found: (x, λ, u)
- Use the first equation to find the form of the optimizing Lagrange multiplier (n scalar equations):
$$\boldsymbol{\lambda}^{*T} = -\frac{\partial J}{\partial \mathbf{x}} \left(\frac{\partial c}{\partial \mathbf{x}}\right)^{-1}, \qquad \boldsymbol{\lambda}^* = -\left[\left(\frac{\partial c}{\partial \mathbf{x}}\right)^{-1}\right]^T \left(\frac{\partial J}{\partial \mathbf{x}}\right)^T$$
- The second and third equations provide (n + m) scalar equations that specify the state and control:
$$\frac{\partial J}{\partial \mathbf{u}} + \boldsymbol{\lambda}^{*T} \frac{\partial c}{\partial \mathbf{u}} = \frac{\partial J}{\partial \mathbf{u}} - \frac{\partial J}{\partial \mathbf{x}} \left(\frac{\partial c}{\partial \mathbf{x}}\right)^{-1} \frac{\partial c}{\partial \mathbf{u}} = 0$$
$$c(\mathbf{x}, \mathbf{u}) = 0$$

Solution Example: Second Approach
Cost function: $J = u^2 - 2xu + 3x^2 - 40$
Constraint: $c = x - u - 2 = 0$
Partial derivatives:
$$\frac{\partial J}{\partial x} = -2u + 6x, \qquad \frac{\partial J}{\partial u} = 2u - 2x, \qquad \frac{\partial c}{\partial x} = 1, \qquad \frac{\partial c}{\partial u} = -1$$

Solution Example: Second Approach (continued)
- From the first equation: $\lambda^* = 2u - 6x$
- From the second equation: $(2u - 2x) + (2u - 6x)(-1) = 4x = 0 \;\Rightarrow\; x = 0$
- From the constraint: $u = -2$
- Optimal solution: $x^* = 0$, $u^* = -2$, $J^* = -36$
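For this example, the three stationarity conditions of the adjoined cost are linear in (x, u, λ), so they can be solved directly; a minimal sketch assuming numpy:

```python
# Solve dJ_A/dx = dJ_A/du = dJ_A/dlambda = 0 for
# J = u^2 - 2xu + 3x^2 - 40, c = x - u - 2:
#    6x - 2u + lam = 0
#   -2x + 2u - lam = 0
#     x -  u       = 2
import numpy as np

A = np.array([[ 6.0, -2.0,  1.0],
              [-2.0,  2.0, -1.0],
              [ 1.0, -1.0,  0.0]])
b = np.array([0.0, 0.0, 2.0])

x, u, lam = np.linalg.solve(A, b)
J_opt = u**2 - 2*x*u + 3*x**2 - 40
print(f"x* = {x:.1f}, u* = {u:.1f}, lambda* = {lam:.1f}, J* = {J_opt:.1f}")
# Expected: x* = 0, u* = -2, J* = -36, matching both solution approaches.
```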

Next Time: Numerical Optimization