Minimization with Equality Constraints!


Path Constraints and Numerical Optimization
Robert Stengel, Optimal Control and Estimation, MAE 546, Princeton University, 2017

Topics:
- State and control equality constraints
- Pseudoinverse
- State and control inequality constraints
- Numerical methods: parametric optimization, penalty function and collocation, neighboring extremal, multiple shooting, gradient, Newton-Raphson, steepest descent
- Optimal treatment of an infection

Copyright 2017 by Robert Stengel. All rights reserved. For educational use only.

Minimization with Equality Constraints

Minimization with Equality Constraint on State and Control

  min over u(t):  J = \phi[x(t_f)] + \int_{t_0}^{t_f} L[x(t),u(t)] dt

  subject to the dynamic constraint  \dot{x}(t) = f[x(t),u(t)],  x(t_0) given

  and the state/control equality constraint  c[x(t),u(t)] = 0 in (t_0, t_f)

  dim(x) = dim(f) = n x 1,  dim(u) = m x 1,  dim(c) = r x 1 <= m x 1

Aeronautical Example: Longitudinal Point-Mass Dynamics

  \dot{V} = [T - C_D (1/2) \rho V^2 S] / m - g \sin\gamma
  \dot{\gamma} = (1/V) {[C_L (1/2) \rho V^2 S] / m - g \cos\gamma}
  \dot{h} = V \sin\gamma
  \dot{r} = V \cos\gamma
  \dot{m} = -SFC \cdot T

  x_1 = V: Velocity, m/s
  x_2 = \gamma: Flight path angle, rad
  x_3 = h: Height, m
  x_4 = r: Range, m
  x_5 = m: Mass, kg

Aeronautical Example: Variables and Constants

  u_1 = \delta T: Throttle setting
  u_2 = \alpha: Angle of attack, rad
  \rho = \rho_{SL} e^{-\beta h}: Air density, kg/m^3
  T = \delta T \, T_{max,SL} e^{-\beta h}: Thrust, N
  C_D = C_{D_0} + \epsilon C_L^2: Drag coefficient
  C_L = C_{L_\alpha} \alpha: Lift coefficient
  S = Reference area, m^2
  m = Vehicle mass, kg
  g = Gravitational acceleration, m/s^2
  SFC = Specific fuel consumption, g/kN-s
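As a concrete reference, here is a minimal Python sketch of the point-mass dynamics and variable definitions above. The numerical parameter values (density model, aerodynamic coefficients, maximum thrust, SFC) are placeholders for illustration, not values from the lecture.

```python
import numpy as np

def point_mass_dynamics(t, x, u,
                        rho_SL=1.225, beta=1.0/9000.0, S=20.0,
                        C_D0=0.02, eps=0.06, C_L_alpha=5.0,
                        T_max_SL=50e3, SFC=1.0e-5, g=9.81):
    V, gamma, h, r, m = x            # velocity, flight path angle, height, range, mass
    dT, alpha = u                    # throttle setting, angle of attack
    rho = rho_SL * np.exp(-beta * h)        # air density
    T = dT * T_max_SL * np.exp(-beta * h)   # thrust
    C_L = C_L_alpha * alpha
    C_D = C_D0 + eps * C_L**2
    qbarS = 0.5 * rho * V**2 * S            # dynamic pressure times reference area
    V_dot = (T - C_D * qbarS) / m - g * np.sin(gamma)
    gamma_dot = ((C_L * qbarS) / m - g * np.cos(gamma)) / V
    h_dot = V * np.sin(gamma)
    r_dot = V * np.cos(gamma)
    m_dot = -SFC * T
    return np.array([V_dot, gamma_dot, h_dot, r_dot, m_dot])
```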

Path Constraint Included in the Cost Function

  Constraint must be satisfied at every instant of the trajectory
  Dimension of the constraint <= dimension of the control

  J_1 = \phi[x(t_f)] + \int_{t_0}^{t_f} { L + \lambda^T (f[x(t),u(t)] - \dot{x}(t)) + \mu^T c[x(t),u(t)] } dt

  c[x(t),u(t)] = 0 in (t_0, t_f)

  The constraint is adjoined to the Hamiltonian:  H = L + \lambda^T f + \mu^T c

  dim(x) = dim(f) = dim(\lambda) = n x 1,  dim(u) = m x 1,  dim(c) = dim(\mu) = r x 1,  r <= m

Euler-Lagrange Equations Including Equality Constraint

  \lambda(t_f) = {\partial\phi[x(t_f)] / \partial x}^T

  \dot{\lambda}(t) = -[\partial H[x,u,\lambda,c,\mu,t] / \partial x]^T
                  = -[\partial L/\partial x + \lambda^T \partial f/\partial x + \mu^T \partial c/\partial x]^T
                  = -[\partial L/\partial x + \lambda^T F + \mu^T \partial c/\partial x]^T

  \partial H[x,u,\lambda,c,\mu,t] / \partial u = \partial L/\partial u + \lambda^T G + \mu^T \partial c/\partial u = 0

  c[x(t),u(t)] = 0 in (t_0, t_f)

  dim(x) = dim(\lambda) = n x 1,  dim(u) = m x 1,  dim(c) = dim(\mu) = r x 1, r <= m
  dim(F) = n x n,  dim(G) = n x m

No Optimization When r = m

  Control entirely specified by constraint: m unknowns, m equations
  c[x(t),u(t)] = 0  =>  u(t) = fcn[x(t)]

  Example (state + control):
  c = A x(t) + B u(t) = 0;  dim(x) = n x 1;  dim(c) = dim(u) = m x 1;  dim(A) = m x n;  dim(B) = m x m
  u(t) = -B^{-1} A x(t)

  Constraint Lagrange multiplier is unneeded but can be expressed from \partial H/\partial u = 0:
  \mu^T = -(\partial L/\partial u + \lambda^T G) (\partial c/\partial u)^{-1}
  (\partial c/\partial u is square and non-singular)
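A quick numerical check of the r = m case above, with arbitrary illustrative A, B, and x: the constraint alone determines the control.

```python
import numpy as np

A = np.array([[1.0, 0.5, -0.2],
              [0.0, 2.0,  1.0]])        # m x n = 2 x 3
B = np.array([[1.0, 0.3],
              [0.2, 1.5]])              # m x m = 2 x 2, non-singular
x = np.array([1.0, -2.0, 0.5])

u = -np.linalg.solve(B, A @ x)          # u = -B^{-1} A x
print(u, A @ x + B @ u)                 # constraint residual should be ~0
```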

Effect of Equality Constraint Dimensionality: r < m

  MINIMIZE FUEL AND CONTROL USE WHILE MAINTAINING CONSTANT FLIGHT PATH ANGLE

  min over u(t):  J = \int_{t_0}^{t_f} (q \dot{m}^2 + r_1 u_1^2 + r_2 u_2^2) dt

  c[x(t),u(t)] = 0 = \dot{\gamma} = (1/V(t)) { [C_{L_\alpha} \alpha(t) (1/2) \rho V^2(t) S] / m(t) - g \cos\gamma(t) },
  \gamma(t) = \gamma_{desired}

Effect of Constraint Dimensionality: r < m

  dim(x) = n x 1,  dim(u) = m x 1,  dim(c) = r x 1
  \partial c/\partial u is not square when r < m, so it is not strictly invertible

  Three approaches to constrained optimization:
  - Algebraic solution for r control variables, using an invertible subset of the constraint (if it exists)
  - Pseudoinverse of the control effect (or weighted pseudoinverse)
  - Soft constraint (penalty function)

1st Approach: Algebraic Solution for r Control Variables

  Example 1 (state + control):
  dim(x) = n x 1;  dim(A_r) = r x n;  dim(u) = m x 1;  dim(u_r) = r x 1;  dim(B_r) = r x r
  c = A_r x(t) + B_r u_r(t) = 0;  det(B_r) != 0
  u_r(t) = -B_r^{-1} A_r x(t)

  Example 2 (control only):
  dim(u) = m x 1;  dim(u_r) = r x 1;  dim(B_1) = r x r;  dim(B_2) = r x (m - r)
  c = [B_1  B_2] [u_r; u_{m-r}] = B_1 u_r + B_2 u_{m-r} = 0;  det(B_1) != 0
  u_r(t) = -B_1^{-1} B_2 u_{m-r}(t)

2nd Approach: Satisfy Constraint Using Left Pseudoinverse: r < m

  (\partial c/\partial u)^L is the left pseudoinverse of the control sensitivity,  dim[(\partial c/\partial u)^L] = m x r

  Lagrange multiplier:
  \mu_L^T = -(\partial L/\partial u + \lambda^T G) (\partial c/\partial u)^L

Pseudoinverse of a Matrix

  Matrix inverse if r = m (A is square and non-singular):
  y = A x  =>  x = A^{-1} y,  dim(x) = r x 1,  dim(y) = m x 1

  If r != m, A is not square; the maximum rank of A is r or m, whichever is smaller
  Use the pseudoinverse of A:  x = A^{#} y

Left and Right Pseudoinverse

  r < m: use the left pseudoinverse
  A x = y
  A^T A x = A^T y
  x = (A^T A)^{-1} A^T y \triangleq A^L y
  dim(A) = m x r,  dim(A^T A) = r x r,  dim(A A^T) = m x m
  "Averaging" (least-squares) solution

  r > m: use the right pseudoinverse
  A x = y = I y = A A^T (A A^T)^{-1} y
  x = A^T (A A^T)^{-1} y \triangleq A^R y
  Minimum-Euclidean-error-norm solution
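A sketch comparing the left- and right-pseudoinverse formulas above with numpy's general pseudoinverse; the matrices are arbitrary illustrative values, with "tall" and "wide" standing in for the m > r and r > m cases.

```python
import numpy as np

rng = np.random.default_rng(0)

A_tall = rng.standard_normal((5, 2))              # more equations than unknowns
AL = np.linalg.inv(A_tall.T @ A_tall) @ A_tall.T  # (A^T A)^{-1} A^T
assert np.allclose(AL, np.linalg.pinv(A_tall))
assert np.allclose(AL @ A_tall, np.eye(2))        # left inverse: A^L A = I

A_wide = rng.standard_normal((2, 5))              # more unknowns than equations
AR = A_wide.T @ np.linalg.inv(A_wide @ A_wide.T)  # A^T (A A^T)^{-1}
assert np.allclose(AR, np.linalg.pinv(A_wide))
assert np.allclose(A_wide @ AR, np.eye(2))        # right inverse: A A^R = I
```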

Necessary Conditions Using the Left Pseudoinverse (r < m)

  Optimality conditions:
  \lambda(t_f) = {\partial\phi[x(t_f)] / \partial x}^T
  \dot{\lambda}(t) = -[\partial L/\partial x + \lambda^T F + \mu_L^T \partial c/\partial x]^T
  \partial L/\partial u + \lambda^T G + \mu_L^T \partial c/\partial u = 0
  plus  c[x(t),u(t)] = 0

Third Approach: Penalty Function Provides Soft State-Control Equality Constraint (r < m)

  L = L_{original} + k c^T c,  k: scalar penalty weight

  Euler-Lagrange equations are adjusted accordingly:
  \lambda(t_f) = {\partial\phi[x(t_f)] / \partial x}^T
  \dot{\lambda}(t) = -[\partial L_{orig}/\partial x + \lambda^T F + 2 k c^T \partial c/\partial x]^T
  \partial L_{orig}/\partial u + \lambda^T G + 2 k c^T \partial c/\partial u = 0

  c[x(t),u(t)] = 0 is approached as the penalty weight k is increased

Equality Constraint on State Alone

  J_1 = \phi[x(t_f)] + \int_{t_0}^{t_f} { L[x,u] + \lambda^T(t) (f[x,u] - \dot{x}(t)) + \mu^T(t) c[x] } dt

  c[x(t)] = 0 in (t_0, t_f)

  Hamiltonian:  H = L + \lambda^T f + \mu^T c

  Constraint is insensitive to control perturbations to first order:
  \Delta c = (\partial c/\partial x) \Delta x + (\partial c/\partial u) \Delta u = (\partial c/\partial x) \Delta x + 0

Equality Constraint Has No Effect on Optimality Condition

  \partial H/\partial u = \partial L/\partial u + \lambda^T G + \mu^T \partial c/\partial u = \partial L/\partial u + \lambda^T G    (since \partial c/\partial u = 0)

  Solution: incorporate the time-derivative of c[x(t)] in the optimization

Introduce Time-Derivative of the Equality Constraint

  Define c[x(t)] as the zeroth-order equality constraint:
  c[x(t)] \triangleq c_0[x(t)] = 0

  Compute the first-order equality constraint:
  d c_0[x(t)]/dt = \partial c_0/\partial t + (\partial c_0/\partial x) (dx(t)/dt)
                 = \partial c_0/\partial t + (\partial c_0/\partial x) f[x(t),u(t)] \triangleq c_1[x(t),u(t)] = 0

Include the Derivative in the Optimality Condition

  \partial H/\partial u now includes the derivative of the equality constraint:
  \partial H[x,u,\lambda]/\partial u = \partial L[x,u]/\partial u + \lambda^T G[x,u] + \mu^T \partial c_1[x,u]/\partial u

  Must satisfy the zeroth-order constraint at one point on the path, e.g., c_0[x(t_0)] = 0 or c_0[x(t_f)] = 0

  If \partial c_1/\partial u = 0, differentiate again, and again, ... (see Supplemental Material)

Minimization with Inequality Constraints

Hard Inequality Constraint

  Constraint not in force for the entire trajectory
  Trajectory must not pass beyond the constraint

Inequality Constraints

  Different constraints invoked at different times
  Some constraints never invoked

Transversality Corner Conditions

  Juncture between unconstrained and constrained arcs
  Three Weierstrass-Erdmann conditions:
  \lambda(t_1^-) = \lambda(t_1^+)
  H(t_1^-) = H(t_1^+)
  \partial H/\partial u (t_1^-) = \partial H/\partial u (t_1^+)

Corners and Discontinuities

  Continuous \lambda(t) implies continuous u(t) at the juncture:
  u(t_1^-) = u(t_1^+),  \dot{x}(t_1^-) = \dot{x}(t_1^+)   ->  not a corner

  Discontinuous \lambda(t) implies discontinuous u(t) at the juncture:
  u(t_1^-) != u(t_1^+),  \dot{x}(t_1^-) != \dot{x}(t_1^+)   ->  corner

Soft Control Inequality Constraint

  L = L_{original} + k c,  k: scalar penalty weight

  Scalar example:
  c[u(t)] = u - u_{max},  u >= u_{max}
          = 0,            u_{min} < u < u_{max}
          = u - u_{min},  u <= u_{min}
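A sketch of the scalar soft-constraint penalty above. The bounds and weight are placeholders, and a quadratic penalty on the constraint value is used here as one reasonable choice rather than the lecture's exact weighting.

```python
import numpy as np

def control_penalty(u, u_min=-1.0, u_max=1.0, k=100.0):
    """Penalty added to L when u leaves [u_min, u_max]; zero inside the bounds."""
    c = np.where(u > u_max, u - u_max,
        np.where(u < u_min, u - u_min, 0.0))
    return k * c**2
```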

Numerical Optimization

Numerical Optimization Methods

Parametric Optimization

  min over u(t):  J = \phi[x(t_f)] + \int_{t_0}^{t_f} L[x(t),u(t)] dt
  subject to  \dot{x}(t) = f[x(t),u(t)],  x(t_0) given

  Control specified by a parameter vector, k
  No adjoint equations

  Examples:
  u(t) = k
  u(t) = k_0 + k_1 t + k_2 t^2 + ...,  k = [k_0  k_1  ...  k_m]^T
  u(t) = k_0 + k_1 \sin[\pi t/(t_f - t_0)] + k_2 \cos[\pi t/(t_f - t_0)],  k = [k_0  k_1  k_2]^T

Parametric Optimization

  Necessary and sufficient conditions for a minimum:
  \partial J/\partial k = 0,  \partial^2 J/\partial k^2 > 0

  Use a static search algorithm to find the minimizing control parameter, k
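A minimal sketch of parametric optimization under the ideas above: the control is the quadratic u(t) = k_0 + k_1 t + k_2 t^2, the cost is evaluated by integrating a toy scalar system (a stand-in for f[x,u], not the aircraft example), and a static search over k replaces the adjoint equations.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

t0, tf, x_init = 0.0, 2.0, 1.0

def u_of_t(t, k):
    return k[0] + k[1] * t + k[2] * t**2

def cost(k):
    # augment the state with the running cost L = x^2 + u^2
    def rhs(t, z):
        x = z[0]
        u = u_of_t(t, k)
        return [-x + u, x**2 + u**2]       # toy dynamics xdot = -x + u, plus Ldot
    sol = solve_ivp(rhs, (t0, tf), [x_init, 0.0], rtol=1e-8, atol=1e-10)
    x_tf, J_integral = sol.y[:, -1]
    return x_tf**2 + J_integral            # J = phi[x(tf)] + integral of L

result = minimize(cost, np.zeros(3), method="Nelder-Mead")
print(result.x, result.fun)                # minimizing parameter vector k and cost
```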

Parametric Optimization Example

  min over u(t):  J = \phi[x(t_f)] + \int_{t_0}^{t_f} L[x(t),u(t)] dt
  subject to  \dot{x}(t) = f[x(t),u(t)],  x(t_0) given

  u(t) \triangleq \alpha(t) = k_0 + k_1 t + k_2 t^2

  \dot{V} = [T_{max} - C_D (1/2) \rho V^2 S] / m - g \sin\gamma
  \dot{\gamma} = (1/V) { [C_{L_\alpha} (k_0 + k_1 t + k_2 t^2) (1/2) \rho V^2 S] / m - g \cos\gamma }
  \dot{h} = V \sin\gamma
  \dot{r} = V \cos\gamma
  \dot{m} = -SFC \cdot T

  Find k = [k_0  k_1  k_2]^T from  \partial J/\partial k = 0,  \partial^2 J/\partial k^2 > 0

Penalty Function Method (Balakrishnan's Epsilon Technique)

  No integration of the dynamic equation
  Parametric optimization of the state and control histories:
  x(t) \approx x(k_x, t),  u(t) \approx u(k_u, t),  dim(k_x) >= n,  dim(k_u) >= m

  Augment the integral cost function by the dynamic-equation error:
  min over u(t), x(t):
  J = \phi[x(t_f), t_f] + \int_{t_0}^{t_f} ( L[x(t),u(t),t] + (1/\epsilon) {f[x(t),u(t),t] - \dot{x}(t)}^T {f[x(t),u(t),t] - \dot{x}(t)} ) dt

  1/\epsilon is the penalty for not satisfying the dynamic constraint

Penalty Function Method (Taylor, Smith, and Iliff, 1969)

  Choose reasonable starting values of the state and control parameters
  (e.g., state and control satisfy the boundary conditions)

  Evaluate the cost function:
  J_0 = \phi[x_0(t_f)] + \int_{t_0}^{t_f} ( L[x_0(t),u_0(t),t] + (1/\epsilon) {f[x_0(t),u_0(t),t] - \dot{x}_0(t)}^T {f - \dot{x}_0} ) dt

  Update the state and control parameters, e.g., by steepest descent:
  k_{x,i+1} = k_{x,i} - \epsilon_x (\partial J/\partial k_x)^T |_{k_x = k_{x,i}}
  k_{u,i+1} = k_{u,i} - \epsilon_u (\partial J/\partial k_u)^T |_{k_u = k_{u,i}}
  x_{i+1}(t) \approx x(k_{x,i+1}, t),  u_{i+1}(t) \approx u(k_{u,i+1}, t)

  Re-evaluate the cost with a higher penalty; repeat to convergence:
  J_i -> J_{i+1} -> ... -> J*,  {f[x(t),u(t),t] - \dot{x}(t)} -> 0

Neighboring Extremal Method

  Shooting method: integrate both the state and the adjoint vector forward in time
  \dot{x}_{k+1}(t) = f[x_{k+1}(t), u_k(t)],  x(t_0) given,  initial guess for u_0(t)
  \dot{\lambda}_{k+1}(t) = -[\partial L/\partial x |_k + \lambda_{k+1}^T(t) F_k(t)]^T,  \lambda(t_0) given (guessed)

  Control satisfies the necessary condition:
  u_{k+1}(t) = fcn{ \partial H[x_k, u_k, \lambda_{k+1}, t] / \partial u }  such that  \partial L/\partial u |_k + \lambda_{k+1}^T(t) G_k(t) = 0

  ... but how do you know the initial value of the adjoint vector?

Neighboring Extremal Method

  All trajectories are optimal (i.e., extremals) for some cost function, because
  \partial H/\partial u = \partial L/\partial u + \lambda^T G = 0

  Integrating the state equation computes a value for x(t_f):
  x(t_f) = x(t_0) + \int_{t_0}^{t_f} f[x_{k+1}(t), u_k(t)] dt

  Use a learning rule to estimate the initial value of the adjoint vector, e.g.,
  \lambda_{k+1}(t_0) = \lambda_k(t_0) - \epsilon [\lambda_k(t_f) - \lambda_{desired}(t_f)]

Multiple Shooting Method

  Divide the total time interval into N subintervals, defined by grid points
  t_0 < t_1 < t_2 < ... < t_{N-1} < t_N = t_f

  Optimize between grid points, e.g.,
  min over u:  J = \sum_{n=1}^{N} min over u of J_n(t_{n-1}, t_n)

  Constrain/correct/iterate the final and initial states at the grid points
  Option: implement with a discrete-time model
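A structural sketch of multiple shooting (not a full solver): each segment between grid points is integrated from its own initial state, and the continuity defects between segments become equality constraints handed to the optimizer. The dynamics function f and the per-segment control parameters are placeholders.

```python
import numpy as np
from scipy.integrate import solve_ivp

def propagate(f, x_start, u_params, t_span):
    """Integrate xdot = f(t, x, u_params) over one shooting segment."""
    sol = solve_ivp(lambda t, x: f(t, x, u_params), t_span, x_start, rtol=1e-8)
    return sol.y[:, -1]

def shooting_defects(f, grid_times, seg_states, seg_controls):
    """Continuity defects x_end(segment n) - x_start(segment n+1), driven to zero."""
    defects = []
    for n in range(len(grid_times) - 2):
        x_end = propagate(f, seg_states[n], seg_controls[n],
                          (grid_times[n], grid_times[n + 1]))
        defects.append(x_end - seg_states[n + 1])
    return np.concatenate(defects)
```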

Parametric Optimization: Collocation

  Admissible controls occur at discrete times, indexed by k
  Cost and dynamic constraint are discretized:

  min over u_k:  J = \phi[x_{k_f}] + \sum_{k=0}^{k_f - 1} L_k[x_k, u_k]
  subject to  x_{k+1} = f_k[x_k, u_k],  x_0 given

  Pseudospectral optimal control: state and adjoint points may be connected by
  basis functions, e.g., Legendre polynomials, B-splines, ...
  The continuous solution is approached as the time interval is decreased

Legendre Polynomials

  Solutions to Legendre's differential equation

Parametric Optimization with Legendre Polynomials

  Optimizing control: find the minimizing values of k_n
  u(x) = k_0 P_0(x) + k_1 P_1(x) + k_2 P_2(x) + k_3 P_3(x) + k_4 P_4(x) + k_5 P_5(x),
  where x is the normalized time on (t_0, t_f)

  Can combine with the optimal shooting method

  P_0(x) = 1
  P_1(x) = x
  P_2(x) = (1/2)(3x^2 - 1)
  P_3(x) = (1/2)(5x^3 - 3x)
  P_4(x) = (1/8)(35x^4 - 30x^2 + 3)
  P_5(x) = (1/8)(63x^5 - 70x^3 + 15x)
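A sketch of a control history parameterized by the Legendre polynomials above, using numpy's Legendre class on x in [-1, 1]; the coefficient values are placeholders. The conversion to an ordinary power series mirrors the calculation shown in the Supplemental Material at the end of the lecture.

```python
import numpy as np
from numpy.polynomial import Legendre

k = np.array([0.5, -0.2, 0.1, 0.0, 0.05, -0.01])   # placeholder coefficients k0..k5

u_leg = Legendre(k, domain=[-1, 1])                # u(x) = sum_n k_n P_n(x)
print(u_leg(0.3))                                  # evaluate the control at x = 0.3

u_power = u_leg.convert(kind=np.polynomial.Polynomial)
print(u_power.coef)                                # equivalent a0..a5 of the power series
```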

Gradient-Based Methods

Gradient-Based Search Algorithms

  Steepest descent:
  u_{k+1}(t) = u_k(t) - \epsilon [\partial H/\partial u]_k^T(t)

  Newton-Raphson:
  u_{k+1}(t) = u_k(t) - [\partial^2 H/\partial u^2]_k^{-1}(t) [\partial H/\partial u]_k^T(t)

  Generalized direct search:
  u_{k+1}(t) = u_k(t) - K_k [\partial H/\partial u]_k^T(t)

Numerical Optimization Using the Steepest-Descent Algorithm

  Iterative, bidirectional procedure:
  - Forward solution to find the state, x(t)
  - Backward solution to find the adjoint vector, \lambda(t)
  - Steepest-descent adjustment of the control history, u(t)
  Use an educated guess for u_0(t) on the first iteration

  \dot{x}_k(t) = f[x_k(t), u_{k-1}(t)],  x(t_0) given

Numerical Optimization Using the Steepest-Descent Algorithm

  \lambda_k(t_f) = {\partial\phi[x(t_f)] / \partial x}^T
  \dot{\lambda}_k(t) = -[\partial H/\partial x]^T = -[\partial L/\partial x + \lambda_k^T(t) F_k(t)]^T    [Euler-Lagrange equations 2 and 1]
  Use x_{k-1}(t) and u_{k-1}(t) from the previous step

Numerical Optimization Using the Steepest-Descent Algorithm

  [\partial H/\partial u]_k = \partial L/\partial u + \lambda^T(t) G(t) |_k    [Euler-Lagrange equation 3]

  u_{k+1}(t) = u_k(t) - \epsilon [\partial H/\partial u]^T |_{u = u_k(t)}
             = u_k(t) - \epsilon [\partial L/\partial u + \lambda^T(t) G(t)]_k^T

  Use x(t), \lambda(t), and u(t) from the previous step
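A structural sketch of the forward-backward steepest-descent iteration above (not a full solver); integrate_forward, integrate_backward, and dHdu stand in for problem-specific routines that evaluate the state, adjoint, and gradient histories on a stored time grid.

```python
import numpy as np

def steepest_descent(u_hist, integrate_forward, integrate_backward, dHdu,
                     eps=0.1, iterations=50):
    """Iterate the three steps above: forward state, backward adjoint, gradient step."""
    for _ in range(iterations):
        x_hist = integrate_forward(u_hist)             # xdot = f[x, u], x(t0) given
        lam_hist = integrate_backward(x_hist, u_hist)  # lamdot = -(dH/dx)^T, lam(tf) = (dphi/dx)^T
        grad = dHdu(x_hist, u_hist, lam_hist)          # dH/du = dL/du + lam^T G on the grid
        u_hist = u_hist - eps * grad                   # u_{k+1}(t) = u_k(t) - eps (dH/du)^T
    return u_hist
```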

Finding the Best Steepest-Descent Gain

  u_k(t), 0 < t < t_f: best solution from the previous iteration, with cost J_{0k}

  Calculate the gradient, \partial H_k/\partial u (t), in (0, t_f)

  Steepest-descent calculation of cost #1:
  J_{1k}:  u_k(t) - \epsilon_k [\partial H_k/\partial u]^T(t),  0 < t < t_f

Finding the Best Steepest-Descent Gain

  Steepest-descent calculation of cost #2:
  J_{2k}:  u_k(t) - 2\epsilon_k [\partial H_k/\partial u]^T(t),  0 < t < t_f

  Fit a quadratic, J(\epsilon) = a_0 + a_1 \epsilon + a_2 \epsilon^2, through the three cost evaluations:
  [J_{0k}; J_{1k}; J_{2k}] = [1, 0, 0;  1, \epsilon_k, \epsilon_k^2;  1, 2\epsilon_k, (2\epsilon_k)^2] [a_0; a_1; a_2]

  Solve for a_0, a_1, and a_2
  Find \epsilon* that minimizes J(\epsilon)

Finding the Best Steepest-Descent Gain

  Best steepest-descent calculation of cost:
  J_{k+1}:  u_{k+1}(t) = u_k(t) - \epsilon_k* [\partial H_k/\partial u]^T(t),  0 < t < t_f

  Go to the next iteration
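A sketch of the gain search above: fit J(eps) = a_0 + a_1 eps + a_2 eps^2 through the costs at eps = 0, eps_k, and 2 eps_k, then take the vertex of the parabola as eps*. The sample cost values at the end are placeholders.

```python
import numpy as np

def best_gain(J0, J1, J2, eps_k):
    epss = np.array([0.0, eps_k, 2.0 * eps_k])
    V = np.vander(epss, 3, increasing=True)      # rows [1, eps, eps^2]
    a0, a1, a2 = np.linalg.solve(V, np.array([J0, J1, J2]))
    if a2 <= 0.0:                                # no interior minimum; keep best sample
        return epss[np.argmin([J0, J1, J2])]
    return -a1 / (2.0 * a2)                      # vertex of the fitted parabola

print(best_gain(10.0, 7.5, 8.0, 0.2))            # placeholder cost samples
```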

Steepest-Descent Algorithm for a Problem with a Terminal Constraint

  min over u(t):  J = \phi[x(t_f)] + \int_{t_0}^{t_f} L[x(t),u(t)] dt
  subject to the scalar terminal constraint  \psi[x(t_f)] = 0

  \partial H^C/\partial u = \partial H^0/\partial u - (a/b) \partial H^1/\partial u = 0
  (see Lecture 3 for the definitions of a and b; Bryson & Ho, 1969)

  Choose u_{k+1}(t) such that
  u_{k+1}(t) = u_k(t) - \epsilon [\partial H^C/\partial u]^T |_{u = u_k(t)}
             = u_k(t) - \epsilon { [\partial L/\partial u + \lambda^{0T}(t) G_k(t)]^T + G_k^T(t) \lambda^1(t) (1/b_k) \psi_k[x(t_f)] }

Optimal Treatment of an Infection

Model of Infection and Immune Response

  x_1 = Concentration of a pathogen, which displays antigen
  x_2 = Concentration of plasma cells, which are carriers and producers of antibodies
  x_3 = Concentration of antibodies, which recognize antigen and kill pathogen
  x_4 = Relative characteristic of a damaged organ [0 = healthy, 1 = dead]

Infection Dynamics

  Fourth-order ordinary differential equation, including effects of therapy (control):

  \dot{x}_1 = (a_{11} - a_{12} x_3) x_1 + b_1 u_1 + w_1
  \dot{x}_2 = a_{21}(x_4) a_{22} x_1 x_3 - a_{23} (x_2 - x_2*) + b_2 u_2 + w_2
  \dot{x}_3 = a_{31} x_2 - (a_{32} + a_{33} x_1) x_3 + b_3 u_3 + w_3
  \dot{x}_4 = a_{41} x_1 - a_{42} x_4 + b_4 u_4 + w_4
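A sketch of the fourth-order infection model above. The coefficients, the control-effect signs, the plasma-cell equilibrium x2*, and the cosine-cutoff form chosen for a_21(x_4) are illustrative assumptions, not values taken from the lecture.

```python
import numpy as np

def infection_dynamics(t, x, u, w=np.zeros(4),
                       a11=1.0, a12=1.0, a22=3.0, a23=1.0, a31=1.0,
                       a32=1.5, a33=0.5, a41=0.5, a42=1.0,
                       b=(-1.0, 1.0, 1.0, -1.0), x2_star=2.0):
    x1, x2, x3, x4 = x               # pathogen, plasma cells, antibodies, organ damage
    a21 = np.cos(np.pi * x4) if x4 <= 0.5 else 0.0   # immune response fades with organ damage
    dx1 = (a11 - a12 * x3) * x1 + b[0] * u[0] + w[0]
    dx2 = a21 * a22 * x1 * x3 - a23 * (x2 - x2_star) + b[1] * u[1] + w[1]
    dx3 = a31 * x2 - (a32 + a33 * x1) * x3 + b[2] * u[2] + w[2]
    dx4 = a41 * x1 - a42 * x4 + b[3] * u[3] + w[3]
    return np.array([dx1, dx2, dx3, dx4])
```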

Uncontrolled Response to Infection

Cost Function to be Minimized by Optimal Therapy

  J = (1/2) (p_{11} x_{1_f}^2 + p_{44} x_{4_f}^2) + (1/2) \int_{t_0}^{t_f} (q_{11} x_1^2 + q_{44} x_4^2 + r u^2) dt

  Tradeoffs between final values, integral values over a fixed time interval, state, and control

  The cost function includes weighted square values of:
  - Final concentration of the pathogen
  - Final health of the damaged organ (0 is good, 1 is bad)
  - Integral of the pathogen concentration
  - Integral health of the damaged organ (0 is good, 1 is bad)
  - Integral of drug usage

  Drug cost may reflect physiological cost (side effects) or financial cost

Examples of Optimal Therapy

  Unit cost weights
  u_1 = Pathogen killer
  u_2 = Plasma cell enhancer
  u_3 = Antibody enhancer
  u_4 = Organ health enhancer

Next Time: Minimum-Time and -Fuel Problems

  Reading: OCE, Section 3.5

Supplemental Material

Left Pseudoinverse Example

  A = [1; 3],  y = [2; 6]
  A x = y,  x = A^L y = (A^T A)^{-1} A^T y    (r < m)

  x = ([1  3][1; 3])^{-1} [1  3][2; 6] = (1/10)(20) = 2

  Unique solution

Minimum-Error-Norm Solution

  Euclidean error norm for the linear equation:
  ||A x - y||_2^2 = [A x - y]^T [A x - y],   dim(x) = r x 1,  dim(y) = m x 1,  r > m

  Necessary condition for minimum error:
  \partial/\partial x ||A x - y||_2^2 = 2 [A x - y]^T A = 0

  Express x with the right pseudoinverse, x = A^T (A A^T)^{-1} y:
  2 [A A^T (A A^T)^{-1} y - y]^T A = 2 [y - y]^T A = 0

  Therefore, x is the minimizing solution, as long as A A^T is non-singular

Right Pseudoinverse Example

  A = [1  3],  y = 14
  A x = y,  r > m:  x = A^T (A A^T)^{-1} y

  x = [1; 3] (1/10) (14) = [1.4; 4.2]

  At least two solutions satisfy the equation, e.g., [x_1; x_2] = [2; 4] and [1.4; 4.2],
  but ||x||^2 = 20 for the first and ||x||^2 = 19.6 for the second

  The right pseudoinverse provides the minimum-norm solution
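The two supplemental examples can be checked directly with numpy's pseudoinverse; the norm comparison reproduces the 19.6 versus 20 figures above.

```python
import numpy as np

# Left pseudoinverse: A = [1; 3], y = [2; 6]  ->  x = 2 (unique least-squares solution)
A = np.array([[1.0], [3.0]])
y = np.array([2.0, 6.0])
print(np.linalg.pinv(A) @ y)                  # [2.]

# Right pseudoinverse: A = [1 3], y = 14  ->  x = [1.4, 4.2], the minimum-norm solution
A2 = np.array([[1.0, 3.0]])
x_pinv = np.linalg.pinv(A2) @ np.array([14.0])
x_alt = np.array([2.0, 4.0])                  # another solution of A2 x = 14
print(x_pinv, x_pinv @ x_pinv, x_alt @ x_alt) # [1.4 4.2], 19.6, 20.0
```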

No Optimization When r = m

  MAINTAIN CONSTANT VELOCITY AND FLIGHT PATH ANGLE

  c[x(t),u(t)] = 0:
  0 = \dot{V} = [T_{max,SL} e^{-\beta h} \delta T - (C_{D_0} + \epsilon C_{L_\alpha}^2 \alpha^2) (1/2) \rho V^2 S] / m - g \sin\gamma
  0 = \dot{\gamma} = (1/V) { [C_{L_\alpha} \alpha (1/2) \rho V^2 S] / m - g \cos\gamma }

  =>  u(t) = fcn[x(t)],  V = V_{desired},  \gamma = \gamma_{desired}

Examples of Equality Constraints

  c[x(t),u(t)] = 0:  Pitch moment = 0 = fcn(Mach number, Stabilator trim angle)
  c[u(t)] = 0:  Stabilator trim angle - constant = 0
  c[x(t)] = 0:  Altitude - constant = 0

Example of Equality Constraint on State Alone

  MINIMIZE FUEL AND CONTROL USE WHILE MAINTAINING CONSTANT ALTITUDE

  min over u(t):  J = \int_{t_0}^{t_f} (q \dot{m}^2 + r_1 u_1^2 + r_2 u_2^2) dt

  c[x(t),u(t)] = c[x(t)] = 0 = h(t) - h_{desired}

State Equality Constraint Example

  c[x(t)] \triangleq c_0[x(t)] = 0 = h(t) - h_{desired}

  No control in the constraint; differentiate:
  c_1[x(t)] = d c_0[x(t)]/dt = \partial c_0/\partial t + (\partial c_0/\partial x) f[x(t),u(t)] = 0 = \dot{h}(t) = V(t) \sin\gamma(t)

  Still no control in the constraint; differentiate again

State Equality Constraint Example

  Still no control in the constraint; differentiate again:
  c_2[x(t),u(t)] = d c_1[x(t)]/dt = \partial c_1/\partial t + (\partial c_1/\partial x) f[x(t),u(t)] = 0 = \ddot{h}(t)
                 = (dV/dt) \sin\gamma(t) + V(t) d(\sin\gamma(t))/dt
                 = { [\delta T T_{max,SL} e^{-\beta h} - C_D (1/2) \rho V^2 S] / m - g \sin\gamma(t) } \sin\gamma(t)
                   + { [C_{L_\alpha} \alpha (1/2) \rho V^2 S] / m - g \cos\gamma(t) } \cos\gamma(t)

State Equality Constraint Example: Success

  Control appears in the 2nd-order equality constraint:
  c_2[x(t),u(t)] = \ddot{h} = { [\delta T T_{max,SL} e^{-\beta h} - C_D (1/2) \rho V^2 S] / m - g \sin\gamma(t) } \sin\gamma(t)
                             + { [C_{L_\alpha} \alpha (1/2) \rho V^2 S] / m - g \cos\gamma(t) } \cos\gamma(t)

  H = L + \lambda^T f + \mu^T c_2

  0th- and 1st-order constraints must be satisfied at some point on the trajectory, e.g., at t_0:
  c_0[x(t_0)] = 0  =>  h(t_0) = h_{desired}
  c_1[x(t_0)] = 0  =>  \gamma(t_0) = 0
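A sketch that repeats the differentiation above symbolically with sympy: neither c_0 nor c_1 contains the control, but the angle of attack appears once c_2 is formed along the dynamics. The symbol names and the omission of the range state are illustrative choices.

```python
import sympy as sp

V, gamma, h, m, alpha, dT = sp.symbols('V gamma h m alpha deltaT')
rho_SL, beta, S, CD0, eps, CLa, Tmax, g, SFC, h_des = sp.symbols(
    'rho_SL beta S C_D0 epsilon C_Lalpha T_maxSL g SFC h_des')

rho = rho_SL * sp.exp(-beta * h)
T = dT * Tmax * sp.exp(-beta * h)
CL = CLa * alpha
CD = CD0 + eps * CL**2
f = {V: (T - CD * rho * V**2 * S / 2) / m - g * sp.sin(gamma),
     gamma: ((CL * rho * V**2 * S / 2) / m - g * sp.cos(gamma)) / V,
     h: V * sp.sin(gamma),
     m: -SFC * T}

def d_dt(c):                         # dc/dt = (dc/dx) f[x, u] for the states above
    return sum(sp.diff(c, s) * f[s] for s in f)

c0 = h - h_des
c1 = d_dt(c0)                        # = V sin(gamma), still control-free
c2 = d_dt(c1)                        # control (alpha, deltaT) now appears
print(alpha in c1.free_symbols, alpha in c2.free_symbols)   # False True
```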

Control History Optimized with Legendre Polynomials

  Could be expressed as a simple power series:
  u*(x) = k_0* P_0(x) + k_1* P_1(x) + k_2* P_2(x) + k_3* P_3(x) + k_4* P_4(x) + k_5* P_5(x)
  u*(x) = a_0* + a_1* x + a_2* x^2 + a_3* x^3 + a_4* x^4 + a_5* x^5

  a_0* = k_0* - (1/2) k_2* + (3/8) k_4*
  a_1* = k_1* - (3/2) k_3* + (15/8) k_5*
  a_2* = (3/2) k_2* - (30/8) k_4*
  ...

  P_0(x) = 1
  P_1(x) = x
  P_2(x) = (1/2)(3x^2 - 1)
  P_3(x) = (1/2)(5x^3 - 3x)
  P_4(x) = (1/8)(35x^4 - 30x^2 + 3)
  P_5(x) = (1/8)(63x^5 - 70x^3 + 15x)

Zero-Gradient Algorithm for Quadratic Control Cost

  min over u(t):  J = \phi[x(t_f)] + \int_{t_0}^{t_f} { L[x(t)] + (1/2) u^T(t) R u(t) } dt

  H[x,u,\lambda,t] = L[x(t)] + (1/2) u^T(t) R u(t) + \lambda^T f[x(t),u(t)]

  Optimality condition:
  \partial H/\partial u (t) = u^T(t) R + \lambda^T(t) G(t) = 0

Zero-Gradient Algorithm for Quadratic Control Cost

  \partial H/\partial u (t) = u^T(t) R + \lambda^T(t) G(t) = 0

  Optimal control, u*(t):
  u*^T(t) R = -\lambda*^T(t) G*(t)  =>  u*(t) = -R^{-1} G*^T(t) \lambda*(t)

  u_k(t) and \lambda_k(t) are sub-optimal before convergence, and G_k and the optimal control
  cannot be computed in a single step; choose u_{k+1}(t) such that
  u_{k+1}(t) = (1 - \epsilon) u_k(t) - \epsilon R^{-1} G_k^T(t) \lambda_k(t)
  \epsilon: relaxation parameter, \epsilon < 1
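A sketch of the relaxation update above, applied pointwise on a stored time grid; R, the control-effect histories G_k(t), and the adjoint histories are placeholders for problem-specific data.

```python
import numpy as np

def zero_gradient_step(u_hist, lam_hist, G_hist, R, eps=0.5):
    """u_{k+1}(t) = (1 - eps) u_k(t) - eps R^{-1} G_k^T(t) lam_k(t), with eps < 1."""
    R_inv = np.linalg.inv(R)
    u_new = np.empty_like(u_hist)
    for i, (u, lam, G) in enumerate(zip(u_hist, lam_hist, G_hist)):
        u_new[i] = (1.0 - eps) * u - eps * R_inv @ G.T @ lam
    return u_new
```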

Stopping Conditions for Numerical Optimization

  - Computed total cost, J, reaches a theoretical minimum, e.g., zero:  J_{k+1} = 0
  - Convergence of J is essentially complete:  J_{k+1} > J_k - \epsilon
  - Control gradient, \partial H/\partial u (t), is essentially zero throughout [t_0, t_f]:
    \partial H/\partial u |_{k+1}(t) = 0 \pm \epsilon in (t_0, t_f), or \int_{t_0}^{t_f} \partial H/\partial u |_{k+1}^T \partial H/\partial u |_{k+1} dt <= \epsilon
  - Terminal cost/constraint is satisfied, and the integral cost is good enough:
    \psi_{k+1}(t_f) = 0 (or 0 \pm \epsilon), and \int_{t_0}^{t_f} L[x(t),u(t)] dt < \epsilon

Effects of Increased Drug Cost (r = 100)
