Dynamic Optimal Control!


Dynamic Optimal Control!
Robert Stengel
Robotics and Intelligent Systems, MAE 345, Princeton University, 2017

Learning Objectives
- Examples of cost functions
- Necessary conditions for optimality
- Calculation of optimal trajectories
- Design of optimal feedback control laws

Copyright 2017 by Robert Stengel. All rights reserved. For educational use only.

Integrated Effect Can Be a Scalar Cost
- Time: $J = \int_0^{t_f} (1)\, dt$
- Fuel: $J = \int_0^{r_f} (\text{fuel use per kilometer})\, dr$
- Financial cost of time and fuel: $J = \int_0^{t_f} \left[ \frac{\text{cost}}{\text{hour}} + \frac{\text{cost}}{\text{liter}} \cdot \frac{\text{liter}}{\text{kilometer}} \cdot \frac{dr}{dt} \right] dt$

Cost Accumulates from Start to Finish
$J = \int_0^{t_f} (\text{fuel flow rate})\, dt$

Optimal System Regulation
Cost functions that penalize state deviations over a time interval:
- Quadratic scalar variation: $J = \lim_{T \to \infty} \frac{1}{T} \int_0^T x^2\, dt < \infty$
- Vector variation: $J = \lim_{T \to \infty} \frac{1}{T} \int_0^T x^T x\, dt < \infty$
- Weighted vector variation: $J = \lim_{T \to \infty} \frac{1}{T} \int_0^T x^T Q x\, dt < \infty$
There is no penalty for control use. Why not use infinite control?
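The averaged, weighted quadratic cost can be evaluated numerically for a concrete trajectory. The two-state system, weighting matrix, and initial condition below are my own illustrative choices, not an example from the slides:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import trapezoid

# Hypothetical stable 2-state system: x(t) = expm(F t) x0, sampled on a grid.
# Illustrates the weighted quadratic cost (1/T) * integral of x' Q x dt.
F = np.array([[0.0, 1.0], [-2.0, -3.0]])   # stable: eigenvalues -1, -2
x0 = np.array([1.0, 0.0])
Q = np.diag([1.0, 0.1])                    # state weighting matrix

T = 20.0
t = np.linspace(0.0, T, 2001)
x = np.array([expm(F * ti) @ x0 for ti in t])    # trajectory samples
integrand = np.einsum("ti,ij,tj->t", x, Q, x)    # x'(t) Q x(t) at each time
J = trapezoid(integrand, t) / T                  # averaged quadratic cost

print(f"averaged cost J = {J:.4f}")
```

Because the trajectory decays, the integral is finite and the time-averaged cost shrinks as T grows, which is why the limit in the definitions above exists for a regulated system.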

Cement Kiln

Pulp and Paper Machines
- Machine length: ~2 football fields
- Paper speed ≈ 2,200 m/min (≈ 80 mph)
- Maintain 3-D paper quality
- Avoid paper breaks at all cost!

Paper-Making Machine Operation

Hazardous Waste Generated by Large Industrial Plants
- Cement dust
- Coal fly ash
- Metal emissions
- Dioxin
- Electroscrap and other hazardous waste
- Waste chemicals
- Ground-water contamination
- Ancillary mining and logging issues
- Greenhouse gases
Need to optimize the total cost-benefit of production processes (including health, environmental, and regulatory cost).

Tradeoffs Between Performance and Control in an Integrated Cost Function
- Trade performance against control usage
- Minimize a cost function that contains state and control (r: relative importance of the two)

Dynamic Optimization: The Optimal Control Problem
Minimize a scalar function, J, of terminal and integral costs,
$\min_{u(t)} J = \min_{u(t)} \left\{ \phi[x(t_f)] + \int_{t_o}^{t_f} L[x(t), u(t)]\, dt \right\}$
with respect to the control, u(t), in $(t_o, t_f)$, subject to a dynamic constraint
$\frac{dx(t)}{dt} = f[x(t), u(t)], \quad x(t_o)$ given
$\dim(x) = n \times 1; \quad \dim(f) = n \times 1; \quad \dim(u) = m \times 1$

Example of Dynamic Optimization
[Figure: optimal control profiles of thrust (N) and angle of attack (deg), and resulting fuel used (kg)]
Any deviation from the optimal thrust and angle-of-attack profiles would increase the total fuel used.

Components of the Cost Function
Integral cost is a function of the state and control from start to finish:
$\int_{t_o}^{t_f} L[x(t), u(t)]\, dt$, a positive scalar function of two vectors
$L[x(t), u(t)]$: Lagrangian of the cost function
Terminal cost is a function of the state at the final time:
$\phi[x(t_f)]$, a positive scalar function of a vector

Components of the Cost Function
Lagrangian examples:
$L[x(t), u(t)] = \frac{dm}{dt}$ (e.g., fuel flow rate)
$L[x(t), u(t)] = \frac{1}{2}\left[ x^T(t) Q x(t) + u^T(t) R u(t) \right]$
Terminal cost examples:
$\phi[x(t_f)] = x(t_f) - x_{Goal}$
$\phi[x(t_f)] = \frac{1}{2}\left[ x(t_f) - x_{Goal} \right]^2$

Example: Dynamic Model of Infection and Immune Response
- x1 = Concentration of a pathogen, which displays antigen
- x2 = Concentration of plasma cells, which are carriers and producers of antibodies
- x3 = Concentration of antibodies, which recognize antigen and kill the pathogen
- x4 = Relative characteristic of a damaged organ [0 = healthy, 1 = dead]

Natural Response to Pathogen Assault (No Therapy)
[Figure: untreated time histories of the four state variables]

Cost Function Considers Infection, Organ Health, and Drug Usage
$\min_{u(t)} J = \min_{u(t)} \left\{ \phi[x(t_f)] + \int_{t_o}^{t_f} L[x(t), u(t)]\, dt \right\}$
$= \min_u \left\{ \frac{1}{2}\left( s_{11} x_1^2(t_f) + s_{44} x_4^2(t_f) \right) + \frac{1}{2} \int_{t_o}^{t_f} \left( q_{11} x_1^2 + q_{44} x_4^2 + r u^2 \right) dt \right\}$
Tradeoffs between final values, integral values over a fixed time interval, state, and control.
The cost function includes weighted square values of:
- Final concentration of the pathogen
- Final health of the damaged organ (0 is good, 1 is bad)
- Integral of the pathogen concentration
- Integral health of the damaged organ (0 is good, 1 is bad)
- Integral of drug usage
Drug cost may reflect physiological or financial cost.

Necessary Conditions for Optimal Control!

Augment the Cost Function
Must express the sensitivity of the cost to the dynamic response:
- Adjoin the dynamic constraint to the integrand using a Lagrange multiplier, $\lambda(t)$, of the same dimension as the dynamic constraint, [n x 1]
- The constraint term equals zero when the dynamic equation is satisfied
$J = \phi[x(t_f)] + \int_{t_o}^{t_f} \left\{ L[x(t), u(t)] + \lambda^T(t) \left( f[x(t), u(t)] - \frac{dx(t)}{dt} \right) \right\} dt$
The optimization goal is to minimize J with respect to u(t) in $(t_o, t_f)$:
$\min_{u(t)} J = J^* = \phi[x^*(t_f)] + \int_{t_o}^{t_f} \left\{ L[x^*(t), u^*(t)] + \lambda^{*T}(t) \left( f[x^*(t), u^*(t)] - \frac{dx^*(t)}{dt} \right) \right\} dt$

Substitute the Hamiltonian in the Cost Function
Define the Hamiltonian, H[.]:
$H(x, u, \lambda) \triangleq L(x, u) + \lambda^T f(x, u)$
Substitute the Hamiltonian in the cost function:
$J = \phi[x(t_f)] + \int_{t_o}^{t_f} \left\{ H[x(t), u(t), \lambda(t)] - \lambda^T(t) \frac{dx(t)}{dt} \right\} dt$
The optimal cost, J*, is produced by the optimal histories of the state, control, and Lagrange multiplier:
$J^* = \phi[x^*(t_f)] + \int_{t_o}^{t_f} \left\{ H[x^*(t), u^*(t), \lambda^*(t)] - \lambda^{*T}(t) \frac{dx^*(t)}{dt} \right\} dt$

Integration by Parts
Scalar indefinite integral: $\int u\, dv = uv - \int v\, du$
Vector definite integral, with $u = \lambda^T(t)$ and $dv = \dot{x}(t)\, dt = dx$:
$\int_{t_o}^{t_f} \lambda^T(t) \frac{dx(t)}{dt}\, dt = \lambda^T(t) x(t) \Big|_{t_o}^{t_f} - \int_{t_o}^{t_f} \frac{d\lambda^T(t)}{dt} x(t)\, dt$

The Optimal Control Solution
Along the optimal trajectory, the cost, J*, should be insensitive to small variations in control policy. To first order,
$\Delta J^* = \left[ \frac{\partial \phi}{\partial x} - \lambda^T \right] \Delta x \Big|_{t = t_f} + \left[ \lambda^T \Delta x \right]_{t = t_o} + \int_{t_o}^{t_f} \left\{ \left[ \frac{\partial H}{\partial x} + \frac{d\lambda^T}{dt} \right] \Delta x + \frac{\partial H}{\partial u} \Delta u \right\} dt = 0$
where $\Delta x$ is the arbitrary perturbation in the state due to the perturbation in the control over the time interval $(t_o, t_f)$. Setting $\Delta J^* = 0$ leads to three necessary conditions for optimality.
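The integration-by-parts identity can be spot-checked numerically. The scalar pair $\lambda(t) = e^{-t}$, $x(t) = \sin t$ below is an arbitrary illustrative choice:

```python
import numpy as np
from scipy.integrate import trapezoid

# Numerical check of the identity
#   int lam * xdot dt = lam*x | t0..tf - int lamdot * x dt
# on [0, 2], using hypothetical functions with known derivatives.
t = np.linspace(0.0, 2.0, 20001)
lam, lamdot = np.exp(-t), -np.exp(-t)   # lambda(t) and its derivative
x, xdot = np.sin(t), np.cos(t)          # x(t) and its derivative

lhs = trapezoid(lam * xdot, t)
rhs = (lam[-1] * x[-1] - lam[0] * x[0]) - trapezoid(lamdot * x, t)
print(lhs, rhs)   # the two sides agree to discretization error
```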

Three Conditions for Optimality
Individual terms should remain zero for arbitrary variations in $\Delta x(t)$ and $\Delta u(t)$:
1) $\left[ \frac{\partial \phi}{\partial x} - \lambda^T \right]_{t = t_f} = 0$ — terminal condition for the Lagrange multiplier
2) $\frac{\partial H}{\partial x} + \frac{d\lambda^T(t)}{dt} = 0$ in $(t_o, t_f)$ — solution for the Lagrange multiplier, $\lambda^*(t)$
3) $\frac{\partial H}{\partial u} = 0$ in $(t_o, t_f)$ — insensitivity to control variation, defining $u^*(t)$

Iterative Numerical Optimization Using Steepest Descent
- Forward solution to find the state, x(t)
- Backward solution to find the Lagrange multiplier, $\lambda(t)$
- Steepest-descent adjustment of the control history, u(t)
- Iterate to find the optimal solution
$\dot{x}_k(t) = f[x_k(t), u_{k-1}(t)]$, with $x(t_o)$ given; $u_{k-1}(t)$ prescribed in $(t_o, t_f)$; k = iteration index
Use an educated guess for u(t) on the first iteration.

Numerical Optimization Using Steepest Descent
Backward solution to find the Lagrange multiplier, $\lambda(t)$:
$\lambda_k(t_f) = \left[ \frac{\partial \phi[x_k(t_f)]}{\partial x} \right]^T$ — boundary condition at the final time, calculated from the terminal value of the state
$\frac{d\lambda_k(t)}{dt} = -\left[ \frac{\partial H(x_k, u_k, \lambda_k)}{\partial x} \right]^T = -\left[ \frac{\partial L(x, u)}{\partial x} + \lambda_k^T(t) \frac{\partial f(x, u)}{\partial x} \right]^T_{x(t) = x_k(t),\, u(t) = u_{k-1}(t)}$

Numerical Optimization Using Steepest Descent
Steepest-descent adjustment of the control history:
$u_k(t) = u_{k-1}(t) - \varepsilon \left( \frac{\partial H}{\partial u} \right)^T_k = u_{k-1}(t) - \varepsilon \left[ \frac{\partial L}{\partial u} + \lambda_k^T(t) \frac{\partial f}{\partial u} \right]^T_{x(t) = x_k(t),\, u(t) = u_{k-1}(t)}$
$\varepsilon$: steepest-descent gain
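The forward/backward/update loop above can be sketched for a hypothetical scalar problem. The dynamics $\dot{x} = -x + u$ and the weights s, q, r are my own illustrative choices, not an example from the slides:

```python
import numpy as np

# Steepest-descent sketch for a hypothetical scalar problem:
#   xdot = -x + u,  J = 0.5*s*x(tf)^2 + 0.5*int(q x^2 + r u^2) dt
# Forward-integrate x, backward-integrate lambda, then step u opposite H_u.
s, q, r = 5.0, 1.0, 1.0
tf, n = 5.0, 1000
dt = tf / n
u = np.zeros(n + 1)            # educated first guess: no control
x0 = 1.0

def rollout(u):
    """Forward Euler for the state, plus the discretized cost."""
    x = np.empty(n + 1); x[0] = x0
    for k in range(n):
        x[k + 1] = x[k] + dt * (-x[k] + u[k])
    J = 0.5 * s * x[-1] ** 2 + 0.5 * dt * np.sum(q * x ** 2 + r * u ** 2)
    return x, J

costs = []
eps = 0.2                      # steepest-descent gain
for it in range(200):
    x, J = rollout(u)
    costs.append(J)
    lam = np.empty(n + 1); lam[-1] = s * x[-1]   # terminal condition
    for k in range(n, 0, -1):  # backward sweep: lamdot = -dH/dx = -q*x + lam
        lam[k - 1] = lam[k] - dt * (-q * x[k] + lam[k])
    u = u - eps * (r * u + lam)   # u <- u - eps * (dH/du)
print(f"J0 = {costs[0]:.4f}, J_final = {costs[-1]:.4f}")
```

Each iteration performs one forward rollout, one backward adjoint sweep from $\lambda(t_f) = s\, x(t_f)$, and one gradient step; for a small enough gain $\varepsilon$ the cost decreases toward the optimum.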

Optimal Treatment of an Infection!

Dynamic Model for the Infection Treatment Problem
$\frac{dx(t)}{dt} = f[x(t), u(t)], \quad x(t_o)$ given
Nonlinear dynamics of innate immune response and drug effect:
$\dot{x}_1 = (a_{11} - a_{12} x_3) x_1 + b_1 u_1$
$\dot{x}_2 = a_{21}(x_4)\, a_{22} x_1 x_3 - a_{23} (x_2 - x_2^*) + b_2 u_2$
$\dot{x}_3 = a_{31} x_2 - (a_{32} + a_{33} x_1) x_3 + b_3 u_3$
$\dot{x}_4 = a_{41} x_1 - a_{42} x_4 + b_4 u_4$
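The untreated (u = 0) response can be sketched by integrating the model above. The coefficients below are illustrative assumptions only — not the published parameter values — and $a_{21}(x_4)$ is taken constant for simplicity:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Simulation sketch of the innate-immune-response model with hypothetical
# coefficients chosen so the untreated response clears a mild infection.
a11, a12 = 1.0, 1.0
a21, a22, a23, x2_star = 1.0, 3.0, 1.0, 2.0
a31, a32, a33 = 1.0, 1.5, 0.5
a41, a42 = 0.1, 1.0
b = np.zeros(4)                 # no therapy: u(t) = 0

def f(t, x, u=np.zeros(4)):
    x1, x2, x3, x4 = x
    return [(a11 - a12 * x3) * x1 + b[0] * u[0],
            a21 * a22 * x1 * x3 - a23 * (x2 - x2_star) + b[1] * u[1],
            a31 * x2 - (a32 + a33 * x1) * x3 + b[2] * u[2],
            a41 * x1 - a42 * x4 + b[3] * u[3]]

# mild initial infection; plasma cells and antibodies at their resting values
x0 = [1.0, 2.0, 4.0 / 3.0, 0.0]
sol = solve_ivp(f, (0.0, 10.0), x0, max_step=0.05)
print("final state:", np.round(sol.y[:, -1], 3))
```

With these numbers the resting antibody level exceeds the pathogen's growth threshold, so the pathogen concentration x1 decays without therapy; a larger initial x1 or weaker coefficients would produce the lethal trajectories that motivate treatment.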

Optimal Treatment with Four Drugs (separately), Unit Cost Weights
$J = \frac{1}{2}\left( s_{11} x_1^2(t_f) + s_{44} x_4^2(t_f) \right) + \frac{1}{2} \int_{t_o}^{t_f} \left( q_{11} x_1^2 + q_{44} x_4^2 + r u^2 \right) dt$
[Figure: optimized state and control histories for each drug]

Increased Cost of Drug Use
[Figure: optimized histories with an increased control cost weight; the numerical value was lost in transcription]

Accounting for Uncertainty in Initial Condition!

Account for Uncertainty in Initial Condition and Unknown Disturbances
Nominal, open-loop optimal control:
$u^*(t) = u_{opt}(t)$
Neighboring-optimal (feedback) control:
$u(t) = u^*(t) + \Delta u(t)$
$\Delta u(t) = -C(t)\left[ x(t) - x^*(t) \right]$

Neighboring-Optimal Control
Linearize the dynamic equation:
$\dot{x}(t) = \dot{x}^*(t) + \Delta\dot{x}(t) = f\{[x^*(t) + \Delta x(t)], [u^*(t) + \Delta u(t)]\} \approx f[x^*(t), u^*(t)] + F(t) \Delta x(t) + G(t) \Delta u(t)$
Sum the nominal optimal control history and the optimal perturbation control for neighboring-optimal control:
$u^*(t) = u_{opt}(t)$
$\Delta u(t) = -C(t)\left[ x(t) - x_{opt}(t) \right]$
$u(t) = u_{opt}(t) + \Delta u(t)$

Optimal Feedback Gain, C(t)
Solution of the Euler-Lagrange equations for a linear dynamic system and quadratic cost function leads to a linear, time-varying (LTV) optimal feedback controller:
$\Delta u^*(t) = -C^*(t) \Delta x(t)$, where $C^*(t) = R^{-1} G^T(t) S(t)$
$\dot{S}(t) = -F^T(t) S(t) - S(t) F(t) + S(t) G(t) R^{-1} G^T(t) S(t) - Q, \quad S(t_f) = S_f$
Matrix Riccati equation (see Supplemental Material for derivation)
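The backward sweep of the matrix Riccati equation can be sketched numerically. The double-integrator plant and weights below are illustrative assumptions, not the infection example:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Backward integration of the matrix Riccati equation
#   Sdot = -F'S - SF + S G R^{-1} G' S - Q,  S(tf) = Sf
# for a hypothetical double-integrator plant; gain C(t) = R^{-1} G' S(t).
F = np.array([[0.0, 1.0], [0.0, 0.0]])
G = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
Sf = 10.0 * np.eye(2)
tf = 10.0

def riccati(t, s_flat):
    S = s_flat.reshape(2, 2)
    Sdot = -F.T @ S - S @ F + S @ G @ np.linalg.solve(R, G.T) @ S - Q
    return Sdot.ravel()

# integrate backward in time from tf to 0 (decreasing t_span)
sol = solve_ivp(riccati, (tf, 0.0), Sf.ravel(), max_step=0.01, rtol=1e-8)
S0 = sol.y[:, -1].reshape(2, 2)
C0 = np.linalg.solve(R, G.T @ S0)       # feedback gain at t = 0
print("S(0) =\n", np.round(S0, 3), "\nC(0) =", np.round(C0, 3))
```

Over a long horizon, S(t) settles to a constant well before the initial time, which previews the constant-gain LTI result developed below.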

Increased Initial Infection and Scalar Neighboring-Optimal Control (u1)
[Figure: state and control histories under feedback therapy]
$u(t) = u^*(t) - C(t) \Delta x(t) = u^*(t) - C(t)\left[ x(t) - x^*(t) \right]$

Optimal, Constant-Gain Feedback Control for Linear, Time-Invariant Systems!

Linear-Quadratic (LQ) Optimal Control Law
$\Delta\dot{x}(t) = F(t) \Delta x(t) + G(t) \Delta u(t)$
$\Delta\dot{x}(t) = F(t) \Delta x(t) + G(t)\left[ \Delta u_C(t) - C^*(t) \Delta x(t) \right]$

Optimal Control for a Linear, Time-Invariant Dynamic Process
The original system is linear and time-invariant (LTI):
$\Delta\dot{x}(t) = F \Delta x(t) + G \Delta u(t), \quad \Delta x(0)$ given
Minimize the quadratic cost function for $t_f \to \infty$; the terminal cost is of no concern:
$\min_u J = J^* = \lim_{t_f \to \infty} \frac{1}{2} \int_0^{t_f} \left[ x^{*T}(t) Q x^*(t) + u^{*T}(t) R u^*(t) \right] dt$
The dynamic constraint is the linear, time-invariant (LTI) plant.

Linear-Quadratic (LQ) Optimal Control for an LTI System
As $t_f \to \infty$, $\dot{S}^*(t) \to 0$, and the steady-state solution of the matrix Riccati equation satisfies the algebraic Riccati equation:
$-F^T S^* - S^* F + S^* G R^{-1} G^T S^* - Q = 0$
Optimal control gain matrix:
$C^* = R^{-1} G^T S^*; \quad (m \times n) = (m \times m)(m \times n)(n \times n)$
Optimal control:
$\Delta u(t) = -C^* \Delta x(t)$
MATLAB function: lqr

Example: Open-Loop Stable and Unstable Second-Order System Response to Initial Condition
[Figure: initial-condition responses; the stable and unstable complex eigenvalue pairs were lost in transcription]
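The steady-state gain can be obtained in Python from scipy's continuous-time ARE solver, a rough analogue of MATLAB's lqr; the plant below is an assumed double integrator, not the system in the figure:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Steady-state LQ regulator for a hypothetical double-integrator plant.
F = np.array([[0.0, 1.0], [0.0, 0.0]])
G = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

S = solve_continuous_are(F, G, Q, R)      # S* from the algebraic Riccati eq.
C = np.linalg.solve(R, G.T @ S)           # C* = R^{-1} G' S*
eigs = np.linalg.eigvals(F - G @ C)       # closed-loop eigenvalues
print("C* =", np.round(C, 3))
print("closed-loop eigenvalues:", np.round(eigs, 3))
```

For this plant the closed-loop eigenvalues have negative real parts, confirming that the constant-gain regulator stabilizes the open-loop neutrally stable system.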

Example: LQ Regulator Stabilizes an Unstable System, r = 1 and 100
$\min_{\Delta u} J = \min_{\Delta u} \frac{1}{2} \int_0^{\infty} \left( \Delta x_1^2 + \Delta x_2^2 + r \Delta u^2 \right) dt$
$\Delta u(t) = -\begin{bmatrix} c_1 & c_2 \end{bmatrix} \begin{bmatrix} \Delta x_1(t) \\ \Delta x_2(t) \end{bmatrix} = -c_1 \Delta x_1(t) - c_2 \Delta x_2(t)$
[Table: control gains C*, Riccati matrices S*, and closed-loop eigenvalues for r = 1 and r = 100; numerical values lost in transcription]

Example: LQ Regulator Stabilizes an Unstable System, r = 1 and 100
[Figure: closed-loop initial-condition responses]

Requirements for Guaranteeing Stability of the LQ Regulator
$\Delta\dot{x}(t) = F \Delta x(t) + G \Delta u(t) = \left[ F - GC \right] \Delta x(t)$
The closed-loop system is stable whether or not the open-loop system is stable if...
- Q > 0
- R > 0
- ... and (F, G) is a controllable pair:
$\text{rank}\begin{bmatrix} G & FG & \cdots & F^{n-1} G \end{bmatrix} = n$

Next Time: Formal Logic, Algorithms, and Incompleteness!

Supplementary Material!

Linearized Model of Infection Dynamics
Locally linearized (time-varying) dynamic equation:
$\begin{bmatrix} \Delta\dot{x}_1 \\ \Delta\dot{x}_2 \\ \Delta\dot{x}_3 \\ \Delta\dot{x}_4 \end{bmatrix} = \begin{bmatrix} (a_{11} - a_{12} x_3^*) & 0 & -a_{12} x_1^* & 0 \\ a_{21}(x_4^*)\, a_{22} x_3^* & -a_{23} & a_{21}(x_4^*)\, a_{22} x_1^* & \left[ \frac{\partial a_{21}}{\partial x_4} \right] a_{22} x_1^* x_3^* \\ -a_{33} x_3^* & a_{31} & -(a_{32} + a_{33} x_1^*) & 0 \\ a_{41} & 0 & 0 & -a_{42} \end{bmatrix} \begin{bmatrix} \Delta x_1 \\ \Delta x_2 \\ \Delta x_3 \\ \Delta x_4 \end{bmatrix} + \begin{bmatrix} b_1 & 0 & 0 & 0 \\ 0 & b_2 & 0 & 0 \\ 0 & 0 & b_3 & 0 \\ 0 & 0 & 0 & b_4 \end{bmatrix} \begin{bmatrix} \Delta u_1 \\ \Delta u_2 \\ \Delta u_3 \\ \Delta u_4 \end{bmatrix} + \begin{bmatrix} \Delta w_1 \\ \Delta w_2 \\ \Delta w_3 \\ \Delta w_4 \end{bmatrix}$

Expand the Optimized Cost Function
Expand the optimized cost function to second degree:
$J\left[ x^*(t_o) + \Delta x(t_o),\, x^*(t_f) + \Delta x(t_f) \right] \approx J^*\left[ x^*(t_o), x^*(t_f) \right] + \Delta J\left[ \Delta x(t_o), \Delta x(t_f) \right] + \Delta^2 J\left[ \Delta x(t_o), \Delta x(t_f) \right]$
$= J^*\left[ x^*(t_o), x^*(t_f) \right] + \Delta^2 J\left[ \Delta x(t_o), \Delta x(t_f) \right]$, as the first variation $\Delta J\left[ \Delta x(t_o), \Delta x(t_f) \right] = 0$
Nominal optimized cost, plus nonlinear dynamic constraint:
$J^*\left[ x^*(t_o), x^*(t_f) \right] = \phi[x^*(t_f)] + \int_{t_o}^{t_f} L[x^*(t), u^*(t)]\, dt$
subject to the nonlinear dynamic equation
$\dot{x}^*(t) = f[x^*(t), u^*(t)], \quad x(t_o) = x_o$

Second Variation of the Cost Function
Objective: minimize the second-variational cost subject to a linear dynamic constraint:
$\min_{\Delta u} \Delta^2 J = \frac{1}{2} \Delta x^T(t_f) \phi_{xx}(t_f) \Delta x(t_f) + \frac{1}{2} \int_{t_o}^{t_f} \begin{bmatrix} \Delta x^T(t) & \Delta u^T(t) \end{bmatrix} \begin{bmatrix} L_{xx}(t) & L_{xu}(t) \\ L_{ux}(t) & L_{uu}(t) \end{bmatrix} \begin{bmatrix} \Delta x(t) \\ \Delta u(t) \end{bmatrix} dt$
subject to the perturbation dynamics
$\Delta\dot{x}(t) = F(t) \Delta x(t) + G(t) \Delta u(t), \quad \Delta x(t_o) = \Delta x_o$
Cost weighting matrices:
$S(t_f) \triangleq \phi_{xx}(t_f) = \frac{\partial^2 \phi}{\partial x^2}(t_f); \quad \begin{bmatrix} Q(t) & M(t) \\ M^T(t) & R(t) \end{bmatrix} \triangleq \begin{bmatrix} L_{xx}(t) & L_{xu}(t) \\ L_{ux}(t) & L_{uu}(t) \end{bmatrix}$
$\dim\left[ S(t_f) \right] = \dim\left[ Q(t) \right] = n \times n; \quad \dim\left[ R(t) \right] = m \times m; \quad \dim\left[ M(t) \right] = n \times m$

Second Variational Hamiltonian
Variational cost function:
$\Delta^2 J = \frac{1}{2} \Delta x^T(t_f) S(t_f) \Delta x(t_f) + \frac{1}{2} \int_{t_o}^{t_f} \begin{bmatrix} \Delta x^T(t) & \Delta u^T(t) \end{bmatrix} \begin{bmatrix} Q(t) & M(t) \\ M^T(t) & R(t) \end{bmatrix} \begin{bmatrix} \Delta x(t) \\ \Delta u(t) \end{bmatrix} dt$
$= \frac{1}{2} \Delta x^T(t_f) S(t_f) \Delta x(t_f) + \frac{1}{2} \int_{t_o}^{t_f} \left[ \Delta x^T(t) Q(t) \Delta x(t) + 2 \Delta x^T(t) M(t) \Delta u(t) + \Delta u^T(t) R(t) \Delta u(t) \right] dt$
Variational Lagrangian plus adjoined dynamic constraint:
$H\left[ \Delta x(t), \Delta u(t), \Delta\lambda(t) \right] = L\left[ \Delta x(t), \Delta u(t) \right] + \Delta\lambda^T(t) f\left[ \Delta x(t), \Delta u(t) \right]$
$= \frac{1}{2} \left[ \Delta x^T(t) Q(t) \Delta x(t) + 2 \Delta x^T(t) M(t) \Delta u(t) + \Delta u^T(t) R(t) \Delta u(t) \right] + \Delta\lambda^T(t) \left[ F(t) \Delta x(t) + G(t) \Delta u(t) \right]$

Second Variational Euler-Lagrange Equations
Terminal condition, solution for the adjoint vector, and optimality condition:
$\Delta\lambda(t_f) = \phi_{xx}(t_f) \Delta x(t_f) = S(t_f) \Delta x(t_f)$
$\Delta\dot{\lambda}(t) = -\left[ \frac{\partial H\left[ \Delta x(t), \Delta u(t), \Delta\lambda(t) \right]}{\partial \Delta x} \right]^T = -Q(t) \Delta x(t) - M(t) \Delta u(t) - F^T(t) \Delta\lambda(t)$
$\left[ \frac{\partial H\left[ \Delta x(t), \Delta u(t), \Delta\lambda(t) \right]}{\partial \Delta u} \right]^T = M^T(t) \Delta x(t) + R(t) \Delta u(t) + G^T(t) \Delta\lambda(t) = 0$

Use the Control Law to Solve the Two-Point Boundary-Value Problem
From $H_u = 0$:
$\Delta u(t) = -R^{-1}(t) \left[ M^T(t) \Delta x(t) + G^T(t) \Delta\lambda(t) \right]$
Substitute for the control in the system and adjoint equations to obtain the two-point boundary-value problem:
$\begin{bmatrix} \Delta\dot{x}(t) \\ \Delta\dot{\lambda}(t) \end{bmatrix} = \begin{bmatrix} F(t) - G(t) R^{-1}(t) M^T(t) & -G(t) R^{-1}(t) G^T(t) \\ -Q(t) + M(t) R^{-1}(t) M^T(t) & -\left[ F(t) - G(t) R^{-1}(t) M^T(t) \right]^T \end{bmatrix} \begin{bmatrix} \Delta x(t) \\ \Delta\lambda(t) \end{bmatrix}$
Boundary conditions at the initial and final times:
$\Delta x(t_o) = \Delta x_o$ (perturbation state vector); $\Delta\lambda(t_f) = S_f \Delta x_f$ (perturbation adjoint vector)

Use the Control Law to Solve the Two-Point Boundary-Value Problem
Suppose that the terminal adjoint relationship applies over the entire interval:
$\Delta\lambda(t) = S(t) \Delta x(t)$
The feedback control law becomes
$\Delta u(t) = -R^{-1}(t) \left[ M^T(t) \Delta x(t) + G^T(t) S(t) \Delta x(t) \right] = -R^{-1}(t) \left[ M^T(t) + G^T(t) S(t) \right] \Delta x(t) \triangleq -C(t) \Delta x(t)$
$\dim(C) = m \times n$

Linear-Quadratic (LQ) Optimal Control Gain Matrix
$\Delta u(t) = -C(t) \Delta x(t)$
Optimal feedback gain matrix:
$C(t) = R^{-1}(t) \left[ G^T(t) S(t) + M^T(t) \right]$
Properties of the feedback gain matrix:
- Full state feedback (m x n)
- Time-varying matrix
- R, G, and M given: control weighting matrix, R; state-control weighting matrix, M; control effect matrix, G
- S(t) remains to be determined

Solution for the Adjoining Matrix, S(t)
Time derivative of the adjoint vector:
$\Delta\dot{\lambda}(t) = \dot{S}(t) \Delta x(t) + S(t) \Delta\dot{x}(t)$
Rearrange:
$\dot{S}(t) \Delta x(t) = \Delta\dot{\lambda}(t) - S(t) \Delta\dot{x}(t)$
Recall:
$\begin{bmatrix} \Delta\dot{x}(t) \\ \Delta\dot{\lambda}(t) \end{bmatrix} = \begin{bmatrix} F(t) - G(t) R^{-1}(t) M^T(t) & -G(t) R^{-1}(t) G^T(t) \\ -Q(t) + M(t) R^{-1}(t) M^T(t) & -\left[ F(t) - G(t) R^{-1}(t) M^T(t) \right]^T \end{bmatrix} \begin{bmatrix} \Delta x(t) \\ \Delta\lambda(t) \end{bmatrix}$

Solution for the Adjoining Matrix, S(t)
Substitute the two-point boundary-value equations:
$\dot{S}(t) \Delta x(t) = \left[ -Q(t) + M(t) R^{-1}(t) M^T(t) \right] \Delta x(t) - \left[ F(t) - G(t) R^{-1}(t) M^T(t) \right]^T \Delta\lambda(t) - S(t) \left\{ \left[ F(t) - G(t) R^{-1}(t) M^T(t) \right] \Delta x(t) - G(t) R^{-1}(t) G^T(t) \Delta\lambda(t) \right\}$
Substitute $\Delta\lambda(t) = S(t) \Delta x(t)$:
$\dot{S}(t) \Delta x(t) = \left[ -Q(t) + M(t) R^{-1}(t) M^T(t) \right] \Delta x(t) - \left[ F(t) - G(t) R^{-1}(t) M^T(t) \right]^T S(t) \Delta x(t) - S(t) \left\{ \left[ F(t) - G(t) R^{-1}(t) M^T(t) \right] \Delta x(t) - G(t) R^{-1}(t) G^T(t) S(t) \Delta x(t) \right\}$
$\Delta x(t)$ can be eliminated.

Matrix Riccati Equation for S(t)
The result is a nonlinear, ordinary differential equation for S(t), with a terminal boundary condition:
$\dot{S}(t) = -\left[ Q(t) - M(t) R^{-1}(t) M^T(t) \right] - \left[ F(t) - G(t) R^{-1}(t) M^T(t) \right]^T S(t) - S(t) \left[ F(t) - G(t) R^{-1}(t) M^T(t) \right] + S(t) G(t) R^{-1}(t) G^T(t) S(t)$
$S(t_f) = \phi_{xx}(t_f)$
Characteristics of the Riccati matrix, S(t):
- S(t_f) is symmetric, n x n, and typically positive semi-definite
- The matrix Riccati equation is symmetric
- Therefore, S(t) is symmetric and positive semi-definite throughout
Once S(t) has been determined, the optimal feedback control gain matrix, C(t), can be calculated.
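As a steady-state sanity check on the cross-weighted result, scipy's ARE solver accepts the state-control weighting M through its `s` argument, and the gain follows the $C = R^{-1}(G^T S + M^T)$ form above. The matrices below are illustrative assumptions:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Steady-state cross-weighted LQ problem with hypothetical matrices.
F = np.array([[0.0, 1.0], [-1.0, -1.0]])
G = np.array([[0.0], [1.0]])
Q = np.diag([2.0, 1.0])
R = np.array([[1.0]])
M = np.array([[0.1], [0.2]])              # n x m state-control weighting

S = solve_continuous_are(F, G, Q, R, s=M)  # cross term passed via s=
C = np.linalg.solve(R, G.T @ S + M.T)      # C = R^{-1}(G'S + M')

# S should be symmetric positive semi-definite, and the closed loop stable
eigS = np.linalg.eigvalsh(S)
eigCL = np.linalg.eigvals(F - G @ C)
print("eig(S) =", np.round(eigS, 3))
print("closed-loop eigenvalues:", np.round(eigCL, 3))
```

The symmetry and semi-definiteness of the computed S, and the stable closed-loop eigenvalues, match the characteristics listed for the Riccati matrix above (the weights here satisfy $Q - M R^{-1} M^T > 0$).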

Neighboring-Optimal (LQ) Feedback Control Law
The full state is fed back to all available controls:
$\Delta u(t) = -R^{-1}(t) \left[ M^T(t) + G^T(t) S(t) \right] \Delta x(t) = -C(t) \Delta x(t)$
Optimal control history plus feedback correction:
$u(t) = u^*(t) - C(t) \Delta x(t) = u^*(t) - C(t) \left[ x(t) - x^*(t) \right]$

Nonlinear System with Neighboring-Optimal Feedback Control
Nonlinear dynamic system:
$\dot{x}(t) = f\left[ x(t), u(t) \right]$
Neighboring-optimal control law:
$u(t) = u^*(t) - C(t) \left[ x(t) - x^*(t) \right]$
Nonlinear dynamic system with neighboring-optimal feedback control:
$\dot{x}(t) = f\left\{ x(t), \left[ u^*(t) - C(t) \left( x(t) - x^*(t) \right) \right] \right\}$

Example: Response of a Stable Second-Order System to Random Disturbance
[Figure: disturbance response; the eigenvalue values were lost in transcription]

Example: Disturbance Response of an Unstable System with LQ Regulators
[Figure: closed-loop disturbance responses for r = 1 and a second, larger control weight; the second value was lost in transcription]

Equilibrium Response to a Command Input!

Steady-State Response to Commands
$\Delta\dot{x}(t) = F \Delta x(t) + G \Delta u(t) + L \Delta w(t), \quad \Delta x(t_o)$ given
$\Delta y(t) = H_x \Delta x(t) + H_u \Delta u(t) + H_w \Delta w(t)$
State equilibrium with constant inputs...
$0 = F \Delta x^* + G \Delta u^* + L \Delta w^* \implies \Delta x^* = -F^{-1} \left( G \Delta u^* + L \Delta w^* \right)$
... constrained by the requirement to satisfy the command input:
$\Delta y^* = H_x \Delta x^* + H_u \Delta u^* + H_w \Delta w^*$

Steady-State Response to Commands
Equilibrium that satisfies a commanded input, $\Delta y_C$:
$0 = F \Delta x^* + G \Delta u^* + L \Delta w^*$
$\Delta y^* = H_x \Delta x^* + H_u \Delta u^* + H_w \Delta w^*$
Combine the equations:
$\begin{bmatrix} 0 \\ \Delta y_C \end{bmatrix} = \begin{bmatrix} F & G \\ H_x & H_u \end{bmatrix} \begin{bmatrix} \Delta x^* \\ \Delta u^* \end{bmatrix} + \begin{bmatrix} L \\ H_w \end{bmatrix} \Delta w^*$
The block matrix is $(n + r) \times (n + m)$.

Equilibrium Values of State and Control to Satisfy a Commanded Input
$\begin{bmatrix} \Delta x^* \\ \Delta u^* \end{bmatrix} = \begin{bmatrix} F & G \\ H_x & H_u \end{bmatrix}^{-1} \begin{bmatrix} -L \Delta w^* \\ \Delta y_C - H_w \Delta w^* \end{bmatrix} \triangleq A^{-1} \begin{bmatrix} -L \Delta w^* \\ \Delta y_C - H_w \Delta w^* \end{bmatrix}$
A must be square for the inverse to exist; then the number of commands = the number of controls.

Inverse of the Matrix
$A^{-1} = \begin{bmatrix} F & G \\ H_x & H_u \end{bmatrix}^{-1} = B = \begin{bmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{bmatrix}$
The $B_{ij}$ have the same dimensions as the equivalent blocks of A. Equilibrium that satisfies a commanded input, $\Delta y_C$:
$\begin{bmatrix} \Delta x^* \\ \Delta u^* \end{bmatrix} = \begin{bmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{bmatrix} \begin{bmatrix} -L \Delta w^* \\ \Delta y_C - H_w \Delta w^* \end{bmatrix}$
$\Delta x^* = -B_{11} L \Delta w^* + B_{12} \left( \Delta y_C - H_w \Delta w^* \right)$
$\Delta u^* = -B_{21} L \Delta w^* + B_{22} \left( \Delta y_C - H_w \Delta w^* \right)$

Elements of the Matrix Inverse and Solutions for Open-Loop Equilibrium
Substitution and elimination (see Supplement):
$\begin{bmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{bmatrix} = \begin{bmatrix} -F^{-1} \left( G B_{21} - I_n \right) & -F^{-1} G B_{22} \\ -B_{22} H_x F^{-1} & \left( H_u - H_x F^{-1} G \right)^{-1} \end{bmatrix}$
Solve for $B_{22}$, then $B_{12}$ and $B_{21}$, then $B_{11}$.
$\Delta x^* = B_{12} \Delta y_C - \left( B_{11} L + B_{12} H_w \right) \Delta w^*$
$\Delta u^* = B_{22} \Delta y_C - \left( B_{21} L + B_{22} H_w \right) \Delta w^*$
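The command-equilibrium calculation can be sketched numerically. The plant and output matrices below are my own illustrative choices (one command, one control, so A is square; no disturbance, w* = 0):

```python
import numpy as np

# Equilibrium (x*, u*) satisfying a commanded output y_C from the block
# matrix A = [[F, G], [Hx, Hu]].
F = np.array([[0.0, 1.0], [-2.0, -3.0]])
G = np.array([[0.0], [1.0]])
Hx = np.array([[1.0, 0.0]])               # command the first state
Hu = np.array([[0.0]])

A = np.block([[F, G], [Hx, Hu]])          # (n + r) x (n + m), square here
yC = np.array([1.0])
rhs = np.concatenate([np.zeros(2), yC])   # [0; yC] with w* = 0

xu = np.linalg.solve(A, rhs)
x_star, u_star = xu[:2], xu[2:]
print("x* =", x_star, " u* =", u_star)

# check: dynamics at rest and output matching the command
assert np.allclose(F @ x_star + G @ u_star, 0.0)
assert np.allclose(Hx @ x_star + Hu @ u_star, yC)

# the block-inverse element B22 = (Hu - Hx F^{-1} G)^{-1} (F invertible here)
B = np.linalg.inv(A)
B22 = B[2:, 2:]
assert np.allclose(B22, np.linalg.inv(Hu - Hx @ np.linalg.inv(F) @ G))
```

Solving the full linear system directly and checking one block-inverse element against its closed form illustrates that the partitioned formulas and the direct inverse agree.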

LQ Regulator with Command Input (Proportional Control Law)
$\Delta u(t) = \Delta u_C(t) - C \Delta x(t)$
How do we define $\Delta u_C(t)$?

Non-Zero Steady-State Regulation with the LQ Regulator
The command input provides the equivalent state and control values for the LQ regulator. Control law with command input (disturbance-free case):
$\Delta u(t) = \Delta u^*(t) - C \left[ \Delta x(t) - \Delta x^*(t) \right] = B_{22} \Delta y^* - C \Delta x(t) + C B_{12} \Delta y^* = \left( B_{22} + C B_{12} \right) \Delta y^* - C \Delta x(t)$

LQ Regulator with Forward Gain Matrix
$\Delta u(t) = \Delta u^*(t) - C \left[ \Delta x(t) - \Delta x^*(t) \right] = C_F \Delta y^* - C_B \Delta x(t)$
$C_F \triangleq B_{22} + C B_{12}; \quad C_B \triangleq C$
- A disturbance affects the system, whether or not it is measured
- If measured, the disturbance effect can be countered by $C_D$ (analogous to $C_F$)


More information

Steady State Kalman Filter

Steady State Kalman Filter Steady State Kalman Filter Infinite Horizon LQ Control: ẋ = Ax + Bu R positive definite, Q = Q T 2Q 1 2. (A, B) stabilizable, (A, Q 1 2) detectable. Solve for the positive (semi-) definite P in the ARE:

More information

Lecture 2: Discrete-time Linear Quadratic Optimal Control

Lecture 2: Discrete-time Linear Quadratic Optimal Control ME 33, U Berkeley, Spring 04 Xu hen Lecture : Discrete-time Linear Quadratic Optimal ontrol Big picture Example onvergence of finite-time LQ solutions Big picture previously: dynamic programming and finite-horizon

More information

Theory in Model Predictive Control :" Constraint Satisfaction and Stability!

Theory in Model Predictive Control : Constraint Satisfaction and Stability! Theory in Model Predictive Control :" Constraint Satisfaction and Stability Colin Jones, Melanie Zeilinger Automatic Control Laboratory, EPFL Example: Cessna Citation Aircraft Linearized continuous-time

More information

Aircraft Flight Dynamics Robert Stengel MAE 331, Princeton University, 2018

Aircraft Flight Dynamics Robert Stengel MAE 331, Princeton University, 2018 Aircraft Flight Dynamics Robert Stengel MAE 331, Princeton University, 2018 Course Overview Introduction to Flight Dynamics Math Preliminaries Copyright 2018 by Robert Stengel. All rights reserved. For

More information

State Regulator. Advanced Control. design of controllers using pole placement and LQ design rules

State Regulator. Advanced Control. design of controllers using pole placement and LQ design rules Advanced Control State Regulator Scope design of controllers using pole placement and LQ design rules Keywords pole placement, optimal control, LQ regulator, weighting matrixes Prerequisites Contact state

More information

Chap 4. State-Space Solutions and

Chap 4. State-Space Solutions and Chap 4. State-Space Solutions and Realizations Outlines 1. Introduction 2. Solution of LTI State Equation 3. Equivalent State Equations 4. Realizations 5. Solution of Linear Time-Varying (LTV) Equations

More information

Australian Journal of Basic and Applied Sciences, 3(4): , 2009 ISSN Modern Control Design of Power System

Australian Journal of Basic and Applied Sciences, 3(4): , 2009 ISSN Modern Control Design of Power System Australian Journal of Basic and Applied Sciences, 3(4): 4267-4273, 29 ISSN 99-878 Modern Control Design of Power System Atef Saleh Othman Al-Mashakbeh Tafila Technical University, Electrical Engineering

More information

Chapter III. Stability of Linear Systems

Chapter III. Stability of Linear Systems 1 Chapter III Stability of Linear Systems 1. Stability and state transition matrix 2. Time-varying (non-autonomous) systems 3. Time-invariant systems 1 STABILITY AND STATE TRANSITION MATRIX 2 In this chapter,

More information

Deterministic Dynamic Programming

Deterministic Dynamic Programming Deterministic Dynamic Programming 1 Value Function Consider the following optimal control problem in Mayer s form: V (t 0, x 0 ) = inf u U J(t 1, x(t 1 )) (1) subject to ẋ(t) = f(t, x(t), u(t)), x(t 0

More information

Aircraft Flight Dynamics!

Aircraft Flight Dynamics! Aircraft Flight Dynamics Robert Stengel MAE 331, Princeton University, 2016 Course Overview Introduction to Flight Dynamics Math Preliminaries Copyright 2016 by Robert Stengel. All rights reserved. For

More information

Feedback Optimal Control of Low-thrust Orbit Transfer in Central Gravity Field

Feedback Optimal Control of Low-thrust Orbit Transfer in Central Gravity Field Vol. 4, No. 4, 23 Feedback Optimal Control of Low-thrust Orbit Transfer in Central Gravity Field Ashraf H. Owis Department of Astronomy, Space and Meteorology, Faculty of Science, Cairo University Department

More information

Linear Parameter Varying and Time-Varying Model Predictive Control

Linear Parameter Varying and Time-Varying Model Predictive Control Linear Parameter Varying and Time-Varying Model Predictive Control Alberto Bemporad - Model Predictive Control course - Academic year 016/17 0-1 Linear Parameter-Varying (LPV) MPC LTI prediction model

More information

STATE VARIABLE (SV) SYSTEMS

STATE VARIABLE (SV) SYSTEMS Copyright F.L. Lewis 999 All rights reserved Updated:Tuesday, August 05, 008 STATE VARIABLE (SV) SYSTEMS A natural description for dynamical systems is the nonlinear state-space or state variable (SV)

More information

LINEAR QUADRATIC GAUSSIAN

LINEAR QUADRATIC GAUSSIAN ECE553: Multivariable Control Systems II. LINEAR QUADRATIC GAUSSIAN.: Deriving LQG via separation principle We will now start to look at the design of controllers for systems Px.t/ D A.t/x.t/ C B u.t/u.t/

More information

ESC794: Special Topics: Model Predictive Control

ESC794: Special Topics: Model Predictive Control ESC794: Special Topics: Model Predictive Control Nonlinear MPC Analysis : Part 1 Reference: Nonlinear Model Predictive Control (Ch.3), Grüne and Pannek Hanz Richter, Professor Mechanical Engineering Department

More information

Optimal control of nonlinear systems with input constraints using linear time varying approximations

Optimal control of nonlinear systems with input constraints using linear time varying approximations ISSN 1392-5113 Nonlinear Analysis: Modelling and Control, 216, Vol. 21, No. 3, 4 412 http://dx.doi.org/1.15388/na.216.3.7 Optimal control of nonlinear systems with input constraints using linear time varying

More information

Chapter 2 Optimal Control Problem

Chapter 2 Optimal Control Problem Chapter 2 Optimal Control Problem Optimal control of any process can be achieved either in open or closed loop. In the following two chapters we concentrate mainly on the first class. The first chapter

More information

MATH4406 (Control Theory) Unit 6: The Linear Quadratic Regulator (LQR) and Model Predictive Control (MPC) Prepared by Yoni Nazarathy, Artem

MATH4406 (Control Theory) Unit 6: The Linear Quadratic Regulator (LQR) and Model Predictive Control (MPC) Prepared by Yoni Nazarathy, Artem MATH4406 (Control Theory) Unit 6: The Linear Quadratic Regulator (LQR) and Model Predictive Control (MPC) Prepared by Yoni Nazarathy, Artem Pulemotov, September 12, 2012 Unit Outline Goal 1: Outline linear

More information

ECON 582: Dynamic Programming (Chapter 6, Acemoglu) Instructor: Dmytro Hryshko

ECON 582: Dynamic Programming (Chapter 6, Acemoglu) Instructor: Dmytro Hryshko ECON 582: Dynamic Programming (Chapter 6, Acemoglu) Instructor: Dmytro Hryshko Indirect Utility Recall: static consumer theory; J goods, p j is the price of good j (j = 1; : : : ; J), c j is consumption

More information

EE221A Linear System Theory Final Exam

EE221A Linear System Theory Final Exam EE221A Linear System Theory Final Exam Professor C. Tomlin Department of Electrical Engineering and Computer Sciences, UC Berkeley Fall 2016 12/16/16, 8-11am Your answers must be supported by analysis,

More information

Principles of Optimal Control Spring 2008

Principles of Optimal Control Spring 2008 MIT OpenCourseWare http://ocw.mit.edu 16.323 Principles of Optimal Control Spring 2008 For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. 16.323 Lecture

More information

Optimal Control Theory

Optimal Control Theory Optimal Control Theory The theory Optimal control theory is a mature mathematical discipline which provides algorithms to solve various control problems The elaborate mathematical machinery behind optimal

More information

EE363 homework 2 solutions

EE363 homework 2 solutions EE363 Prof. S. Boyd EE363 homework 2 solutions. Derivative of matrix inverse. Suppose that X : R R n n, and that X(t is invertible. Show that ( d d dt X(t = X(t dt X(t X(t. Hint: differentiate X(tX(t =

More information

LQR, Kalman Filter, and LQG. Postgraduate Course, M.Sc. Electrical Engineering Department College of Engineering University of Salahaddin

LQR, Kalman Filter, and LQG. Postgraduate Course, M.Sc. Electrical Engineering Department College of Engineering University of Salahaddin LQR, Kalman Filter, and LQG Postgraduate Course, M.Sc. Electrical Engineering Department College of Engineering University of Salahaddin May 2015 Linear Quadratic Regulator (LQR) Consider a linear system

More information

UCLA Chemical Engineering. Process & Control Systems Engineering Laboratory

UCLA Chemical Engineering. Process & Control Systems Engineering Laboratory Constrained Innite-time Optimal Control Donald J. Chmielewski Chemical Engineering Department University of California Los Angeles February 23, 2000 Stochastic Formulation - Min Max Formulation - UCLA

More information

Robust Control of Robot Manipulator by Model Based Disturbance Attenuation

Robust Control of Robot Manipulator by Model Based Disturbance Attenuation IEEE/ASME Trans. Mechatronics, vol. 8, no. 4, pp. 511-513, Nov./Dec. 2003 obust Control of obot Manipulator by Model Based Disturbance Attenuation Keywords : obot manipulators, MBDA, position control,

More information

Video 6.1 Vijay Kumar and Ani Hsieh

Video 6.1 Vijay Kumar and Ani Hsieh Video 6.1 Vijay Kumar and Ani Hsieh Robo3x-1.6 1 In General Disturbance Input + - Input Controller + + System Output Robo3x-1.6 2 Learning Objectives for this Week State Space Notation Modeling in the

More information

minimize x subject to (x 2)(x 4) u,

minimize x subject to (x 2)(x 4) u, Math 6366/6367: Optimization and Variational Methods Sample Preliminary Exam Questions 1. Suppose that f : [, L] R is a C 2 -function with f () on (, L) and that you have explicit formulae for

More information

1 Steady State Error (30 pts)

1 Steady State Error (30 pts) Professor Fearing EECS C28/ME C34 Problem Set Fall 2 Steady State Error (3 pts) Given the following continuous time (CT) system ] ẋ = A x + B u = x + 2 7 ] u(t), y = ] x () a) Given error e(t) = r(t) y(t)

More information

EE C128 / ME C134 Final Exam Fall 2014

EE C128 / ME C134 Final Exam Fall 2014 EE C128 / ME C134 Final Exam Fall 2014 December 19, 2014 Your PRINTED FULL NAME Your STUDENT ID NUMBER Number of additional sheets 1. No computers, no tablets, no connected device (phone etc.) 2. Pocket

More information

EE C128 / ME C134 Feedback Control Systems

EE C128 / ME C134 Feedback Control Systems EE C128 / ME C134 Feedback Control Systems Lecture Additional Material Introduction to Model Predictive Control Maximilian Balandat Department of Electrical Engineering & Computer Science University of

More information

EL2520 Control Theory and Practice

EL2520 Control Theory and Practice EL2520 Control Theory and Practice Lecture 8: Linear quadratic control Mikael Johansson School of Electrical Engineering KTH, Stockholm, Sweden Linear quadratic control Allows to compute the controller

More information

CHAPTER 3 THE MAXIMUM PRINCIPLE: MIXED INEQUALITY CONSTRAINTS. p. 1/73

CHAPTER 3 THE MAXIMUM PRINCIPLE: MIXED INEQUALITY CONSTRAINTS. p. 1/73 CHAPTER 3 THE MAXIMUM PRINCIPLE: MIXED INEQUALITY CONSTRAINTS p. 1/73 THE MAXIMUM PRINCIPLE: MIXED INEQUALITY CONSTRAINTS Mixed Inequality Constraints: Inequality constraints involving control and possibly

More information

Outline. 1 Linear Quadratic Problem. 2 Constraints. 3 Dynamic Programming Solution. 4 The Infinite Horizon LQ Problem.

Outline. 1 Linear Quadratic Problem. 2 Constraints. 3 Dynamic Programming Solution. 4 The Infinite Horizon LQ Problem. Model Predictive Control Short Course Regulation James B. Rawlings Michael J. Risbeck Nishith R. Patel Department of Chemical and Biological Engineering Copyright c 217 by James B. Rawlings Outline 1 Linear

More information

Solution of Stochastic Optimal Control Problems and Financial Applications

Solution of Stochastic Optimal Control Problems and Financial Applications Journal of Mathematical Extension Vol. 11, No. 4, (2017), 27-44 ISSN: 1735-8299 URL: http://www.ijmex.com Solution of Stochastic Optimal Control Problems and Financial Applications 2 Mat B. Kafash 1 Faculty

More information

Penalty and Barrier Methods. So we again build on our unconstrained algorithms, but in a different way.

Penalty and Barrier Methods. So we again build on our unconstrained algorithms, but in a different way. AMSC 607 / CMSC 878o Advanced Numerical Optimization Fall 2008 UNIT 3: Constrained Optimization PART 3: Penalty and Barrier Methods Dianne P. O Leary c 2008 Reference: N&S Chapter 16 Penalty and Barrier

More information

Optimal Control. McGill COMP 765 Oct 3 rd, 2017

Optimal Control. McGill COMP 765 Oct 3 rd, 2017 Optimal Control McGill COMP 765 Oct 3 rd, 2017 Classical Control Quiz Question 1: Can a PID controller be used to balance an inverted pendulum: A) That starts upright? B) That must be swung-up (perhaps

More information

LECTURE NOTES IN CALCULUS OF VARIATIONS AND OPTIMAL CONTROL MSc in Systems and Control. Dr George Halikias

LECTURE NOTES IN CALCULUS OF VARIATIONS AND OPTIMAL CONTROL MSc in Systems and Control. Dr George Halikias Ver.1.2 LECTURE NOTES IN CALCULUS OF VARIATIONS AND OPTIMAL CONTROL MSc in Systems and Control Dr George Halikias EEIE, School of Engineering and Mathematical Sciences, City University 4 March 27 1. Calculus

More information

6. Linear Quadratic Regulator Control

6. Linear Quadratic Regulator Control EE635 - Control System Theory 6. Linear Quadratic Regulator Control Jitkomut Songsiri algebraic Riccati Equation (ARE) infinite-time LQR (continuous) Hamiltonian matrix gain margin of LQR 6-1 Algebraic

More information

moments of inertia at the center of gravity (C.G.) of the first and second pendulums are I 1 and I 2, respectively. Fig. 1. Double inverted pendulum T

moments of inertia at the center of gravity (C.G.) of the first and second pendulums are I 1 and I 2, respectively. Fig. 1. Double inverted pendulum T Real-Time Swing-up of Double Inverted Pendulum by Nonlinear Model Predictive Control Pathompong Jaiwat 1 and Toshiyuki Ohtsuka 2 Abstract In this study, the swing-up of a double inverted pendulum is controlled

More information

Numerical Algorithms as Dynamical Systems

Numerical Algorithms as Dynamical Systems A Study on Numerical Algorithms as Dynamical Systems Moody Chu North Carolina State University What This Study Is About? To recast many numerical algorithms as special dynamical systems, whence to derive

More information

4F3 - Predictive Control

4F3 - Predictive Control 4F3 Predictive Control - Lecture 2 p 1/23 4F3 - Predictive Control Lecture 2 - Unconstrained Predictive Control Jan Maciejowski jmm@engcamacuk 4F3 Predictive Control - Lecture 2 p 2/23 References Predictive

More information

Cyber-Physical Systems Modeling and Simulation of Continuous Systems

Cyber-Physical Systems Modeling and Simulation of Continuous Systems Cyber-Physical Systems Modeling and Simulation of Continuous Systems Matthias Althoff TU München 29. May 2015 Matthias Althoff Modeling and Simulation of Cont. Systems 29. May 2015 1 / 38 Ordinary Differential

More information

1. Find the solution of the following uncontrolled linear system. 2 α 1 1

1. Find the solution of the following uncontrolled linear system. 2 α 1 1 Appendix B Revision Problems 1. Find the solution of the following uncontrolled linear system 0 1 1 ẋ = x, x(0) =. 2 3 1 Class test, August 1998 2. Given the linear system described by 2 α 1 1 ẋ = x +

More information

EML5311 Lyapunov Stability & Robust Control Design

EML5311 Lyapunov Stability & Robust Control Design EML5311 Lyapunov Stability & Robust Control Design 1 Lyapunov Stability criterion In Robust control design of nonlinear uncertain systems, stability theory plays an important role in engineering systems.

More information

Static and Dynamic Optimization (42111)

Static and Dynamic Optimization (42111) Static and Dynamic Optimization (421) Niels Kjølstad Poulsen Build. 0b, room 01 Section for Dynamical Systems Dept. of Applied Mathematics and Computer Science The Technical University of Denmark Email:

More information

Optimization of Linear Systems of Constrained Configuration

Optimization of Linear Systems of Constrained Configuration Optimization of Linear Systems of Constrained Configuration Antony Jameson 1 October 1968 1 Abstract For the sake of simplicity it is often desirable to restrict the number of feedbacks in a controller.

More information

Optimization-Based Control. Richard M. Murray Control and Dynamical Systems California Institute of Technology

Optimization-Based Control. Richard M. Murray Control and Dynamical Systems California Institute of Technology Optimization-Based Control Richard M. Murray Control and Dynamical Systems California Institute of Technology Version v2.1b (2 Oct 218) c California Institute of Technology All rights reserved. This manuscript

More information

Linear System Theory. Wonhee Kim Lecture 1. March 7, 2018

Linear System Theory. Wonhee Kim Lecture 1. March 7, 2018 Linear System Theory Wonhee Kim Lecture 1 March 7, 2018 1 / 22 Overview Course Information Prerequisites Course Outline What is Control Engineering? Examples of Control Systems Structure of Control Systems

More information

Game Theory Extra Lecture 1 (BoB)

Game Theory Extra Lecture 1 (BoB) Game Theory 2014 Extra Lecture 1 (BoB) Differential games Tools from optimal control Dynamic programming Hamilton-Jacobi-Bellman-Isaacs equation Zerosum linear quadratic games and H control Baser/Olsder,

More information

Theorem 1. ẋ = Ax is globally exponentially stable (GES) iff A is Hurwitz (i.e., max(re(σ(a))) < 0).

Theorem 1. ẋ = Ax is globally exponentially stable (GES) iff A is Hurwitz (i.e., max(re(σ(a))) < 0). Linear Systems Notes Lecture Proposition. A M n (R) is positive definite iff all nested minors are greater than or equal to zero. n Proof. ( ): Positive definite iff λ i >. Let det(a) = λj and H = {x D

More information

9 Controller Discretization

9 Controller Discretization 9 Controller Discretization In most applications, a control system is implemented in a digital fashion on a computer. This implies that the measurements that are supplied to the control system must be

More information

Modeling and Control Overview

Modeling and Control Overview Modeling and Control Overview D R. T A R E K A. T U T U N J I A D V A N C E D C O N T R O L S Y S T E M S M E C H A T R O N I C S E N G I N E E R I N G D E P A R T M E N T P H I L A D E L P H I A U N I

More information

Pontryagin s maximum principle

Pontryagin s maximum principle Pontryagin s maximum principle Emo Todorov Applied Mathematics and Computer Science & Engineering University of Washington Winter 2012 Emo Todorov (UW) AMATH/CSE 579, Winter 2012 Lecture 5 1 / 9 Pontryagin

More information