ECE 680 Fall 2011 Test #2 Solutions
1. Use Dynamic Programming to find $u(0)$ and $u(1)$ that minimize
$$J = (x(2) - 1)^2 + 2\sum_{k=0}^{1} u^2(k)$$
subject to $x(k+1) = b\,u(k)$, where $b \neq 0$.

Let $J^*(x(k))$ be the minimum cost of transferring a state $x(k)$ to some final state $x(2)$. Then
$$J^*(x(2)) = (x(2) - 1)^2 = (b\,u(1) - 1)^2,$$
and
$$J^*(x(1)) = \min_{u(1)} \left( 2u^2(1) + J^*(x(2)) \right) = \min_{u(1)} \left( 2u^2(1) + (b\,u(1) - 1)^2 \right).$$
There are no constraints on $u(1)$. Therefore, we compute $u^*(1)$ by solving $\partial J(x(1))/\partial u(1) = 0$. We have
$$\frac{\partial J(x(1))}{\partial u(1)} = 4u(1) + 2b(b\,u(1) - 1).$$
Hence,
$$u^*(1) = \frac{b}{2 + b^2}.$$
Substituting the above into $J^*(x(1))$ yields
$$J^*(x(1)) = \frac{2}{2 + b^2}.$$
Continuing, we obtain
$$J^*(x(0)) = \min_{u(0)} \left( 2u^2(0) + J^*(x(1)) \right) = \min_{u(0)} \left( 2u^2(0) + \frac{2}{2 + b^2} \right).$$
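The stage-1 minimization above can be sanity-checked numerically. The sketch below (not part of the exam) picks an illustrative value $b = 2$, since the exam keeps $b$ symbolic, and compares the closed-form $u^*(1) = b/(2+b^2)$ against a brute-force grid search:

```python
import numpy as np

def stage1_cost(u1, b):
    # cost-to-go at stage 1: 2*u1**2 + (b*u1 - 1)**2
    return 2.0 * u1**2 + (b * u1 - 1.0)**2

b = 2.0                         # illustrative value; the exam keeps b symbolic
u1_star = b / (2.0 + b**2)      # closed-form minimizer derived above
J1_star = 2.0 / (2.0 + b**2)    # closed-form minimum cost J*(x(1))

# brute-force check over a fine grid of candidate u(1) values
grid = np.linspace(-2.0, 2.0, 40001)
u1_grid = grid[np.argmin(stage1_cost(grid, b))]
```

For $b = 2$ both routes give $u^*(1) = 1/3$ and $J^*(x(1)) = 1/3$.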
Hence, $u^*(0) = 0$.

2. Use the HJB equation to find $u^*$ that minimizes
$$J = \frac{1}{2}\int_0^1 \left( 3x^2 + u^2 \right) dt$$
subject to $\dot{x} = -x + u$. Assume
$$J^* = \frac{1}{2}\,p(t)\,x^2, \qquad p(t) = -\frac{\dot{w}(t)}{w(t)}.$$
Then, solve for $w(t)$, find a general solution for $p(t)$, eliminate the integration constants from $p(t)$, and write down the expression for $u^*$ as a time-varying state-feedback controller.

The Hamiltonian function is
$$H = \frac{1}{2}\left( 3x^2 + u^2 \right) + \frac{\partial J^*}{\partial x}\left( -x + u \right).$$
There are no constraints on $u$, so we compute $u^*$ by solving $\partial H/\partial u = 0$. We have
$$\frac{\partial H}{\partial u} = u + \frac{\partial J^*}{\partial x}.$$
Hence,
$$u^* = -\frac{\partial J^*}{\partial x}.$$
Since $\partial^2 H/\partial u^2 = 1 > 0$, $u^*$ is a strict minimizer of $H$. We substitute $u^*$ into the HJB equation to get
$$-\frac{\partial J^*}{\partial t} = \frac{3}{2}x^2 + \frac{1}{2}\left( \frac{\partial J^*}{\partial x} \right)^2 - \frac{\partial J^*}{\partial x}\,x - \left( \frac{\partial J^*}{\partial x} \right)^2.$$
Simplifying yields
$$-\frac{\partial J^*}{\partial t} = \frac{3}{2}x^2 - \frac{1}{2}\left( \frac{\partial J^*}{\partial x} \right)^2 - \frac{\partial J^*}{\partial x}\,x.$$
We assume
$$J^* = \frac{1}{2}\,p(t)\,x^2.$$
Hence, $\partial J^*/\partial t = \frac{1}{2}\dot{p}x^2$ and $\partial J^*/\partial x = px$. Substituting the above into the HJB equation, we obtain
$$-\frac{1}{2}\dot{p}x^2 = \frac{3}{2}x^2 - \frac{1}{2}p^2 x^2 - p x^2,$$
that is,
$$\left( -\frac{1}{2}\dot{p} - \frac{3}{2} + \frac{1}{2}p^2 + p \right) x^2 = 0.$$
Thus, we have to solve the equation
$$\dot{p} + 3 = p^2 + 2p. \tag{1}$$
We assume that
$$p = -\frac{\dot{w}}{w}. \tag{2}$$
Hence,
$$\dot{p} = \frac{-\ddot{w}w + \dot{w}^2}{w^2}. \tag{3}$$
Substituting (2) and (3) into (1) yields
$$-\ddot{w}w + \dot{w}^2 + 3w^2 = \dot{w}^2 - 2\dot{w}w,$$
that is,
$$\left( -\ddot{w} + 3w + 2\dot{w} \right) w = 0.$$
Thus, we have to solve the differential equation
$$\ddot{w} - 2\dot{w} - 3w = 0. \tag{4}$$
The characteristic equation of (4) is
$$\lambda^2 - 2\lambda - 3 = (\lambda + 1)(\lambda - 3) = 0.$$
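The substitution (2) can be sanity-checked numerically: build a particular solution of (4) from the characteristic roots just found, say $w(t) = e^{-t} + e^{3t}$, and verify by finite differences that $p = -\dot{w}/w$ satisfies the Riccati equation (1). A minimal sketch, not part of the exam:

```python
import math

def w(t):        # a particular solution of w'' - 2w' - 3w = 0 (roots -1 and 3)
    return math.exp(-t) + math.exp(3.0 * t)

def wdot(t):
    return -math.exp(-t) + 3.0 * math.exp(3.0 * t)

def p(t):        # substitution (2): p = -w'/w
    return -wdot(t) / w(t)

def riccati_residual(t, dt=1e-6):
    # central-difference estimate of p', then residual of (1): p' + 3 - p^2 - 2p
    pdot = (p(t + dt) - p(t - dt)) / (2.0 * dt)
    return pdot + 3.0 - (p(t)**2 + 2.0 * p(t))

max_res = max(abs(riccati_residual(t)) for t in [0.0, 0.25, 0.5, 0.75, 1.0])
```

The residual is zero up to finite-difference error, confirming the linearization of the Riccati equation.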
Hence,
$$w = C_1 e^{-t} + C_2 e^{3t},$$
and
$$\dot{w} = -C_1 e^{-t} + 3C_2 e^{3t}.$$
Substituting the above two expressions into (2) yields
$$p(t) = \frac{C_1 e^{-t} - 3C_2 e^{3t}}{C_1 e^{-t} + C_2 e^{3t}}. \tag{5}$$
The boundary condition is $p(1) = 0$. Hence,
$$C_1 e^{-1} - 3C_2 e^{3} = 0, \quad \text{that is,} \quad C_1 = 3C_2 e^{4}. \tag{6}$$
We substitute (6) into (5) to get
$$p(t) = \frac{3C_2 e^{4} e^{-t} - 3C_2 e^{3t}}{3C_2 e^{4} e^{-t} + C_2 e^{3t}} = \frac{3\left( e^{4-t} - e^{3t} \right)}{3e^{4-t} + e^{3t}}.$$
Hence, the optimal state-feedback control law has the form
$$u^* = -\frac{\partial J^*}{\partial x} = -px = -\frac{3\left( e^{4-t} - e^{3t} \right)}{3e^{4-t} + e^{3t}}\, x.$$

3. Determine the optimal state-feedback controller, $u^* = k(t)x$, that minimizes
$$J = \frac{1}{2}x^2(1) + \int_0^1 u^2\, dt$$
subject to $\dot{x} = u$.
Writing the cost as $J = \frac{1}{2}Fx^2(1) + \frac{1}{2}\int_0^1 \left( Qx^2 + Ru^2 \right) dt$ gives $A = 0$, $B = 1$, $F = 1$, $Q = 0$, and $R = 2$. We form the Hamiltonian matrix
$$H = \begin{bmatrix} A & -BR^{-1}B^\top \\ -Q & -A^\top \end{bmatrix} = \begin{bmatrix} 0 & -1/2 \\ 0 & 0 \end{bmatrix}.$$
We next find the inverse Laplace transform of the inverse of the characteristic matrix of the Hamiltonian matrix,
$$e^{Ht} = \mathcal{L}^{-1}\left( [sI - H]^{-1} \right) = \mathcal{L}^{-1} \begin{bmatrix} \dfrac{1}{s} & -\dfrac{1}{2s^2} \\[2pt] 0 & \dfrac{1}{s} \end{bmatrix} = \begin{bmatrix} 1 & -\dfrac{t}{2} \\ 0 & 1 \end{bmatrix}.$$
We next compute
$$e^{H(1-t)} = \begin{bmatrix} 1 & -\frac{1}{2}(1-t) \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} \Phi_{11} & \Phi_{12} \\ \Phi_{21} & \Phi_{22} \end{bmatrix}.$$
We have $p(1) = Fx(1) = x(1)$. Hence, $p = P(t)x$, where
$$P(t) = \left( \Phi_{22} - F\Phi_{12} \right)^{-1} \left( F\Phi_{11} - \Phi_{21} \right) = \left( 1 + \frac{1}{2}(1-t) \right)^{-1} = \frac{2}{3-t}.$$
Therefore,
$$u^* = -R^{-1}B^\top p = -\frac{1}{2} \cdot \frac{2}{3-t}\, x = -\frac{1}{3-t}\, x.$$
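As a quick check (not part of the exam), $P(t) = 2/(3-t)$ should satisfy the scalar Riccati equation for this problem; with $A = 0$, $B = 1$, $Q = 0$, $R = 2$ it reduces to $\dot{P} = P^2/2$ with terminal condition $P(1) = F = 1$. A finite-difference sketch:

```python
def P(t):
    return 2.0 / (3.0 - t)

def riccati_residual(t, dt=1e-6):
    # for A = 0, B = 1, Q = 0, R = 2 the Riccati equation is dP/dt = P^2 / 2
    Pdot = (P(t + dt) - P(t - dt)) / (2.0 * dt)
    return Pdot - P(t)**2 / 2.0

max_res = max(abs(riccati_residual(t)) for t in [0.0, 0.3, 0.6, 0.9])
terminal_err = abs(P(1.0) - 1.0)   # boundary condition P(1) = F = 1
```

Both the residual and the terminal mismatch are zero up to finite-difference error.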
4. For the continuous model
$$\dot{x} = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} x + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u,$$
construct
(i) (5 pts) the Euler discrete model for the sampling period $h = 1/2$;
(ii) (10 pts) the exact discrete model for the sampling period $h = 1/2$.

(i) The Euler discrete model is obtained by approximating $\dot{x}$ as
$$\dot{x} \approx \frac{x((k+1)h) - x(kh)}{h}.$$
Taking the above into account, we obtain
$$x[k+1] = (I_2 + hA)x[k] + hb\,u[k] = \begin{bmatrix} 1 & 0.5 \\ 0 & 1 \end{bmatrix} x[k] + \begin{bmatrix} 0 \\ 0.5 \end{bmatrix} u[k].$$

(ii) The exact discrete model has the form
$$x[k+1] = \Phi x[k] + \Gamma u[k] = e^{Ah} x[k] + \left( \int_0^h e^{A\eta}\, d\eta \right) b\, u[k].$$
Note that
$$e^{Ah} = I_2 + Ah + \frac{A^2 h^2}{2} + \cdots.$$
In our case, $A^2 = O$. Hence,
$$\Phi = e^{Ah} = I_2 + Ah = \begin{bmatrix} 1 & h \\ 0 & 1 \end{bmatrix}.$$
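Because $A^2 = O$, the exponential series truncates after the linear term. The sketch below (not part of the exam) confirms that a high-order partial sum of the Taylor series agrees with the closed form $I_2 + Ah$:

```python
import numpy as np

A = np.array([[0.0, 1.0], [0.0, 0.0]])
h = 0.5

# matrix exponential via a truncated Taylor series;
# terms beyond A^1 vanish since A^2 = 0
Phi_series = np.eye(2)
term = np.eye(2)
for k in range(1, 10):
    term = term @ (A * h) / k
    Phi_series = Phi_series + term

Phi_closed = np.eye(2) + A * h    # closed form used in the solution
err = np.max(np.abs(Phi_series - Phi_closed))
```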
Next,
$$\Gamma = \left( \int_0^h e^{A\eta}\, d\eta \right) b = \int_0^h \begin{bmatrix} 1 & \eta \\ 0 & 1 \end{bmatrix} d\eta \begin{bmatrix} 0 \\ 1 \end{bmatrix} = \begin{bmatrix} h & h^2/2 \\ 0 & h \end{bmatrix} \begin{bmatrix} 0 \\ 1 \end{bmatrix} = \begin{bmatrix} h^2/2 \\ h \end{bmatrix} = \begin{bmatrix} 1/8 \\ 1/2 \end{bmatrix}.$$

5. For the discrete-time model
$$x[k+1] = x[k] + u[k], \qquad y[k] = [\,1\,]\,x[k],$$
construct the augmented discrete-time system that you would use in the design of a model predictive controller.

The augmented discrete-time system has the form
$$x_a[k+1] = \Phi_a x_a[k] + \Gamma_a \Delta u[k], \qquad y[k] = C_a x_a[k],$$
where $\Delta x[k] = x[k] - x[k-1]$, $\Delta u[k] = u[k] - u[k-1]$, the augmented state vector is defined as
$$x_a[k] = \begin{bmatrix} \Delta x[k] \\ y[k] \end{bmatrix},$$
and
$$\Phi_a = \begin{bmatrix} \Phi & O \\ C\Phi & I_p \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix},$$
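The block pattern of the augmentation can be sketched for a general pair $(\Phi, \Gamma, C)$; the `augment` helper below is a hypothetical name, not part of the exam, and the remaining augmented matrices follow the same block construction. The scalar entries used are the ones from this problem ($\Phi = 1$, $\Gamma = 1$, $C = 1$):

```python
import numpy as np

def augment(Phi, Gamma, C):
    """Velocity-form MPC augmentation:
    x_a[k] = [dx[k]; y[k]],  x_a[k+1] = Phi_a x_a[k] + Gamma_a du[k]."""
    n = Phi.shape[0]
    p = C.shape[0]
    Phi_a = np.block([[Phi, np.zeros((n, p))],
                      [C @ Phi, np.eye(p)]])
    Gamma_a = np.vstack([Gamma, C @ Gamma])
    C_a = np.hstack([np.zeros((p, n)), np.eye(p)])
    return Phi_a, Gamma_a, C_a

# scalar model from this problem: Phi = 1, Gamma = 1, C = 1
Phi_a, Gamma_a, C_a = augment(np.array([[1.0]]),
                              np.array([[1.0]]),
                              np.array([[1.0]]))
```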
The input and output matrices of the augmented system are
$$\Gamma_a = \begin{bmatrix} \Gamma \\ C\Gamma \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \qquad C_a = \begin{bmatrix} O & I_p \end{bmatrix} = \begin{bmatrix} 0 & 1 \end{bmatrix}.$$

6. Consider the following model of a discrete-time system,
$$x(k+1) = 2x(k) + u(k), \qquad x(0) = 0, \qquad k = 0, 1, 2.$$
Use the Lagrange multiplier approach to calculate the optimal control sequence $\{u(0), u(1), u(2)\}$ that transfers the initial state $x(0)$ to $x(3) = 7$ while minimizing the performance index
$$J = \frac{1}{2}\sum_{k=0}^{2} u(k)^2.$$
We begin by defining the composite input vector
$$u = \begin{bmatrix} u(0) & u(1) & u(2) \end{bmatrix}^\top.$$
Then the performance index $J$ can be represented as $J = \frac{1}{2}u^\top u$. Next, we write the plant model in the form $mu = f$, where $m \in \mathbb{R}^{1 \times 3}$ and $f$ is a scalar. We proceed as follows. First we write
$$x(2) = 2x(1) + u(1) = 4x(0) + 2u(0) + u(1).$$
Using the above, we obtain
$$x(3) = 7 = 2x(2) + u(2) = 8x(0) + 4u(0) + 2u(1) + u(2).$$
We represent the above in the format $mu = f$ as follows:
$$\begin{bmatrix} 4 & 2 & 1 \end{bmatrix} \begin{bmatrix} u(0) \\ u(1) \\ u(2) \end{bmatrix} = 7.$$
Thus we formulated the problem of finding the optimal control sequence as a constrained optimization problem:
$$\min \tfrac{1}{2} u^\top u \quad \text{subject to} \quad mu = f.$$
To solve the above problem, we form the Lagrangian
$$l(u, \lambda) = \tfrac{1}{2} u^\top u + \lambda\left( mu - f \right),$$
where $\lambda$ is the Lagrange multiplier. Applying the Lagrange first-order conditions, we get
$$u + m^\top \lambda = 0 \quad \text{and} \quad mu = f.$$
From the first of the above conditions, we calculate $u = -m^\top \lambda$. Substituting the above into the second of the Lagrange conditions gives
$$\lambda = -\left( m m^\top \right)^{-1} f.$$
Combining the last two equations, we obtain a closed-form formula for the optimal input sequence:
$$u^* = m^\top \left( m m^\top \right)^{-1} f.$$
In our problem,
$$u^* = \begin{bmatrix} u(0) \\ u(1) \\ u(2) \end{bmatrix} = m^\top \left( m m^\top \right)^{-1} f = \frac{7}{21}\begin{bmatrix} 4 \\ 2 \\ 1 \end{bmatrix} = \begin{bmatrix} 4/3 \\ 2/3 \\ 1/3 \end{bmatrix}.$$
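The closed-form minimum-norm solution can be verified numerically: it should satisfy $mu = f$ and, when run through the plant recursion, deliver $x(3) = 7$. A sketch, not part of the exam:

```python
import numpy as np

m = np.array([[4.0, 2.0, 1.0]])   # from x(3) = 4u(0) + 2u(1) + u(2), x(0) = 0
f = 7.0

# minimum-norm solution of m u = f
u_star = (m.T @ np.linalg.inv(m @ m.T) * f).ravel()
constraint = float(m @ u_star)    # should equal f = 7

# simulate x(k+1) = 2 x(k) + u(k) from x(0) = 0
x = 0.0
for k in range(3):
    x = 2.0 * x + u_star[k]
```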
7. (15 pts) Find all the points that satisfy the KKT conditions for the following optimization problem:
$$\text{minimize } x_1^2 + 4x_2^2 \quad \text{subject to } x_1^2 + 2x_2^2 \geq 4.$$
We form the Lagrangian function,
$$l(x, \mu) = x_1^2 + 4x_2^2 + \mu\left( 4 - x_1^2 - 2x_2^2 \right).$$
The KKT conditions take the form
$$D_x l(x, \mu) = \begin{bmatrix} 2x_1 - 2\mu x_1 & 8x_2 - 4\mu x_2 \end{bmatrix} = 0, \qquad \mu \geq 0, \qquad \mu\left( 4 - x_1^2 - 2x_2^2 \right) = 0, \qquad 4 - x_1^2 - 2x_2^2 \leq 0.$$
From the first of the above equalities, we obtain
$$(1 - \mu)x_1 = 0 \quad \text{and} \quad (2 - \mu)x_2 = 0.$$
We first consider the case when $\mu = 0$. Then we obtain the point
$$x^{(1)} = \begin{bmatrix} 0 \\ 0 \end{bmatrix},$$
which does not satisfy the constraint.
The next case is when $\mu = 1$. Then we have to have $x_2 = 0$, and using $\mu\left( 4 - x_1^2 - 2x_2^2 \right) = 0$ gives
$$x^{(2)} = \begin{bmatrix} 2 \\ 0 \end{bmatrix} \quad \text{and} \quad x^{(3)} = \begin{bmatrix} -2 \\ 0 \end{bmatrix}.$$
For the case when $\mu = 2$, we have to have $x_1 = 0$, and we get
$$x^{(4)} = \begin{bmatrix} 0 \\ \sqrt{2} \end{bmatrix} \quad \text{and} \quad x^{(5)} = \begin{bmatrix} 0 \\ -\sqrt{2} \end{bmatrix}.$$
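The four feasible KKT points can be checked against the full KKT system numerically. A sketch, not part of the exam, writing the constraint as $g(x) = 4 - x_1^2 - 2x_2^2 \leq 0$:

```python
import math

def kkt_residuals(x1, x2, mu):
    """Residuals of the KKT system for min x1^2 + 4 x2^2
    s.t. x1^2 + 2 x2^2 >= 4, with l = f + mu * g, g = 4 - x1^2 - 2 x2^2."""
    grad = (2.0 * x1 - 2.0 * mu * x1,
            8.0 * x2 - 4.0 * mu * x2)                  # stationarity
    comp = mu * (4.0 - x1**2 - 2.0 * x2**2)            # complementarity
    feas = 4.0 - x1**2 - 2.0 * x2**2                   # primal feasibility (<= 0)
    return grad, comp, feas

candidates = [((2.0, 0.0), 1.0), ((-2.0, 0.0), 1.0),
              ((0.0, math.sqrt(2.0)), 2.0), ((0.0, -math.sqrt(2.0)), 2.0)]

ok = all(abs(g1) < 1e-9 and abs(g2) < 1e-9 and abs(c) < 1e-9 and fe <= 1e-9
         for ((x1, x2), mu) in candidates
         for (g1, g2), c, fe in [kkt_residuals(x1, x2, mu)])
```

All four points satisfy stationarity, complementarity, feasibility, and $\mu \geq 0$, as derived above.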