Direct Optimal Control and Costate Estimation Using Least Square Method
2010 American Control Conference
Marriott Waterfront, Baltimore, MD, USA, June 30-July 2, 2010

WeB22.1

Baljeet Singh and Raktim Bhattacharya
Aerospace Engineering Department, Texas A&M University, College Station, Texas 77843, USA

Abstract — In this paper, we present a direct method to solve optimal control problems based on the least square formulation of the state dynamics. In this approach, we approximate the state and control variables in a finite-dimensional Hilbert space. We impose the state dynamics as a weighted integral formulation based on the least square method for solving initial value problems. We analyze the resulting nonlinear programming problem to derive a set of conditions under which the costates of the optimal control problem can be estimated from the associated Karush-Kuhn-Tucker multipliers. We present numerical examples to demonstrate the applicability of the present method.

I. INTRODUCTION

Direct methods to solve optimal control problems have become extremely popular. In a direct method, the optimal control problem (OCP) is discretized by parameterizing the unknowns to transcribe the continuous-time OCP into a finite-dimensional nonlinear programming problem (NLP) [1]. The NLP is then solved using numerical optimization techniques. Compared to indirect methods, a direct method requires much less analytical derivation, and the resulting NLP can be solved with relative ease. However, direct methods do not provide information about the optimality of the solution. The Minimum Principle can be used to check the optimality of a solution resulting from a direct method; however, this requires estimates of the dual variables. Therefore, the equivalence between dualization and discretization for a direct method needs to be investigated. Direct methods for optimal control problems differ in how they approximate the state equations.
Most of these methods are based on collocation techniques, where the constraints are imposed at a set of discrete time instances; popular among these are the Hermite-Simpson (HS) and the pseudospectral (PS) methods [2]. In another category of direct methods, known as tau methods [3], [4], [5], [6], [7], global orthogonal polynomials are used to parameterize the state and control trajectories, and the polynomial coefficients are treated as the optimization variables. For many direct methods the costates can be estimated from the Karush-Kuhn-Tucker (KKT) multipliers of the NLP. In this framework, Stryk and Bulirsch [8] showed the equivalence between the two for the HS method. Hager [9] presented the convergence analysis for Runge-Kutta based direct methods. Ross and Fahroo [10], [11] presented a similar result for the Legendre pseudospectral method, where a set of closure conditions is derived under which the costates can be estimated from the KKT multipliers. Williams [12] generalized the same result for the Jacobi pseudospectral method. Benson et al. [13] have shown equivalence between the discrete costates and the KKT multipliers for the Gauss pseudospectral method. More recently, Singh et al. [14] presented the optimality analysis for the method of Hilbert space projection (MHSP).

In this paper, we present a direct method based on the least square formulation of the state dynamics. In the least square method for optimal control (LSM_oc), we approximate the state and control variables as linear combinations of a priori selected basis functions of a Hilbert space. The state dynamics is imposed as a weighted integral formulation derived from the least square method for solving initial value problems. The LSM_oc is flexible with respect to the choice of approximating functions; both local and global basis functions can be employed.

(Footnote: Graduate Student, b singh@tamu.edu; Assistant Professor, raktim@tamu.edu.)
Another main contribution of this paper is the derivation of a costate estimation procedure for the LSM_oc. We examine the KKT conditions associated with the direct optimization solution and the discretized first-order necessary conditions for the optimal control problem, to define a set of equivalence conditions under which the two approaches are completely equivalent, in which case the costate estimates can be obtained from the KKT multipliers of the NLP.

The paper is organized as follows. Section II defines the optimal control problem of interest. The detailed description of the LSM_oc is presented in Section III. In Section IV, we derive the equivalence conditions and the costate estimates for the LSM_oc. Section V presents numerical examples to demonstrate the applicability of the present method.

II. PROBLEM FORMULATION

A. Continuous-Time Mayer Problem: M

Without loss of generality, we consider an optimal control problem in Mayer form. We consider a continuous-time autonomous system with free final time. The objective is to determine the state-control pair $\{X(\tau) \in \mathbb{R}^n, U(\tau) \in \mathbb{R}^m; \tau \in [0,\tau_f]\}$ and the time instance $\tau_f$ that minimize the cost

$$J = \Psi(X(\tau_f), \tau_f), \quad (1)$$

subject to the state dynamics

$$\dot{X}(\tau) = f(X(\tau), U(\tau)), \quad (2)$$

and the end-point state equality constraints

$$X(0) = x_0; \quad \psi(X(\tau_f), \tau_f) = 0 \in \mathbb{R}^p. \quad (3)$$

It is assumed that the optimal solution to the above problem exists, and the constraint qualifications required to apply the first-order optimality conditions are implicitly assumed. We consider an autonomous system because the extension to a non-autonomous system is straightforward. Currently, we do not consider any state or control path constraints. However, the present analysis can be extended to include path constraints, which is the scope of future work.

III. THE LEAST SQUARE METHOD FOR OPTIMAL CONTROL (LSM_oc)

In a direct method to solve an optimal control problem, state and control trajectories are first approximated using a known functional form with a set of coefficients to be determined. Then, a differentiation- or integration-based method is applied to transform the state dynamics into a set of equations in the unknown coefficients [2]. In our approach, we approximate the state dynamics using the least square method, based on the following theorem.

Theorem 1: Consider an initial value problem (IVP),

$$r(t) = g(z(t)) - \dot{z}(t) = 0, \quad t \in [0,1]; \quad z(0) = a. \quad (4)$$

Let $H$ be an $N$-dimensional Hilbert space equipped with a norm $\|\cdot\|_H$ and an inner product $\langle \cdot, \cdot \rangle_H$, spanned by a set of linearly independent basis functions $\{\phi_j(t), t \in [0,1]\}$.
If $z(t) = \sum_{j=1}^{N} \alpha_j \phi_j(t) \in H$, then $z(t)$ is the stationary solution of the functional

$$\bar{J} = \|r\|_H^2 - \nu\,(z(0) - a), \quad (5)$$

where $\nu$ is a Lagrange multiplier.

Proof: The stationary conditions are given by

$$\left\langle r, \frac{\partial r}{\partial \alpha_j} \right\rangle_H - \frac{1}{2}\nu\,\phi_j(0) = 0, \quad j = 1, \dots, N, \quad (6)$$

$$z(0) = a, \quad (7)$$

which are trivially satisfied with $\nu = 0$.

In view of Theorem 1, we describe a direct method to solve Problem M. In this method, we approximate the state dynamics as a stationary solution of (5) on a finite-dimensional subspace $V$ of $H$. We approximate the state and control trajectories in $V := \mathrm{span}\{\phi_1(t), \phi_2(t), \dots, \phi_N(t), t \in [0,1]\}$, with a linearly independent basis set $\{\phi_j(t)\}$ and an inner product defined as

$$\langle p, q \rangle = \int_0^1 p^T(t)\, q(t)\, dt; \quad p(t), q(t) \in V. \quad (8)$$

We scale Problem M appropriately so that the state and control trajectories can be approximated in $V$. Since $V$ is defined over the time domain $[0,1]$, we use the following transformation to map the problem from the physical domain $\tau \in [0,\tau_f]$ to the computational domain $t \in [0,1]$:

$$\tau(t) = \tau_f\, t. \quad (9)$$

The state and control trajectories are approximated as $\hat{x}(t), \hat{u}(t) \in V$, so that

$$x(t) = X(\tau(t)) \approx \hat{x}(t) = \sum_{k=1}^{N} \alpha_k \phi_k(t), \quad (10)$$

$$u(t) = U(\tau(t)) \approx \hat{u}(t) = \sum_{k=1}^{N} \beta_k \phi_k(t), \quad (11)$$

where $\alpha_k \in \mathbb{R}^n$ and $\beta_k \in \mathbb{R}^m$ are the unknowns. Differentiating the expression in (10) with respect to $\tau$ and using (9), we get

$$\frac{dX(\tau(t))}{d\tau} = \frac{1}{\tau_f}\, \dot{\hat{x}}(t) = \frac{1}{\tau_f} \sum_{k=1}^{N} \alpha_k \dot{\phi}_k(t), \quad (12)$$

where an overdot denotes the derivative with respect to $t$. Using (2) and (12), we define the residual error in the state dynamics as

$$r(t) = \tau_f\, f(\hat{x}(t), \hat{u}(t)) - \dot{\hat{x}}(t). \quad (13)$$

Using (6), (7) and (8), we approximate the state dynamics by the following weighted integral form:

$$\int_0^1 \left[ \tau_f\, f_x^T(\hat{x}(t), \hat{u}(t))\, \phi_j(t) - I\, \dot{\phi}_j(t) \right] r(t)\, dt - \frac{1}{2}\nu\,\phi_j(0) = 0, \quad j = 1, \dots, N, \quad (14)$$

$$\hat{x}(0) - x_0 = 0, \quad (15)$$
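As a concrete illustration of Theorem 1 (a hypothetical sketch, not the authors' code), the snippet below solves the linear IVP $\dot{z} = -z$, $z(0) = 1$ by minimizing the discretized $L_2$ norm of the residual over monomial basis coefficients; the initial condition is imposed exactly by fixing the first coefficient rather than through the multiplier $\nu$.

```python
import numpy as np

# Least-squares solution of the IVP  z'(t) = -z(t), z(0) = 1  on t in [0, 1],
# in the spirit of Theorem 1: expand z in monomials and minimize the L2 norm
# of the residual r = g(z) - z' = -z - z', approximated on a dense grid.
N = 6                                    # basis 1, t, ..., t^(N-1)
tq = np.linspace(0.0, 1.0, 200)          # grid approximating the L2 integral
Phi = np.vander(tq, N, increasing=True)              # phi_j(t) = t^j
dPhi = np.hstack([np.zeros((tq.size, 1)),
                  Phi[:, :-1] * np.arange(1, N)])    # d/dt of each phi_j

# r = -(Phi + dPhi) a;  minimize ||(Phi + dPhi) a||  subject to  a[0] = 1,
# imposed here by eliminating a[0] instead of using a multiplier.
A = Phi + dPhi
b, *_ = np.linalg.lstsq(A[:, 1:], -A[:, 0], rcond=None)
a = np.concatenate(([1.0], b))

z = Phi @ a                              # approximation of exp(-t) on the grid
```

With six basis functions the reconstructed trajectory matches $e^{-t}$ to well below plotting accuracy, which is the behavior the weighted integral form (14)-(15) generalizes to nonlinear dynamics.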
with the end-point equality constraint in (3) imposed as

$$\psi(\hat{x}(1), \tau_f) = 0. \quad (16)$$

Here $\nu \in \mathbb{R}^n$ is an unknown to be determined.

A. Nonlinear Programming Problem (NLP): M_φ

Function approximation of the state and control trajectories using (10) and (11), combined with the least square formulation in (14), (15) and (16), transcribes Problem M into a finite-dimensional nonlinear programming problem, denoted as Problem M_φ. For the subsequent treatment, we denote the approximate state dynamics as

$$\hat{f}(\alpha_k, \beta_k, \phi_k(t)) = f(\hat{x}(t), \hat{u}(t)). \quad (17)$$

Using similar notation for all other functionals, Problem M_φ is to determine $\{\alpha_k \in \mathbb{R}^n, \beta_k \in \mathbb{R}^m\}$, $\nu \in \mathbb{R}^n$ and the time instance $\tau_f$ that minimize the cost

$$\hat{J} = \Psi(\alpha_k, \phi_k(1), \tau_f), \quad (18)$$

subject to the constraints

$$\int_0^1 \left[ \tau_f\, \hat{f}_x^T(\alpha_k, \beta_k, \phi_k)\, \phi_j - I\,\dot{\phi}_j \right] \left[ \tau_f\, \hat{f}(\alpha_k, \beta_k, \phi_k) - \sum_{k=1}^{N} \alpha_k \dot{\phi}_k \right] dt - \frac{1}{2}\nu\,\phi_j(0) = 0, \quad (19)$$

$$\sum_{k=1}^{N} \alpha_k \phi_k(0) - x_0 = 0, \qquad \psi(\alpha_k, \phi_k(1), \tau_f) = 0, \quad (20)$$

where $j = 1, \dots, N$. Problem M_φ, constituted by (18) to (20), can be solved using standard numerical optimization software. Any numerical quadrature scheme can be used to evaluate the integral expressions in (19). For the results presented in this paper, we use SNOPT [15] as the optimization solver and MATLAB's Symbolic Math Toolbox for integral evaluations. Next, to facilitate the derivation of the costate estimation procedure and the related equivalence conditions, we derive the KKT first-order necessary conditions associated with Problem M_φ.

B. KKT Conditions for Problem M_φ: M_φλ

The Lagrangian for Problem M_φ is formed by adjoining the cost function with the constraint equations. For brevity, we use $\hat{f}$ to denote $\hat{f}(\alpha_k, \beta_k, \phi_k)$. Using similar notation for all other variables, we have

$$\bar{J} = \Psi + \sum_{j=1}^{N} \gamma_j^T \left\{ \int_0^1 \left[ \tau_f\, \hat{f}_x^T \phi_j - I\,\dot{\phi}_j \right] \left[ \tau_f\, \hat{f} - \sum_{k=1}^{N} \alpha_k \dot{\phi}_k \right] dt - \frac{1}{2}\nu\,\phi_j(0) \right\} + \mu^T \left( \sum_{k=1}^{N} \alpha_k \phi_k(0) - x_0 \right) + \eta^T \psi, \quad (21)$$

where $\gamma_j \in \mathbb{R}^n$, $\mu \in \mathbb{R}^n$ and $\eta \in \mathbb{R}^p$ are the KKT multipliers associated with the constraints given by (19) and (20), respectively.
The KKT first-order necessary conditions are obtained by setting the derivatives of the Lagrangian $\bar{J}$ with respect to the unknowns $\{\alpha_i, \beta_i, \nu, \gamma_i, \mu, \eta, \tau_f\}$ equal to zero. For $i = 1, \dots, N$, we have

$$\frac{\partial \bar{J}}{\partial \alpha_i} = \int_0^1 \sum_{j=1}^{N} \left[ \dot{\phi}_i\dot{\phi}_j - \tau_f\, \hat{f}_x^T \phi_i\dot{\phi}_j - \tau_f\, \hat{f}_x \dot{\phi}_i\phi_j + \tau_f^2\, \hat{f}_x^T \hat{f}_x\, \phi_i\phi_j \right] \gamma_j\, dt + \int_0^1 \sum_{j=1}^{N} \left[ \tau_f\, r^T \hat{f}_{xx}\, \phi_i\phi_j \right] \gamma_j\, dt + \mu\,\phi_i(0) + \left[ \Psi_{x(1)} + \psi_{x(1)}^T \eta \right] \phi_i(1) = 0. \quad (22)$$

Using integration by parts, we write

$$\int_0^1 \dot{\phi}_i(t)\,\dot{\phi}_j(t)\, dt = \left[ \phi_i\, \dot{\phi}_j \right]_0^1 - \int_0^1 \ddot{\phi}_j(t)\,\phi_i(t)\, dt, \quad (23)$$

$$\int_0^1 \tau_f\, \hat{f}_x\, \dot{\phi}_i\, \phi_j\, dt = \left[ \tau_f\, \hat{f}_x\, \phi_j\, \phi_i \right]_0^1 - \int_0^1 \frac{d}{dt}\!\left( \tau_f\, \hat{f}_x\, \phi_j \right) \phi_i\, dt. \quad (24)$$

Using (22), (23), (24) and re-arranging, we get

$$0 = \int_0^1 \Big[ -\sum_j \gamma_j\ddot{\phi}_j - \tau_f\, \hat{f}_x^T \sum_j \gamma_j\dot{\phi}_j + \frac{d}{dt}\Big( \tau_f\, \hat{f}_x \sum_j \gamma_j\phi_j \Big) + \tau_f^2\, \hat{f}_x^T \hat{f}_x \sum_j \gamma_j\phi_j \Big] \phi_i\, dt + \int_0^1 \Big[ \tau_f\, r^T \hat{f}_{xx} \sum_j \gamma_j\phi_j \Big] \phi_i\, dt + \left[ \Psi_{x(1)} + \psi_{x(1)}^T \eta \right] \phi_i(1) + \mu\,\phi_i(0) - \Big[ \tau_f\, \hat{f}_x \sum_j \gamma_j\phi_j(1) - \sum_j \gamma_j\dot{\phi}_j(1) \Big] \phi_i(1) + \Big[ \tau_f\, \hat{f}_x \sum_j \gamma_j\phi_j(0) - \sum_j \gamma_j\dot{\phi}_j(0) \Big] \phi_i(0). \quad (25)$$

Also,

$$\frac{\partial \bar{J}}{\partial \mu} = \sum_k \alpha_k \phi_k(0) - x_0 = 0, \qquad \frac{\partial \bar{J}}{\partial \eta} = \psi = 0, \quad (26)$$

$$\frac{\partial \bar{J}}{\partial \nu} = -\frac{1}{2} \sum_j \gamma_j \phi_j(0) = 0, \quad (27)$$

$$\frac{\partial \bar{J}}{\partial \beta_i} = \int_0^1 \tau_f \Big[ \hat{f}_u^T \Big( \tau_f\, \hat{f}_x \sum_j \gamma_j\phi_j - \sum_j \gamma_j\dot{\phi}_j \Big) \Big] \phi_i\, dt + \int_0^1 \Big[ \tau_f\, r^T \hat{f}_{xu} \sum_j \gamma_j\phi_j \Big] \phi_i\, dt = 0, \quad (28)$$

$$\frac{\partial \bar{J}}{\partial \gamma_i} = \int_0^1 \left[ \tau_f\, \hat{f}_x^T \phi_i - I\,\dot{\phi}_i \right] \left[ \tau_f\, \hat{f} - \sum_k \alpha_k \dot{\phi}_k \right] dt - \frac{1}{2}\nu\,\phi_i(0) = 0, \quad (29)$$
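The integration-by-parts step in (23) can be checked numerically in the simplest setting (an illustrative sketch with monomial basis functions, chosen here as an assumption): $\int_0^1 \dot{\phi}_i\dot{\phi}_j\, dt = [\phi_i\dot{\phi}_j]_0^1 - \int_0^1 \phi_i\ddot{\phi}_j\, dt$.

```python
import numpy as np

# Numerical check of the integration-by-parts identity (23) for
# phi_i(t) = t^i, phi_j(t) = t^j on [0, 1], using trapezoid quadrature.
i, j = 2, 3
t = np.linspace(0.0, 1.0, 10001)
trap = lambda f: float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t)))

phi_i, dphi_i = t ** i, i * t ** (i - 1)
dphi_j, ddphi_j = j * t ** (j - 1), j * (j - 1) * t ** (j - 2)

lhs = trap(dphi_i * dphi_j)                         # int phi_i' phi_j' dt
boundary = phi_i[-1] * dphi_j[-1] - phi_i[0] * dphi_j[0]
rhs = boundary - trap(phi_i * ddphi_j)              # [phi_i phi_j'] - int phi_i phi_j''
```

For $i = 2$, $j = 3$ both sides evaluate to $\int_0^1 6t^3\, dt = 3/2$, confirming the identity used to move derivatives off $\phi_i$ in (25).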
$$\frac{\partial \bar{J}}{\partial \tau_f} = \sum_{j=1}^{N} \gamma_j^T \int_0^1 \left[ \tau_f\, \hat{f}_x^T \phi_j - I\,\dot{\phi}_j \right] \hat{f}\, dt + \sum_{j=1}^{N} \gamma_j^T \int_0^1 \hat{f}_x^T\, r\, \phi_j\, dt + \left[ \Psi_{\tau_f} + \eta^T \psi_{\tau_f} \right] = 0. \quad (30)$$

Equations (25) through (30) constitute the KKT conditions for Problem M_φ. Next, we derive the first-order optimality conditions for Problem M, which are then discretized to derive the equivalence between the costates of the optimal control problem and the KKT multipliers of the associated NLP.

IV. COSTATE ESTIMATION

As stated earlier, Singh and Bhattacharya [14] derived a set of conditions under which a linear mapping exists between the costates and the KKT multipliers of the nonlinear programming problem for the MHSP. More rigorously, their derivation is based on the commutative nature of Problems B_λφ and B_φλ under a set of closure conditions, where Problem B_φλ is the set of KKT conditions associated with the NLP and Problem B_λφ is the set of discretized first-order optimality conditions. We adopt a similar approach with a slight modification. We introduce a set of auxiliary costates, derived from the true costates of M. We write the first-order optimality conditions for M in terms of the auxiliary costates. These auxiliary first-order optimality conditions are then compared to M_φλ to derive the equivalence conditions under which the two problems commute.

A. First-Order Optimality Conditions for M: M_λ

Problem M can be solved by applying the calculus of variations and Pontryagin's minimum principle. In this framework, the first-order necessary conditions for optimality lead to a two-point boundary value problem, derived using the augmented Hamiltonian $H$ and the terminal cost $C$, defined as

$$H(X, U, \Lambda) = \Lambda^T(\tau)\, f(X(\tau), U(\tau)), \quad (31)$$

$$C(X(\tau_f), \tau_f, \upsilon, \kappa) = \Psi(X(\tau_f), \tau_f) + \upsilon^T \psi(X(\tau_f), \tau_f) + \kappa^T (X(0) - x_0), \quad (32)$$

where $\Lambda(\tau) \in \mathbb{R}^n$ is the costate, and $\upsilon \in \mathbb{R}^p$ and $\kappa \in \mathbb{R}^n$ are Lagrange multipliers. Time dependence of the state and control trajectories has been dropped for brevity.
Problem M seeks to find the functions $\{X(\tau), U(\tau), \Lambda(\tau); \tau \in [0, \tau_f]\}$, vectors $\upsilon, \kappa$ and the time instance $\tau_f$ that satisfy the following conditions:

$$\dot{X} = f, \quad X(0) - x_0 = 0, \quad \psi(X(\tau_f), \tau_f) = 0,$$
$$H_u = f_u^T \Lambda = 0, \quad \dot{\Lambda} = -H_x = -f_x^T \Lambda,$$
$$\{\Lambda(0), \Lambda(\tau_f)\} = \{-\kappa,\ C_{X(\tau_f)}\}, \quad H\big|_{\tau = \tau_f} = -C_{\tau_f}. \quad (33)$$

B. Auxiliary First-Order Optimality Conditions: M_ρ

Here we introduce a set of auxiliary costates $\rho(t) \in \mathbb{R}^n$, $t \in [0,1]$, which lead to the estimation of the actual costates $\lambda(t)$ by establishing equivalence with the KKT multipliers of the NLP Problem M_φλ. We define $\rho(t)$ as the solution of the differential equation

$$\Lambda(\tau(t)) = \lambda(t) = \tau_f\, f_x\, \rho(t) - \dot{\rho}(t); \quad \rho(0) = 0. \quad (34)$$

The auxiliary first-order optimality conditions are derived from (33) using (34) and (9):

$$\dot{x}(t) = \tau_f\, f(x, u), \quad x(0) - x_0 = 0, \quad \psi(x(1), \tau_f) = 0,$$
$$\ddot{\rho} - \tau_f \frac{d}{dt}\big( f_x\, \rho \big) + \tau_f\, f_x^T \dot{\rho} - \tau_f^2\, f_x^T f_x\, \rho = 0,$$
$$H_u = \tau_f\, f_u^T \big( \tau_f\, f_x\, \rho - \dot{\rho} \big) = 0,$$
$$\rho(0) = 0, \quad \tau_f\, f_x\, \rho(0) - \dot{\rho}(0) = -\kappa, \quad \tau_f\, f_x\, \rho(1) - \dot{\rho}(1) = C_{x(1)},$$
$$\big( \tau_f\, \rho^T f_x^T - \dot{\rho}^T \big)\, f\, \big|_{t=1} + \Psi_{\tau_f} + \upsilon^T \psi_{\tau_f} = 0. \quad (35)$$

C. Discretized First-Order Optimality Conditions: M_ρφ

Problem M_ρ, defined by the equation set (35), must be discretized to obtain the conditions for optimality in the functional space $V$. The auxiliary costates are approximated as

$$\rho(t) \approx \sum_{k=1}^{N} \gamma_k \phi_k(t), \quad (36)$$

where $\gamma_k \in \mathbb{R}^n$. Using (10), (11) and (36), we obtain the following weighted integral equations for the equation set (35):

$$\int_0^1 \left[ \tau_f\, \hat{f}_x^T \phi_i - I\,\dot{\phi}_i \right] \left[ \tau_f\, \hat{f} - \sum_k \alpha_k \dot{\phi}_k \right] dt - \frac{1}{2}\pi\,\phi_i(0) = 0, \quad (37)$$

$$\int_0^1 \Big[ \sum_k \gamma_k \ddot{\phi}_k - \tau_f \frac{d}{dt}\Big( \hat{f}_x \sum_k \gamma_k \phi_k \Big) + \tau_f\, \hat{f}_x^T \sum_k \gamma_k \dot{\phi}_k - \tau_f^2\, \hat{f}_x^T \hat{f}_x \sum_k \gamma_k \phi_k \Big] \phi_i\, dt = 0, \quad (38)$$

$$\int_0^1 \tau_f \Big[ \hat{f}_u^T \Big( \tau_f\, \hat{f}_x \sum_k \gamma_k \phi_k - \sum_k \gamma_k \dot{\phi}_k \Big) \Big] \phi_i\, dt = 0, \quad (39)$$

$$\sum_k \alpha_k \phi_k(0) - x_0 = 0, \qquad \psi(\alpha_k, \phi_k(1), \tau_f) = 0, \quad (40)$$

$$-\kappa = \tau_f\, \hat{f}_x \sum_k \gamma_k \phi_k(0) - \sum_k \gamma_k \dot{\phi}_k(0), \quad (41)$$

$$C_{x(1)} = \tau_f\, \hat{f}_x \sum_k \gamma_k \phi_k(1) - \sum_k \gamma_k \dot{\phi}_k(1), \qquad 0 = \sum_k \gamma_k \phi_k(0), \quad (42)$$

$$\Big( \tau_f \sum_k \gamma_k^T \phi_k\, \hat{f}_x^T - \sum_k \gamma_k^T \dot{\phi}_k \Big)\, \hat{f}\, \Big|_{t=1} + C_{\tau_f} = 0. \quad (43)$$
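The conditions (33) define a two-point boundary value problem. As a self-contained toy illustration (not from the paper, and using a fixed final time with a Lagrange running cost for simplicity), the sketch below solves the BVP arising from $\min \tfrac{1}{2}\int_0^1 u^2\, dt$ with $\dot{x} = u$, $x(0) = 0$, $x(1) = 1$, where $H = \lambda u + u^2/2$ gives $u = -\lambda$ and $\dot{\lambda} = -H_x = 0$.

```python
import numpy as np
from scipy.integrate import solve_bvp

# Two-point BVP from the first-order conditions of a toy problem:
#   x' = u = -lam,  lam' = 0,  x(0) = 0,  x(1) = 1.
def odes(t, z):                  # z = [x, lam]
    lam = z[1]
    return np.vstack([-lam, np.zeros_like(lam)])

def bc(z0, z1):                  # boundary conditions x(0) = 0, x(1) = 1
    return np.array([z0[0], z1[0] - 1.0])

t = np.linspace(0.0, 1.0, 11)
sol = solve_bvp(odes, bc, t, np.zeros((2, t.size)))
# the costate is constant, lam = -1, so the optimal control is u = 1
```

Comparing such an indirect BVP solution with the KKT multipliers of the direct transcription is exactly the kind of cross-check that the equivalence conditions of this section formalize.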
University of Louisville ThinkIR: The University of Louisville's Institutional Repository Electronic Theses and Dissertations 5-206 A radial basis function method for solving optimal control problems.
More informationOutline. 1 Interpolation. 2 Polynomial Interpolation. 3 Piecewise Polynomial Interpolation
Outline Interpolation 1 Interpolation 2 3 Michael T. Heath Scientific Computing 2 / 56 Interpolation Motivation Choosing Interpolant Existence and Uniqueness Basic interpolation problem: for given data
More informationGeneralized B-spline functions method for solving optimal control problems
Computational Methods for Differential Equations http://cmde.tabrizu.ac.ir Vol. 2, No. 4, 24, pp. 243-255 Generalized B-spline functions method for solving optimal control problems Yousef Edrisi Tabriz
More informationAn efficient hybrid pseudo-spectral method for solving optimal control of Volterra integral systems
MATHEMATICAL COMMUNICATIONS 417 Math. Commun. 19(14), 417 435 An efficient hybrid pseudo-spectral method for solving optimal control of Volterra integral systems Khosrow Maleknejad 1, and Asyieh Ebrahimzadeh
More informationConstrained Optimization and Lagrangian Duality
CIS 520: Machine Learning Oct 02, 2017 Constrained Optimization and Lagrangian Duality Lecturer: Shivani Agarwal Disclaimer: These notes are designed to be a supplement to the lecture. They may or may
More informationFETI domain decomposition method to solution of contact problems with large displacements
FETI domain decomposition method to solution of contact problems with large displacements Vít Vondrák 1, Zdeněk Dostál 1, Jiří Dobiáš 2, and Svatopluk Pták 2 1 Dept. of Appl. Math., Technical University
More informationNumerical Optimization of Partial Differential Equations
Numerical Optimization of Partial Differential Equations Part I: basic optimization concepts in R n Bartosz Protas Department of Mathematics & Statistics McMaster University, Hamilton, Ontario, Canada
More informationPseudospectral Methods For Op2mal Control. Jus2n Ruths March 27, 2009
Pseudospectral Methods For Op2mal Control Jus2n Ruths March 27, 2009 Introduc2on Pseudospectral methods arose to find solu2ons to Par2al Differen2al Equa2ons Recently adapted for Op2mal Control Key Ideas
More informationIntroduction to the Optimal Control Software GPOPS II
Introduction to the Optimal Control Software GPOPS II Anil V. Rao Department of Mechanical and Aerospace Engineering University of Florida Gainesville, FL 32611-625 Tutorial on GPOPS II NSF CBMS Workshop
More informationOPTIMAL CONTROL CHAPTER INTRODUCTION
CHAPTER 3 OPTIMAL CONTROL What is now proved was once only imagined. William Blake. 3.1 INTRODUCTION After more than three hundred years of evolution, optimal control theory has been formulated as an extension
More informationNonlinear Control Lecture # 14 Input-Output Stability. Nonlinear Control
Nonlinear Control Lecture # 14 Input-Output Stability L Stability Input-Output Models: y = Hu u(t) is a piecewise continuous function of t and belongs to a linear space of signals The space of bounded
More informationThe Simple Double Pendulum
The Simple Double Pendulum Austin Graf December 13, 2013 Abstract The double pendulum is a dynamic system that exhibits sensitive dependence upon initial conditions. This project explores the motion of
More informationLecture 4 Continuous time linear quadratic regulator
EE363 Winter 2008-09 Lecture 4 Continuous time linear quadratic regulator continuous-time LQR problem dynamic programming solution Hamiltonian system and two point boundary value problem infinite horizon
More informationSemi-Analytical Guidance Algorithm for Fast Retargeting Maneuvers Computation during Planetary Descent and Landing
ASTRA 2013 - ESA/ESTEC, Noordwijk, the Netherlands Semi-Analytical Guidance Algorithm for Fast Retargeting Maneuvers Computation during Planetary Descent and Landing Michèle LAVAGNA, Paolo LUNGHI Politecnico
More informationProblem 1 Cost of an Infinite Horizon LQR
THE UNIVERSITY OF TEXAS AT SAN ANTONIO EE 5243 INTRODUCTION TO CYBER-PHYSICAL SYSTEMS H O M E W O R K # 5 Ahmad F. Taha October 12, 215 Homework Instructions: 1. Type your solutions in the LATEX homework
More informationPreconditioning for continuation model predictive control
MITSUBISHI ELECTRIC RESEARCH LABORATORIES http://wwwmerlcom Preconditioning for continuation model predictive control Knyazev, A; Malyshev, A TR215-112 September 215 Abstract Model predictive control (MPC)
More informationNOTES ON CALCULUS OF VARIATIONS. September 13, 2012
NOTES ON CALCULUS OF VARIATIONS JON JOHNSEN September 13, 212 1. The basic problem In Calculus of Variations one is given a fixed C 2 -function F (t, x, u), where F is defined for t [, t 1 ] and x, u R,
More informationTriangle Formation Design in Eccentric Orbits Using Pseudospectral Optimal Control
Triangle Formation Design in Eccentric Orbits Using Pseudospectral Optimal Control Qi Gong University of California, Santa Cruz, CA I. Michael Ross Naval Postgraduate School, Monterey, CA K. T. Alfriend
More informationWeek 1. 1 The relativistic point particle. 1.1 Classical dynamics. Reading material from the books. Zwiebach, Chapter 5 and chapter 11
Week 1 1 The relativistic point particle Reading material from the books Zwiebach, Chapter 5 and chapter 11 Polchinski, Chapter 1 Becker, Becker, Schwartz, Chapter 2 1.1 Classical dynamics The first thing
More informationNUMERICAL INTEGRATION OF CONSTRAINED MULTI-BODY DYNAMICAL SYSTEMS USING 5 T H ORDER EXACT ANALYTIC CONTINUATION ALGORITHM
(Preprint) AAS 12-638 NUMERICAL INTEGRATION OF CONSTRAINED MULTI-BODY DYNAMICAL SYSTEMS USING 5 T H ORDER EXACT ANALYTIC CONTINUATION ALGORITHM Ahmad Bani Younes, and James Turner Many numerical integration
More informationSelf-Concordant Barrier Functions for Convex Optimization
Appendix F Self-Concordant Barrier Functions for Convex Optimization F.1 Introduction In this Appendix we present a framework for developing polynomial-time algorithms for the solution of convex optimization
More informationOptimal Control. Macroeconomics II SMU. Ömer Özak (SMU) Economic Growth Macroeconomics II 1 / 112
Optimal Control Ömer Özak SMU Macroeconomics II Ömer Özak (SMU) Economic Growth Macroeconomics II 1 / 112 Review of the Theory of Optimal Control Section 1 Review of the Theory of Optimal Control Ömer
More informationConvexity of the Reachable Set of Nonlinear Systems under L 2 Bounded Controls
1 1 Convexity of the Reachable Set of Nonlinear Systems under L 2 Bounded Controls B.T.Polyak Institute for Control Science, Moscow, Russia e-mail boris@ipu.rssi.ru Abstract Recently [1, 2] the new convexity
More informationNonlinear error dynamics for cycled data assimilation methods
Nonlinear error dynamics for cycled data assimilation methods A J F Moodey 1, A S Lawless 1,2, P J van Leeuwen 2, R W E Potthast 1,3 1 Department of Mathematics and Statistics, University of Reading, UK.
More informationAlgorithm 902: GPOPS, A MATLAB Software for Solving Multiple-Phase Optimal Control Problems Using the Gauss Pseudospectral Method
Algorithm 902: GPOPS, A MATLAB Software for Solving Multiple-Phase Optimal Control Problems Using the Gauss Pseudospectral Method ANIL V. RAO University of Florida DAVID A. BENSON The Charles Stark Draper
More informationSolution of Stochastic Optimal Control Problems and Financial Applications
Journal of Mathematical Extension Vol. 11, No. 4, (2017), 27-44 ISSN: 1735-8299 URL: http://www.ijmex.com Solution of Stochastic Optimal Control Problems and Financial Applications 2 Mat B. Kafash 1 Faculty
More informationPseudospectral optimal control of active magnetic bearing systems
Scientia Iranica B (014) 1(5), 1719{175 Sharif University of Technology Scientia Iranica Transactions B: Mechanical Engineering www.scientiairanica.com Research Note Pseudospectral optimal control of active
More information1 Quantum fields in Minkowski spacetime
1 Quantum fields in Minkowski spacetime The theory of quantum fields in curved spacetime is a generalization of the well-established theory of quantum fields in Minkowski spacetime. To a great extent,
More informationLinear classifiers selecting hyperplane maximizing separation margin between classes (large margin classifiers)
Support vector machines In a nutshell Linear classifiers selecting hyperplane maximizing separation margin between classes (large margin classifiers) Solution only depends on a small subset of training
More informationAn hp-adaptive pseudospectral method for solving optimal control problems
OPTIMAL CONTROL APPLICATIONS AND METHODS Optim. Control Appl. Meth. 011; 3:476 50 Published online 6 August 010 in Wiley Online Library (wileyonlinelibrary.com)..957 An hp-adaptive pseudospectral method
More informationInner Product Spaces An inner product on a complex linear space X is a function x y from X X C such that. (1) (2) (3) x x > 0 for x 0.
Inner Product Spaces An inner product on a complex linear space X is a function x y from X X C such that (1) () () (4) x 1 + x y = x 1 y + x y y x = x y x αy = α x y x x > 0 for x 0 Consequently, (5) (6)
More informationUNIVERSITY OF CALIFORNIA SANTA CRUZ
UNIVERSITY OF CALIFORNIA SANTA CRUZ COMPUTATIONAL OPTIMAL CONTROL OF NONLINEAR SYSTEMS WITH PARAMETER UNCERTAINTY A dissertation submitted in partial satisfaction of the requirements for the degree of
More informationSupport Vector Machines for Regression
COMP-566 Rohan Shah (1) Support Vector Machines for Regression Provided with n training data points {(x 1, y 1 ), (x 2, y 2 ),, (x n, y n )} R s R we seek a function f for a fixed ɛ > 0 such that: f(x
More informationACM/CMS 107 Linear Analysis & Applications Fall 2016 Assignment 4: Linear ODEs and Control Theory Due: 5th December 2016
ACM/CMS 17 Linear Analysis & Applications Fall 216 Assignment 4: Linear ODEs and Control Theory Due: 5th December 216 Introduction Systems of ordinary differential equations (ODEs) can be used to describe
More information