ES22 Lecture Notes #11 (YCHo, 11/2/98)


Theoretical Justification for LQ Problems

Sufficiency condition: the LQ problem is the second-order expansion of a nonlinear optimal control problem. Consider

    J = \phi(x(t_f)) + \int_{t_0}^{t_f} L(x,u,t)\,dt ; \quad \dot x = f(x,u,t) ; \quad x(t_0) = x_0 ; \quad \psi(x(t_f)) = 0        (P)

Suppose all the necessary conditions for (P) have been satisfied. What about a sufficiency condition? Recall that for a static optimization problem with equality constraints, the trick is to expand the augmented criterion to second order and the constraints to first order. Here the augmented criterion is

    \bar J = \phi(x(t_f)) + \int_{t_0}^{t_f} \left[ L(x,u,t) + \lambda^T (f - \dot x) \right] dt

and its expansion to second order is

    \delta \bar J = [\phi_x - \lambda^T]\,\delta x(t_f)
        + \int_{t_0}^{t_f} \left[ (\dot\lambda^T + H_x)\,\delta x(t) + H_u\,\delta u(t) \right] dt
        + \tfrac12\,\delta x(t_f)^T \phi_{xx}\,\delta x(t_f)
        + \tfrac12 \int_{t_0}^{t_f} \begin{bmatrix} \delta x \\ \delta u \end{bmatrix}^T \begin{bmatrix} H_{xx} & H_{xu} \\ H_{ux} & H_{uu} \end{bmatrix} \begin{bmatrix} \delta x \\ \delta u \end{bmatrix} dt,

while the constraints, expanded to first order, are

    \delta\dot x = f_x\,\delta x + f_u\,\delta u ; \quad \delta x(t_0) = \delta x_0 = 0.

Choosing \dot\lambda^T = -H_x with \lambda^T(t_f) = \phi_x(t_f), and H_u = 0 for all t (the necessary conditions), the first-order terms drop out and we are left with

    \delta\dot x = f_x\,\delta x + f_u\,\delta u ; \quad \delta x(t_0) = \delta x_0 = 0

    \delta^2 J = \tfrac12\,\delta x(t_f)^T \phi_{xx}\,\delta x(t_f)
        + \tfrac12 \int_{t_0}^{t_f} \begin{bmatrix} \delta x \\ \delta u \end{bmatrix}^T \begin{bmatrix} H_{xx} & H_{xu} \\ H_{ux} & H_{uu} \end{bmatrix} \begin{bmatrix} \delta x \\ \delta u \end{bmatrix} dt,

which is recognized as an LQ problem. Thus if we can show that the minimum of this accessory problem, which is \tfrac12\,\delta x(t_0)^T S(t_0)\,\delta x(t_0), is zero, or S(t_0) > 0, then the stationary solution must be a local minimum also. Consequently, the sufficient condition is simply the existence of a positive definite solution S(t) of the Riccati equation (also known as the Jacobi condition or the conjugate-point condition; see the optics example).
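As a concrete illustration of this sufficiency test (not part of the original notes), the accessory Riccati equation can be integrated backward from S(t_f) = \phi_{xx} and S(t) checked for boundedness and positive semidefiniteness over [t_0, t_f]. The sketch below assumes constant, made-up matrices A = f_x, B = f_u, Q = H_{xx}, R = H_{uu} and drops the cross term H_{xu}.

    # Minimal numerical check of the Jacobi / conjugate-point condition:
    # integrate the accessory Riccati equation backward from S(tf) = phi_xx
    # and verify that S(t) stays bounded and positive semidefinite on [t0, tf].
    import numpy as np
    from scipy.integrate import solve_ivp

    n, m = 2, 1
    A = np.array([[0.0, 1.0], [-1.0, -0.5]])   # f_x along the nominal trajectory (assumed)
    B = np.array([[0.0], [1.0]])               # f_u along the nominal trajectory (assumed)
    Q = np.eye(n)                              # H_xx (assumed; cross term H_xu dropped)
    R = np.eye(m)                              # H_uu, must be positive definite
    phi_xx = np.eye(n)                         # terminal Hessian (assumed)
    t0, tf = 0.0, 5.0

    def riccati_rhs(t, s_flat):
        S = s_flat.reshape(n, n)
        # -dS/dt = S A + A^T S - S B R^{-1} B^T S + Q
        dS = -(S @ A + A.T @ S - S @ B @ np.linalg.solve(R, B.T) @ S + Q)
        return dS.ravel()

    # Integrate backward in time from tf to t0.
    sol = solve_ivp(riccati_rhs, (tf, t0), phi_xx.ravel(), dense_output=True, max_step=0.01)

    ok = True
    for t in np.linspace(t0, tf, 101):
        S = sol.sol(t).reshape(n, n)
        eigs = np.linalg.eigvalsh(0.5 * (S + S.T))
        if not np.all(np.isfinite(S)) or eigs.min() < -1e-8:
            ok = False
    print("no conjugate point on [t0, tf]:", ok)

If a conjugate point existed inside [t_0, t_f], the backward solution S(t) would blow up before reaching t_0 and the check would fail.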

Practical Rationale for LQ Problems

- Aerospace guidance and control
- Chemical process control
- All kinds of automotive applications: cruise control, engine control, temperature control
- Economic growth and resource models for nations, industries, and firms
- Communication networks, computer systems
- Manufacturing plants
- Stock market
- Seismic data processing
- Image analysis
- Weather prediction

Infinite-time regulator problem: the case t_f \to \infty in the LQ problem with A, B, Q, R constant.

Intuition: dS/dt = 0, since in the Riccati equation

    -\dot S = S A + A^T S - S B R^{-1} B^T S + Q

the Q term (which pushes S down in forward time) and the S B R^{-1} B^T S term (which pushes S up) fight to a standstill. If S(-\infty) = constant, then u = -R^{-1} B^T S x = K x, so \dot x = (A + BK) x is a constant-coefficient linear system. Question: is it stable? In other words, does optimality imply stability?

Answer:

Stability: the optimal return function V(x) = x^T S x is a Lyapunov function. We have

    dV/dt = \dot x^T S x + x^T \dot S x + x^T S \dot x
          = x^T [ (A + BK)^T S + (-S A - A^T S - Q + S B R^{-1} B^T S) + S (A + BK) ] x
          = x^T [ -Q - S B R^{-1} B^T S ] x < 0   ==>  stability.

Convergence: if the system is controllable, then we know that a finite-time control that drives the state to zero, followed by u(t) = 0, keeps the system at zero and gives a finite J. The optimal control must achieve a smaller value of J ==> convergence, since if x(t) did not go to zero the nonnegative integrand would make J grow without bound.
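A small numerical sketch of these claims (the A, B, Q, R values are invented, not from the notes): solve the algebraic Riccati equation for the steady-state S, form K = -R^{-1} B^T S, and verify both closed-loop stability and the identity (A+BK)^T S + S(A+BK) = -Q - S B R^{-1} B^T S used in the Lyapunov argument.

    # Infinite-horizon LQR sanity check: steady-state Riccati solution,
    # closed-loop stability, and the Lyapunov identity used above.
    import numpy as np
    from scipy.linalg import solve_continuous_are

    A = np.array([[0.0, 1.0], [2.0, -1.0]])   # open-loop unstable example (assumed)
    B = np.array([[0.0], [1.0]])
    Q = np.eye(2)
    R = np.array([[1.0]])

    S = solve_continuous_are(A, B, Q, R)       # steady-state S (algebraic Riccati equation)
    K = -np.linalg.solve(R, B.T @ S)           # u = K x with K = -R^{-1} B^T S
    Acl = A + B @ K

    print("eigenvalues of S (should be > 0):", np.linalg.eigvalsh(S))
    print("closed-loop eigenvalues (should have Re < 0):", np.linalg.eigvals(Acl))

    # dV/dt along trajectories: (A+BK)^T S + S (A+BK) = -Q - S B R^{-1} B^T S
    lhs = Acl.T @ S + S @ Acl
    rhs = -Q - S @ B @ np.linalg.solve(R, B.T @ S)
    print("Lyapunov identity holds:", np.allclose(lhs, rhs))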

Inhomogeneous LQ problem:

    J = \tfrac12 (x(t_f) - x_f)^T S_f (x(t_f) - x_f)
        + \tfrac12 \int_{t_0}^{t_f} \begin{bmatrix} x - \bar x \\ u - \bar u \end{bmatrix}^T \begin{bmatrix} Q & N \\ N^T & R \end{bmatrix} \begin{bmatrix} x - \bar x \\ u - \bar u \end{bmatrix} dt

    \dot x = A x + B u + f(t) ; \quad x(t_0) = x_0,

where \bar x, \bar u, f are given functions of time and x_f is a given constant. The optimal V(x,t) is (1/2) x^T S x + \alpha^T x + \beta, where S still obeys the same Riccati equation and \alpha and \beta obey linear ODEs that depend on S(t). It is also possible to add linear terms in x and u to the terminal and in-flight portions of the criterion without changing the general nature of the solution.

Two special cases:

(i) Terminal constraints:

    J = \tfrac12 \int_{t_0}^{t_f} u^T R u\,dt ; \quad \dot x = A(t) x + B(t) u ; \quad x(t_0) = x_0 , \quad x(t_f) = 0.

Consider the adjoined criterion

    \bar J = \nu^T x(t_f) + \tfrac12 \int_{t_0}^{t_f} u^T R u\,dt

and the HJB PDE; assuming V(x,t) = (1/2) x^T S x + \alpha^T x + \beta, we get

    -\tfrac12 x^T \dot S x - \dot\alpha^T x - \dot\beta
        = \tfrac12 x^T [ S A + A^T S - S B R^{-1} B^T S ] x + \alpha^T A x - \tfrac12 \alpha^T B R^{-1} B^T \alpha - \alpha^T B R^{-1} B^T S x.

Collecting and equating the quadratic, linear, and constant terms in x, we get

    -\dot S = S A + A^T S - S B R^{-1} B^T S ; \quad S(t_f) = 0
    -\dot\alpha = ( A^T - S B R^{-1} B^T )\,\alpha ; \quad \alpha(t_f) = \nu
    -\dot\beta = -\tfrac12 \alpha^T B R^{-1} B^T \alpha ; \quad \beta(t_f) = 0,

which implies S(t) = 0 for all t, d\alpha/dt = -A^T \alpha, and

    \alpha(t) = \Phi^T(t_f, t)\,\nu
    \beta(t) = -\tfrac12\,\nu^T \left\{ \int_t^{t_f} \Phi(t_f,\tau)\,B R^{-1} B^T\,\Phi^T(t_f,\tau)\,d\tau \right\} \nu.

We can solve for \nu, assuming controllability, via

    x(t_f) = 0 = \Phi(t_f, t_0)\,x_0 - \left\{ \int_{t_0}^{t_f} \Phi(t_f,\tau)\,B R^{-1} B^T\,\Phi^T(t_f,\tau)\,d\tau \right\} \nu.

NOTE: This solution should be taken with a grain of salt. Since \nu depends on x_0 and \alpha(t) depends on \nu, we don't really have a feedback solution in the strict sense.
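For a concrete run of this terminal-constraint case (a sketch with arbitrary placeholder A, B, R, x_0, and horizon; for constant A the transition matrix is \Phi(t_f,t) = e^{A(t_f - t)}): compute \nu from the weighted controllability Gramian and check that the resulting open-loop control drives x(t_f) to zero.

    # Terminal-constraint special case for a constant-coefficient system.
    import numpy as np
    from scipy.linalg import expm
    from scipy.integrate import solve_ivp, quad_vec

    A = np.array([[0.0, 1.0], [-2.0, -0.3]])   # assumed plant
    B = np.array([[0.0], [1.0]])
    R = np.array([[1.0]])
    Rinv = np.linalg.inv(R)
    x0 = np.array([1.0, -1.0])
    t0, tf = 0.0, 2.0

    Phi = lambda t: expm(A * (tf - t))          # Phi(tf, t)

    # W = int_{t0}^{tf} Phi(tf,tau) B R^{-1} B^T Phi(tf,tau)^T dtau
    W, _ = quad_vec(lambda tau: Phi(tau) @ B @ Rinv @ B.T @ Phi(tau).T, t0, tf)
    nu = np.linalg.solve(W, Phi(t0) @ x0)       # from x(tf) = 0 = Phi(tf,t0) x0 - W nu

    u = lambda t: -Rinv @ B.T @ Phi(t).T @ nu   # u = -R^{-1} B^T alpha(t)
    sol = solve_ivp(lambda t, x: A @ x + (B @ u(t)).ravel(), (t0, tf), x0,
                    rtol=1e-9, atol=1e-12)
    print("x(tf) =", sol.y[:, -1])              # should be ~ [0, 0] up to numerical error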

(ii) The least-squares fit problem (a take-home quiz problem in 1990):

Let the scalar time function z(t) = sin(t) for t \in [0, \pi], and let x(t) = a + b t. Determine a and b such that

    J = \tfrac12 \int_0^\pi (z - x)^2\,dt

is minimized.

(i) (20%) Convert this problem to a nonstandard inhomogeneous linear-quadratic problem in which the optimizing variables are the constants a and b. (I have lectured in class extensively on how this portion of the problem can be solved and have warned everybody that this part of my lecture is important and subject to quiz.)
(ii) (35%) Solve this problem from first principles (i.e., derive the appropriate conditions for an optimum; don't just assert them) using Lagrangian methods.
(iii) (35%) Solve the same problem again from first principles using dynamic programming and show that you can get the same answer.
(iv) (10%) Argue from intuitive grounds that the optimal b must be equal to zero.
(v) Explain in what sense this problem captures the essential elements of the course.

Solution: Define x(t) = x_1(t), with

    dx_1/dt = x_2 , \quad dx_2/dt = 0.        (1)

Then we can let

    x_1(0) = a , \quad x_2(0) = b.        (2)

(i) The inhomogeneous LQ problem is to choose a and b to minimize J subject to (1) and (2).

(ii) Adjoin (1) to J via multiplier functions \lambda_1(t) and \lambda_2(t); we get

    \bar J = \tfrac12 \int_0^\pi (z - x_1)^2\,dt + \int_0^\pi [\,\lambda_1 (x_2 - \dot x_1) + \lambda_2 (-\dot x_2)\,]\,dt.

Integrating by parts,

    \bar J = [\,-\lambda_1 x_1 - \lambda_2 x_2\,]_0^\pi + \int_0^\pi [\,\tfrac12 (z - x_1)^2 + \lambda_1 x_2 + \dot\lambda_1 x_1 + \dot\lambda_2 x_2\,]\,dt.

Taking variations of \bar J, we get

    \delta\bar J = [\,-\lambda_1\,\delta x_1 - \lambda_2\,\delta x_2\,]_0^\pi + \int_0^\pi [\,-(z - x_1)\,\delta x_1 + (\lambda_1 + \dot\lambda_2)\,\delta x_2 + \dot\lambda_1\,\delta x_1\,]\,dt.

Let us choose for convenience

    \dot\lambda_1 = z - x_1 ; \quad \lambda_1(0) = 0        (3)
    \dot\lambda_2 = -\lambda_1 ; \quad \lambda_2(0) = 0        (4)

so that the integral terms vanish and, since \delta x_1(\pi) = \delta a + \pi\,\delta b and \delta x_2(\pi) = \delta b,

    \delta\bar J = -\lambda_1(\pi)\,\delta x_1(\pi) - \lambda_2(\pi)\,\delta x_2(\pi) = -\lambda_1(\pi)\,\delta a - [\,\pi\lambda_1(\pi) + \lambda_2(\pi)\,]\,\delta b.        (5)

For optimum a and b, we must have in addition

    \lambda_1(\pi) = \lambda_2(\pi) = 0.        (6)

Thus the necessary conditions are (3), (4), (6), and x_1(t) = a + b t, i.e., (1). Integrating (3) and (4) along x_1(t) = a + b t, we get

    \lambda_1(\pi) = \int_0^\pi z\,dt - a\pi - \tfrac12 b\pi^2
    \lambda_2(\pi) = -\int_0^\pi\!\int_0^t z(\tau)\,d\tau\,dt + \tfrac12 a\pi^2 + \tfrac16 b\pi^3.

Imposing (6), the optimum a and b are now obtained by solving

    \begin{bmatrix} \pi & \tfrac12\pi^2 \\ \tfrac12\pi^2 & \tfrac16\pi^3 \end{bmatrix} \begin{bmatrix} a \\ b \end{bmatrix} = \begin{bmatrix} \int_0^\pi z\,dt \\ \int_0^\pi\!\int_0^t z(\tau)\,d\tau\,dt \end{bmatrix}.        (7)

For z(t) = sin(t), we obtain from (7) that a = 2/\pi and b = 0.
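A quick symbolic check of the Lagrangian route (not in the original notes; it assumes sympy is available): integrate (3) and (4) along x_1(t) = a + b t, impose (6), and solve for a and b.

    # Symbolic check of the Lagrangian necessary conditions (3)-(7).
    import sympy as sp

    t, s, a, b = sp.symbols('t s a b', real=True)
    z = sp.sin(s)

    lam1 = sp.integrate(z - (a + b*s), (s, 0, t))       # lambda_1(t), lambda_1(0) = 0
    lam2 = -sp.integrate(lam1.subs(t, s), (s, 0, t))    # lambda_2(t), lambda_2(0) = 0

    sol = sp.solve([sp.Eq(lam1.subs(t, sp.pi), 0),
                    sp.Eq(lam2.subs(t, sp.pi), 0)], [a, b])
    print(sol)    # expected: {a: 2/pi, b: 0}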

(iii) For the dynamic programming solution, define V(x_1, x_2, t) as the value of J when starting at time t with x_1(t) and x_2(t); then the usual DP argument yields

    V(x_1, x_2, t) = V(x_1 + \Delta x_1, x_2 + \Delta x_2, t + \Delta t) + \tfrac12 (z - x_1)^2 \Delta t
    ==>  -V_t = V_{x_1}\,x_2 + V_{x_2}\cdot 0 + \tfrac12 (z(t) - x_1)^2.        (8)

Try a solution of the PDE (8) of the form V(x_1, x_2, t) = \tfrac12 S_{11}(t) x_1^2 + S_{12}(t) x_1 x_2 + \tfrac12 S_{22}(t) x_2^2 + \alpha_1(t) x_1 + \alpha_2(t) x_2 + \beta(t). Substituting into (8) we get

    -[\,\tfrac12 \dot S_{11} x_1^2 + \dot S_{12} x_1 x_2 + \tfrac12 \dot S_{22} x_2^2 + \dot\alpha_1 x_1 + \dot\alpha_2 x_2 + \dot\beta\,]
        = [\,S_{11} x_1 + S_{12} x_2 + \alpha_1\,]\,x_2 + \tfrac12 (z - x_1)^2.

Collecting terms in x_1^2, x_1 x_2, x_2^2, x_1, x_2 and equating their coefficients, we get

    \dot S_{11} = -1 ; \quad \dot S_{12} = -S_{11} ; \quad \dot S_{22} = -2 S_{12} ; \quad S_{11}(\pi) = S_{12}(\pi) = S_{22}(\pi) = 0
    -\dot\alpha_1 = -z ; \quad -\dot\alpha_2 = \alpha_1 ; \quad \alpha_1(\pi) = \alpha_2(\pi) = 0
    -\dot\beta = \tfrac12 z^2 ; \quad \beta(\pi) = 0.        (9)

Integrating (9),

    S_{11}(t) = \pi - t , \quad S_{12}(t) = \tfrac12\pi^2 + \tfrac12 t^2 - \pi t , \quad S_{22}(t) = \tfrac13\pi^3 - \pi^2 t + \pi t^2 - \tfrac13 t^3 ,
    \alpha_1(t) = -1 - \cos(t) , \quad \alpha_2(t) = -\pi + t + \sin(t).

Now for optimum a and b, we differentiate V(x_1(0), x_2(0), 0) with respect to a and b. Setting \partial V/\partial a = 0 and \partial V/\partial b = 0 gives

    \begin{bmatrix} S_{11}(0) & S_{12}(0) \\ S_{12}(0) & S_{22}(0) \end{bmatrix} \begin{bmatrix} a \\ b \end{bmatrix} = \begin{bmatrix} -\alpha_1(0) \\ -\alpha_2(0) \end{bmatrix}.        (10)

Solving (10), we obtain once again a = 2/\pi and b = 0.

(iv) Symmetry of z(t) = sin(t) about t = \pi/2 dictates that the optimal b must be zero.
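Finally, both derivations can be cross-checked by a plain discretized least-squares fit, independent of (7) and (10); the grid size below is an arbitrary choice.

    # Independent numerical check: fit x(t) = a + b t to z(t) = sin(t) on [0, pi]
    # by discrete least squares on a fine grid and compare with a = 2/pi, b = 0.
    import numpy as np

    tgrid = np.linspace(0.0, np.pi, 20001)
    z = np.sin(tgrid)
    Phi = np.column_stack([np.ones_like(tgrid), tgrid])    # regressors: 1 and t
    (a_hat, b_hat), *_ = np.linalg.lstsq(Phi, z, rcond=None)

    print("a_hat =", a_hat, " (2/pi =", 2.0 / np.pi, ")")
    print("b_hat =", b_hat, " (expected ~ 0)")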
