LINEAR QUADRATIC REGULATOR PROBLEM WITH POSITIVE CONTROLS

W.P.M.H. Heemels, S.J.L. van Eijndhoven†, A.A. Stoorvogel†
Technical University of Eindhoven
Dept. of Electrical Engineering, † Dept. of Mathematics and Computing Science
P.O. Box 513, 5600 MB Eindhoven
Fax: , w.p.m.h.heemels@ele.tue.nl

Keywords: Optimal control, linear systems, stability.

Abstract

In this paper, the Linear Quadratic Regulator Problem with a positivity constraint on the admissible control set is addressed. Necessary and sufficient conditions for optimality are presented in terms of inner products, projections on closed convex sets, Pontryagin's maximum principle and dynamic programming. Sufficient, and sometimes necessary, conditions for the existence of positive stabilizing controls are incorporated. Convergence properties between the finite and infinite horizon case are presented. Besides these analytical methods, we briefly describe a method for the approximation of the optimal controls for the finite and infinite horizon problem.

1 Introduction

In the literature (e.g. [1]), the Linear Quadratic Regulator Problem has been solved by using Riccati equations. In this article, the same problem is treated with an additional positivity constraint: we require all components of the control to stay positive. In many real-life problems, the influence we have on the system can be used in one direction only. One example is the control of the temperature in a room: the heating element can put energy into the room, but it cannot extract energy. In the process industry, the flow of a fluid or gas into a reactor is regulated by closing or opening a valve, resulting in one-way streams. Other examples of positive inputs can be found in economic systems, where quantities like investments and taxes are always positive.

There are two main classical approaches in optimal control theory. The first approach is the maximum principle, initiated by Pontryagin et al. ([13]).
This original maximum principle has been used and extended by many others; see for instance [10], [14], [12] and [8], to mention a few. We will use the convergence results between the finite and infinite horizon problem to derive a maximum principle on the infinite horizon for the "positive Linear Quadratic Regulator problem."

The second approach, Dynamic Programming, was originally conceived by R. Bellman ([2]) as a fruitful numerical method to compute optimal controls in discrete-time processes. Later, it was realized that the same ideas can be used for optimal control problems in continuous time. For continuous-time problems, Dynamic Programming leads to a partial differential equation, the so-called Hamilton-Jacobi-Bellman equation, which has the value function among its solutions ([14]). In many problems, the value function does not behave smoothly, which causes analytical and numerical difficulties. In case of nonsmoothness, the value function does not satisfy the HJB equation in a classical sense, but in a more general and more complicated way, in terms of so-called "viscosity solutions" (see [5]). In this paper, it will be shown that the value function of our problem is continuously differentiable and satisfies the HJB equation in the classical sense. This simplifies Dynamic Programming in both its analytical and numerical aspects.

In the formulation of the infinite horizon problem, we have to impose boundedness requirements on the state trajectory: we allow only positive controls that result in square integrable state trajectories. This leads to the introduction of the concept of positive stabilizability. In [7] and [3], the problem of controllability of linear systems with positive controls is addressed. The results of these papers are used to derive sufficient, and sometimes also necessary, conditions for the existence of positive stabilizing controls.

2 Problem Formulation

We consider a linear system with input or control $u : [t,T] \to \mathbb{R}^m$, state $x : [t,T] \to \mathbb{R}^n$ and output $z : [t,T] \to \mathbb{R}^p$, given by

$\dot x = Ax + Bu$  (1)
$z = Cx + Du$,  (2)

where $A, B, C$ and $D$ are matrices of appropriate dimensions and $T$ is a fixed end time. For every input function $u \in L_2[t,T]^m$ (the Lebesgue space of square integrable functions) and initial condition $(t,x_0)$, i.e. $x(t) = x_0$, the solution of (1) is an absolutely continuous state trajectory, denoted by $x_{t,x_0,u}$. The corresponding output can be written as

$z_{t,x_0,u}(s) = \underbrace{Ce^{A(s-t)}x_0}_{(M_{t,T}x_0)(s)} + \underbrace{\int_t^s Ce^{A(s-\tau)}Bu(\tau)\,d\tau + Du(s)}_{(L_{t,T}u)(s)}$,  (3)

where $s \in [t,T]$, $M_{t,T}$ is a bounded linear operator from $\mathbb{R}^n$ to $L_2[t,T]^p$ and $L_{t,T}$ is a bounded linear operator from $L_2[t,T]^m$ to $L_2[t,T]^p$. Both operators are defined in (3). The closed convex cone of positive functions in $L_2[t,T]^m$ is defined by

$P[t,T] := \{ u \in L_2[t,T]^m \mid u(s) \in \Omega \text{ a.e. on } [t,T] \}$,

where $\Omega := \{ \omega \in \mathbb{R}^m \mid \omega_i \geq 0 \}$.

First, we consider the finite horizon case ($T < \infty$). The infinite horizon version ($T = \infty$) is postponed to section 6. Our objective is to determine for every initial condition $(t,x_0) \in [0,T] \times \mathbb{R}^n$ a control input $u \in P[t,T]$, an optimal control, such that

$J(t,x_0,u) := \|z_{t,x_0,u}\|^2 = \|M_{t,T}x_0 + L_{t,T}u\|^2$  (4)

is minimal. $\|\cdot\|$ denotes the standard $L_2$ norm, i.e. $\|z\|^2 = \int_t^T z^\top(s)z(s)\,ds$. In fact, this is a minimum-norm problem for a closed convex cone. The optimal value of $J$ for all considered initial conditions is described by the value function.

Definition 1 (Value function) The value function $V$ is a function from $[0,T] \times \mathbb{R}^n$ to $\mathbb{R}$ and is defined for every $(t,x_0) \in [0,T] \times \mathbb{R}^n$ by

$V(t,x_0) := \inf_{u \in P[t,T]} J(t,x_0,u).$  (5)

3 Example

To illustrate that there is no simple connection between the optimal unconstrained LQ feedback ($u = Fx$) and the optimal control under the positivity constraint, we consider a pendulum given (after linearization) by the dynamics

$\dot x_1 = x_2, \quad \dot x_2 = -x_1 - u$ with $u \geq 0$:

it is allowed only to push from one side. Our objective is to minimize $\int_0^\infty \{x_1^2(s) + u^2(s)\}\,ds$. We computed $J(0,x_0)$, $x_0 = (1,0)^\top$, for controls of the form $u = g\,\max(0, fx)$, $g \geq 0$. The results are plotted in figure 1.
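The computation behind this example can be sketched in a few lines. The following is a minimal simulation, assuming a forward-Euler discretization, a finite horizon standing in for the infinite integral, and an illustrative feedback gain f (the paper's actual unconstrained LQ gain is not recoverable from the text):

```python
# Minimal sketch of the example's cost evaluation (not the paper's code).
# Dynamics: x1' = x2, x2' = -x1 - u, with the one-sided control
# u = g * max(0, f1*x1 + f2*x2) >= 0.  The gain f is an illustrative
# placeholder, not the optimal unconstrained LQ gain.

def cost(g, f=(0.0, 1.0), x0=(1.0, 0.0), dt=1e-3, horizon=40.0):
    """Approximate J(0, x0) = int {x1(s)^2 + u(s)^2} ds by forward Euler."""
    x1, x2 = x0
    f1, f2 = f
    J = 0.0
    for _ in range(int(horizon / dt)):
        u = g * max(0.0, f1 * x1 + f2 * x2)   # positivity constraint on u
        J += (x1 * x1 + u * u) * dt           # accumulate running cost
        # forward-Euler step (the right-hand side uses the old state)
        x1, x2 = x1 + dt * x2, x2 + dt * (-x1 - u)
    return J

# Sweeping g >= 0 over a grid reproduces the shape of the experiment:
costs = {g / 10: cost(g / 10) for g in range(0, 21)}
```

Plotting `costs` against g gives a curve of the same kind as figure 1; the minimizing gain and the achievable cost depend on the true feedback f.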
We observe that the cost function is not minimal at $g = 1$: the gain $g \approx 1.3$ performs better. However, we can compute a nonnegative control achieving an even lower cost of 5.08, the value indicated by the dashed line in figure 1. We conclude that there is no simple connection between constrained and unconstrained optimal controls.

[Figure 1: Cost function for various gains g.]

4 Mathematical Preliminaries

A starting point is the fundamental theorem in Hilbert space theory concerning the minimum distance to a convex set. For a proof, see chapter 3 in [11].

Theorem 2 Let $x$ be a vector in a Hilbert space $H$ with inner product $(\cdot \mid \cdot)$ and let $K$ be a closed convex subset of $H$. Then there is exactly one $k_0 \in K$ such that $\|x - k_0\| \leq \|x - k\|$ for all $k \in K$. Furthermore, a necessary and sufficient condition for $k_0$ to be the unique minimizing vector is that $(x - k_0 \mid k - k_0) \leq 0$ for all $k \in K$.

Theorem 2 can be used to generalize orthogonal projections on closed subspaces in Hilbert spaces. Let $K$ be a closed convex set of the Hilbert space $H$. We introduce the projection $P_K$ onto $K$ as

$P_K x = k_0 \in K \iff \|x - k_0\| \leq \|x - k\| \quad \forall k \in K$  (6)

for $x \in H$. Theorem 2 justifies this definition and gives an equivalent characterization of $P_K$:

$P_K x = k_0 \in K \iff (x - k_0 \mid k - k_0) \leq 0 \quad \forall k \in K.$  (7)

We now state some simple properties of projections on closed convex sets.

Lemma 3
1. $P_K^2 = P_K$.
2. $\|P_K x - P_K y\| \leq \|x - y\|$ for all $x, y \in H$.

3. Besides $K$ being closed and convex, assume it is a cone. Then $P_K(\lambda x) = \lambda P_K x$ for all $x \in H$ and $\lambda \geq 0$.

5 Optimal Controls

5.1 Existence and Uniqueness

A standing assumption in the remainder of the paper is the injectivity of $D$. In this case, $L_{t,T}$ has a bounded left inverse, as we will see. Consider a $z$ in the range of $L_{t,T}$, i.e. $z = L_{t,T}u$ for some $u \in L_2[t,T]^m$. In that case $z$ satisfies

$\dot x(s) = Ax(s) + Bu(s), \quad x(t) = 0$
$z(s) = Cx(s) + Du(s)$

for $s \in [t,T]$, and we can solve for $u$ by putting $u(s) = (D^\top D)^{-1}D^\top\{z(s) - Cx(s)\}$. Substituting this in the differential equation gives

$\dot x(s) = (A - B(D^\top D)^{-1}D^\top C)\,x(s) + B(D^\top D)^{-1}D^\top z(s), \quad x(t) = 0$
$u(s) = (D^\top D)^{-1}D^\top\{z(s) - Cx(s)\}.$

This is a state space representation of a bounded left inverse of $L_{t,T}$. We denote this particular left inverse by $\tilde L_{t,T}$.

Theorem 4 Let $t, T$ be finite times with $t < T$ and $x_0 \in \mathbb{R}^n$. There is a unique control $u_{t,T,x_0} \in P[t,T]$ such that $\|M_{t,T}x_0 + L_{t,T}u_{t,T,x_0}\| \leq \|M_{t,T}x_0 + L_{t,T}u\|$ for all $u \in P[t,T]$. A necessary and sufficient condition for $u^\star \in P[t,T]$ to be the unique minimizing control is that

$(M_{t,T}x_0 + L_{t,T}u^\star \mid L_{t,T}u - L_{t,T}u^\star) \geq 0$  (8)

for all $u \in P[t,T]$.

Proof. If we realize that our problem consists of minimizing $\|v - (-M_{t,T}x_0)\|$ over $v \in L_{t,T}(P[t,T])$, the result is an easy translation of Theorem 2, where we use $K = L_{t,T}(P[t,T])$ and $k_0 = L_{t,T}(u^\star)$.

The optimal control, the optimal trajectory and the optimal output with initial condition $(t,x_0)$ and final time $T$ will be denoted by $u_{t,T,x_0}$, $x_{t,T,x_0}$ and $z_{t,T,x_0}$, respectively. Considering the proof above, it is not hard to see that in terms of projections we can write

$u_{t,T,x_0} = \tilde L_{t,T}\, P(-M_{t,T}x_0)$,  (9)

where $P$ is the projection on the closed convex cone $L_{t,T}(P[t,T])$. From the last expression and Lemma 3 it is clear that for $\lambda \geq 0$

$u_{t,T,\lambda x_0} = \lambda\, u_{t,T,x_0}$  (10)

and hence

$V(t, \lambda x_0) = \lambda^2\, V(t, x_0).$  (11)

5.2 Maximum Principle

The results in this subsection are classical (see e.g. [13]).
They are stated here because in section 6 this theory is used to derive convergence results between finite and infinite horizon optimal controls. Consider the Hamiltonian

$H(x, \omega, \lambda) = x^\top A^\top\lambda + \omega^\top B^\top\lambda + x^\top C^\top Cx + 2x^\top C^\top D\omega + \omega^\top D^\top D\omega$  (12)

for $(x, \omega, \lambda) \in \mathbb{R}^n \times \mathbb{R}^m \times \mathbb{R}^n$. The adjoint or costate equation reads

$\dot\lambda = -A^\top\lambda - 2C^\top Cx - 2C^\top Du = -A^\top\lambda - 2C^\top z$,  (13)

with terminal condition

$\lambda(T) = 0$  (14)

and $z = Cx + Du$. The optimal control $u_{0,T,x_0}$, for shortness denoted by $u_{\mathrm{opt}}$, satisfies for all $s \in [0,T]$

$u_{\mathrm{opt}}(s) \in \arg\min_{\omega\in\Omega} H(x_{\mathrm{opt}}(s), \omega, \lambda_{\mathrm{opt}}(s))$,  (15)

where $x_{\mathrm{opt}}$ is the optimal trajectory and $\lambda_{\mathrm{opt}}$ the solution to the adjoint equation (13) with $z$ equal to the optimal output $z_{\mathrm{opt}} = z_{0,T,x_0}$. Recall $\Omega = \{\omega \in \mathbb{R}^m \mid \omega_i \geq 0\}$. We introduce $\|\cdot\|_{D^\top D}$ as the norm induced by the inner product $(x \mid y)_{D^\top D} = x^\top D^\top D\,y$ for $x, y \in \mathbb{R}^m$ in the Hilbert space $\mathbb{R}^m$. Furthermore, in this Hilbert space, $P_\Omega$ denotes the projection on $\Omega$, which is continuous according to Lemma 3.

Lemma 5 Let the final time $T > 0$ and initial state $x_0 \in \mathbb{R}^n$ be fixed. The optimal control $u_{\mathrm{opt}}$ satisfies

$u_{\mathrm{opt}}(s) = P_\Omega\big(-\tfrac12 (D^\top D)^{-1}\{B^\top\lambda(s) + 2D^\top Cx(s)\}\big)$  (16)

for all $s \in [0,T]$, where the functions $x$ and $\lambda$ are given by

$\dot x = Ax + Bu_{\mathrm{opt}}, \quad x(0) = x_0$
$\dot\lambda = -A^\top\lambda - 2C^\top Cx - 2C^\top Du_{\mathrm{opt}}, \quad \lambda(T) = 0.$  (17)

Hence, the optimal control is continuous in time.

Proof. This lemma is a straightforward application of the theory above. By (15), $u_{\mathrm{opt}}(s)$ is the unique pointwise minimizer of

$\omega^\top D^\top D\,\omega + \omega^\top g(s) = \|\omega + \tfrac12(D^\top D)^{-1}g(s)\|^2_{D^\top D} - \tfrac14\, g^\top(s)(D^\top D)^{-1}g(s)$,  (18)

taken over all $\omega \in \Omega$, where $g(s) := B^\top\lambda_{\mathrm{opt}}(s) + 2D^\top Cx_{\mathrm{opt}}(s)$.

Substituting (16) in (17) gives a two-point boundary value problem. Its solutions provide us with a set of candidates containing the optimal control, because the maximum principle is a necessary condition for optimality. In case $D^\top D = I$, the expression $P_\Omega(-\tfrac12 (D^\top D)^{-1}\{B^\top\lambda(s) + 2D^\top Cx(s)\})$ simplifies to

$\max\{0, -\tfrac12 B^\top\lambda(s) - D^\top Cx(s)\}$,

where "max" means taking the maximum componentwise.

5.3 Dynamic Programming

In the introduction, we discussed the problems with nonsmooth value functions and announced that in the positive LQ problem the value function is continuously differentiable and satisfies the Hamilton-Jacobi-Bellman equation in the classical sense. The proof of this fact, being rather technical, is not included, but can be found in [6]. We start with a short overview of the technique of dynamic programming. To do so, we introduce the function $L$ for $(x,\omega) \in \mathbb{R}^n \times \mathbb{R}^m$ by

$L(x,\omega) = z^\top z = x^\top C^\top Cx + 2x^\top C^\top D\omega + \omega^\top D^\top D\omega.$  (19)

The HJB equation is given by

$V_t(t,x) + \tilde H(x, V_x(t,x)) = 0$,  (20)

where $\tilde H$ is given by

$\tilde H(x,p) = \inf_{\omega\in\Omega}\{p^\top Ax + p^\top B\omega + L(x,\omega)\}$  (21)

for $(x,p) \in \mathbb{R}^n \times \mathbb{R}^n$. $V_t$ denotes the time derivative of the value function and $V_x$ the gradient of $V$. By manipulating (21), we get

$\tilde H(x,p) = x^\top C^\top Cx + p^\top Ax - \tfrac14\, g^\top(x,p)(D^\top D)^{-1}g(x,p) + \inf_{\omega\in\Omega}\|\omega + \tfrac12(D^\top D)^{-1}g(x,p)\|^2_{D^\top D}$  (22)

with $g(x,p) = B^\top p + 2D^\top Cx$ and $\|\cdot\|_{D^\top D}$ as in the previous subsection. The minimizing $\omega$ equals $P_\Omega(-\tfrac12(D^\top D)^{-1}g(x,p))$, where $P_\Omega$ denotes the projection on $\Omega$ in the Hilbert space $\mathbb{R}^m$ with inner product $(\cdot\mid\cdot)_{D^\top D}$. If $V$ is continuously differentiable and the optimal controls are continuous, the value function satisfies the HJB equation (see [5]). The Maximum Principle shows that the optimal controls in our problem are continuous.

Theorem 6 (Differentiability of V w.r.t. x) The value function $V$ is continuously differentiable with respect to $x$ and its directional derivative in the point $(t,x_0) \in [0,T) \times \mathbb{R}^n$ with increment $h \in \mathbb{R}^n$ is given by

$(V_x(t,x_0) \mid h) = 2\,(z_{t,T,x_0} \mid M_{t,T}h)$  (23)

or, explicitly,

$V_x(t,x_0) = 2\int_t^T e^{A^\top(s-t)}C^\top z_{t,T,x_0}(s)\,ds.$  (24)

$z_{t,T,x_0}$ is the output corresponding to initial condition $(t,x_0)$ with optimal control $u_{t,T,x_0}$. Notice that in (23) the left inner product is the Euclidean inner product in $\mathbb{R}^n$ and the right inner product is the $L_2$ inner product.

Theorem 7 (Differentiability of V w.r.t. t) The value function $V$ is differentiable with respect to $t$ and the partial derivative in the point $(t,x_0) \in [0,T) \times \mathbb{R}^n$ equals

$V_t(t,x_0) = -(V_x(t,x_0) \mid Ax_0 + Bu_{t,T,x_0}(t)) - z^\top_{t,T,x_0}(t)\,z_{t,T,x_0}(t).$  (25)

In [6] it is also shown that both partial derivatives are continuous in both arguments. We can conclude now that $V$ is a classical solution to the HJB equation with the boundary condition

$V(T,x) = 0$ for all $x \in \mathbb{R}^n$.

The so-called verification theorem in [5] now gives that the optimal control $u_{t,T,x_0}$ is the pointwise minimizer of (21) along the optimal trajectory, i.e.

$u_{t,T,x_0}(s) = P_\Omega\big(-\tfrac12(D^\top D)^{-1}g(x_{t,T,x_0}(s),\, V_x(s, x_{t,T,x_0}(s)))\big).$  (26)

This defines a time-varying optimal feedback. In fact, by using (26), we see that (25) is the HJB equation. Comparing (24) with the adjoint equation of the Maximum Principle yields a connection between the adjoint variable and the gradient of $V$:

$\lambda_{t,T,x_0}(s) = V_x(s, x_{t,T,x_0}(s))$,  (27)

where $\lambda_{t,T,x_0}$ is the solution to the adjoint equation corresponding to $u_{t,T,x_0}$. Moreover, we can even conclude that both dynamic programming and the maximum principle are necessary and sufficient conditions for optimality. This follows from the link (27) between the two methods. Necessity of the maximum principle in terms of Lemma 5 translates via (27) into necessity of dynamic programming in terms of (26). Vice versa, the sufficiency of dynamic programming leads to sufficiency of the maximum principle along the same lines.
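When $D^\top D = I$, the projection appearing in the optimal-control formulas above is just a componentwise clipping. The following minimal sketch, with placeholder numbers not taken from the paper, checks that the clipped vector really minimizes $\omega^\top D^\top D\,\omega + \omega^\top g$ over the positive cone:

```python
# Sketch: with D'D = I, the pointwise minimizer over the cone
# {w : w_i >= 0} of  w'w + w'g  is the componentwise clip max(0, -g/2).
# The vector g below is an illustrative placeholder.

def clip_control(g):
    """Componentwise projection formula: argmin over w >= 0 of w'w + w'g."""
    return [max(0.0, -gi / 2.0) for gi in g]

def objective(w, g):
    """The quadratic being minimized pointwise in the Hamiltonian."""
    return sum(wi * wi for wi in w) + sum(wi * gi for wi, gi in zip(w, g))

g = [-3.0, 1.0]
w_star = clip_control(g)   # -> [1.5, 0.0]
```

A brute-force grid search over the cone confirms that `w_star` attains the minimum, which is the content of the completion-of-squares step in the proof of Lemma 5.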

6 Infinite Horizon

6.1 Problem Formulation

In this section we consider the infinite horizon case ($T = \infty$). The time invariance of the problem guarantees that we can take $t = 0$ without loss of generality. The problem is to minimize

$J(x_0,u) := \int_0^\infty z^\top_{x_0,u}(s)\,z_{x_0,u}(s)\,ds$  (28)

over $u \in P[0,\infty)$ subject to the relations (1), (2), $x(0) = x_0$ and the additional constraint that the state trajectory $x$ should be contained in $L_2[0,\infty)^n$. Notice that $x \in L_2[0,\infty)^n$ implies $x(s) \to 0$ ($s \to \infty$), because $x$ is necessarily absolutely continuous.

Definition 8 A control is said to be stabilizing for initial state $x_0$ if the corresponding state trajectory $x$ satisfies $x \in L_2[0,\infty)^n$. $(A,B)$ is said to be positively stabilizable if for every $x_0$ there exists a stabilizing control $u \in P[0,\infty)$.

Notice that $u \in L_2[0,\infty)^m$ and $x \in L_2[0,\infty)^n$ imply $z \in L_2[0,\infty)^p$ and thus the finiteness of $J$. So the positive stabilizability of $(A,B)$ guarantees the existence of positive controls which keep the cost function finite. This is necessary to formulate a decent infinite horizon problem.

6.2 Positive Stabilizability

In this subsection we present sufficient conditions for the positive stabilizability of $(A,B)$. In the single-input case ($m = 1$), they turn out to be sufficient and necessary. The main results are stated without proofs; the proofs can be found in [6].

Theorem 9 Consider the system given by $(A,B,C,D)$ with control restraint set $\Omega = \{(\omega_1,\ldots,\omega_m)^\top \in \mathbb{R}^m \mid \omega_i \geq 0,\ \forall i = 1,\ldots,m\}$. If $(A,B)$ is stabilizable and all real eigenvectors $v$ of $A^\top$ corresponding to a nonnegative eigenvalue of $A^\top$ have the property that $B^\top v$ has positive components, then $(A,B)$ is positively stabilizable.

Notice that for a real eigenvector $v$ of $A^\top$ corresponding to a nonnegative eigenvalue, $B^\top v$ should also have negative components, because $-v$ is an eigenvector too. For $m = 1$ the conditions are also necessary.
Corollary Consider the system $(A,B,C,D)$ with $m = 1$ and control restraint set $\Omega = \{\omega \in \mathbb{R} \mid \omega \geq 0\} = [0,\infty)$. $(A,B)$ is positively stabilizable if and only if $(A,B)$ is stabilizable and $\sigma(A) \cap [0,\infty) = \emptyset$.

6.3 Connection between Finite and Infinite Horizon

In this subsection, we present only the results. The proofs are rather technical and therefore omitted; they can be found in [6]. In the remainder of this paper, we concentrate on systems with the following properties:

1. $(A,B)$ is positively stabilizable;
2. $D$ is injective;
3. $(A,B,C,D)$ is minimum phase.

Items 1 and 3 can be replaced by item 3 and the requirement that $(C+DF, A+BF)$ is detectable, where $F$ is given by $(C+DF)^\top D = 0$. Under these assumptions, the optimal controls on the infinite horizon exist and are unique. We define the restriction operator $\pi_T : L_2[0,\infty)^m \to L_2[0,T]^m$ by $(\pi_T u)(s) = u(s)$ for $s \in [0,T]$, and the extension operator $\varepsilon_T : L_2[0,T]^m \to L_2[0,\infty)^m$ for all $u \in L_2[0,T]^m$ by

$(\varepsilon_T u)(s) = \begin{cases} u(s) & s \leq T \\ 0 & s > T. \end{cases}$

In the sequel, we denote by $u_T$, $x_T$ and $z_T$ the optimal control, the optimal state trajectory and the optimal output on $[0,T]$ with fixed initial condition $(0,x_0)$, for $T > 0$ or $T = \infty$.

Theorem 10 For $T \to \infty$, we have the convergences $\varepsilon_T u_T \to u_\infty$, $\varepsilon_T z_T \to z_\infty$ and $\varepsilon_T x_T \to x_\infty$ in the $L_2$ norm. Also the optimal costs of the finite horizon converge to the optimal costs of the infinite horizon as the horizon tends to infinity.

Pontryagin's Maximum Principle for the finite horizon converges to a maximum principle for the infinite horizon, which implies the continuity of the optimal control on the infinite horizon.

Theorem 11 Fix initial state $x_0$. The corresponding optimal control on the infinite horizon, denoted by $u$, satisfies

$u(s) = P_\Omega\big((D^\top D)^{-1}\{-\tfrac12 B^\top\lambda(s) - D^\top Cx(s)\}\big)$,  (29)

where the continuous function $\lambda \in L_2[0,\infty)^n$ is determined by its initial value $\lambda(0)$ and the differential equation

$\dot\lambda = -A^\top\lambda - 2C^\top z.$  (30)

$\lambda(0)$ is the limit of a converging sequence $\{\lambda_{T_i}(0)\}_{i\in\mathbb{N}}$, where $\{T_i\}_{i\in\mathbb{N}}$ is a sequence of positive numbers tending to infinity.

By using the maximum principle on the finite and infinite horizon, we can extend the convergence results of Theorem 10 to pointwise convergence.
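The single-input condition of the corollary above is straightforward to test numerically. The following is a minimal 2x2 sketch with illustrative matrices; for simplicity it checks controllability, which implies the stabilizability required by the corollary:

```python
# Sketch of the single-input (m = 1) test: (A, B) is positively
# stabilizable iff (A, B) is stabilizable and A has no real eigenvalue
# in [0, inf).  Controllability is used here as a sufficient stand-in
# for stabilizability; A and B are illustrative placeholders.
import cmath

def eigs_2x2(A):
    """Eigenvalues of a 2x2 matrix via the characteristic polynomial."""
    (a, b), (c, d) = A
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4.0 * det)
    return [(tr + disc) / 2.0, (tr - disc) / 2.0]

def positively_stabilizable_2x2(A, B, tol=1e-9):
    b1, b2 = B
    (a, b), (c, d) = A
    Ab = (a * b1 + b * b2, c * b1 + d * b2)            # A @ B
    controllable = abs(b1 * Ab[1] - b2 * Ab[0]) > tol  # det [B, AB] != 0
    no_real_nonneg = all(
        not (abs(lam.imag) < tol and lam.real >= -tol) for lam in eigs_2x2(A)
    )
    return controllable and no_real_nonneg
```

For the undamped oscillator of the example in section 3 (eigenvalues ±i), the test succeeds: a one-sided push suffices. A matrix with a real nonnegative eigenvalue fails the test regardless of B.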

Theorem 12 For all $s \in [0,\infty)$,

$u_T(s) \to u_\infty(s) \quad (T \to \infty).$

6.4 Approximation

It is obvious that for all $s \geq 0$, $u_{t,\infty,x_0}(s+t) = u_{0,\infty,x_0}(s)$. From this it follows that there exists a time-invariant optimal feedback $u_{\mathrm{fdb}}$ defined by

$u_{\mathrm{fdb}}(x_0) := u_{t,\infty,x_0}(t) = u_{0,\infty,x_0}(0) = \lim_{T\to\infty} u_{0,T,x_0}(0).$  (31)

The limit in this equation provides us with a method to approximate the optimal feedback. Discretization in time and state is a method to approximate the finite horizon optimal controller. This technique is based on discrete Dynamic Programming, as initiated by R. Bellman ([2]); such techniques can be found in e.g. [9] and [8]. If $T$ is large enough, we can use $u_{0,T,x_0}(0)$ as an approximation of $u_{\mathrm{fdb}}(x_0)$. In [6], some numerical results are presented.

7 Conclusions

In this paper, we addressed three approaches to the Linear Quadratic Regulator problem with a positivity constraint on the admissible controls. As the main contributions, we consider the necessary and sufficient conditions for optimality in terms of inner products, projections, the maximum principle and dynamic programming. The maximum principle results in a two-point boundary value problem, dynamic programming in a partial differential equation. If one of these equations is solved, its solutions lead analytically to the optimal control. The conditions for positive stabilizability are easily verified and can be used to check the well-posedness of the infinite horizon problem. Other essential results are the maximum principle for the infinite horizon case and the convergence results between the finite horizon optimal controls and the infinite horizon control, which yield a method to approximate the optimal positive feedback in the considered problem. The results hold for LQ problems with arbitrary closed convex cones as control restraint set as well.

References

[1] Anderson B.D.O., Moore J.B., "Optimal Control - Linear Quadratic Methods", Prentice Hall, (1990).
[2] Bellman R., "Introduction to the Mathematical Theory of Control Processes", Academic Press, (1967).
[3] Brammer R.F., "Controllability in Linear Autonomous Systems with Positive Controllers", SIAM J. of Control, 10-2, (1972).
[4] Feichtinger G., Hartl R.F., "Optimale Kontrolle ökonomischer Prozesse: Anwendungen des Maximumprinzips in den Wirtschaftswissenschaften", De Gruyter, (1986).
[5] Fleming W.H., Soner H.M., "Controlled Markov Processes and Viscosity Solutions", Springer-Verlag, (1993).
[6] Heemels W.P.M.H., "Optimal Positive Control & Optimal Control with Positive State Entries", Master's thesis, Technical University of Eindhoven, (1995).
[7] Heymann M., Stern R.J., "Controllability of Linear Systems with Positive Controls: Geometric Considerations", Journal of Mathematical Analysis and Applications, 52, pages 36-41, (1975).
[8] Kirk D.E., "Optimal Control Theory: An Introduction", Prentice-Hall Inc., (1970).
[9] Kushner H.J., Dupuis P.G., "Numerical Methods for Stochastic Control Problems in Continuous Time", Springer-Verlag, (1992).
[10] Lee E.B., Markus L., "Foundations of Optimal Control Theory", John Wiley & Sons, Inc., (1967).
[11] Luenberger D.G., "Optimization by Vector Space Methods", John Wiley & Sons, Inc., (1969).
[12] Macki J., Strauss A., "Introduction to Optimal Control Theory", Springer-Verlag Inc., (1982).
[13] Pontryagin L.S., Boltyanskii V.G., Gamkrelidze R.V., Mishchenko E.F., "The Mathematical Theory of Optimal Processes", Interscience Publishers, John Wiley & Sons, (1962).
[14] Fleming W.H., Rishel R.W., "Deterministic and Stochastic Optimal Control", Springer-Verlag, (1975).


More information

INRIA Rocquencourt, Le Chesnay Cedex (France) y Dept. of Mathematics, North Carolina State University, Raleigh NC USA

INRIA Rocquencourt, Le Chesnay Cedex (France) y Dept. of Mathematics, North Carolina State University, Raleigh NC USA Nonlinear Observer Design using Implicit System Descriptions D. von Wissel, R. Nikoukhah, S. L. Campbell y and F. Delebecque INRIA Rocquencourt, 78 Le Chesnay Cedex (France) y Dept. of Mathematics, North

More information

Nonlinear L 2 -gain analysis via a cascade

Nonlinear L 2 -gain analysis via a cascade 9th IEEE Conference on Decision and Control December -7, Hilton Atlanta Hotel, Atlanta, GA, USA Nonlinear L -gain analysis via a cascade Peter M Dower, Huan Zhang and Christopher M Kellett Abstract A nonlinear

More information

1 Introduction We study innite-dimensional systems that can formally be written in the familiar form x 0 (t) = Ax(t) + Bu(t); y(t) = Cx(t) + Du(t); t

1 Introduction We study innite-dimensional systems that can formally be written in the familiar form x 0 (t) = Ax(t) + Bu(t); y(t) = Cx(t) + Du(t); t Quadratic Optimal Control of Regular Well-Posed Linear Systems, with Applications to Parabolic Equations Olof J. Staans Abo Akademi University Department of Mathematics FIN-20500 Abo, Finland Olof.Staans@abo.

More information

IMPROVED MPC DESIGN BASED ON SATURATING CONTROL LAWS

IMPROVED MPC DESIGN BASED ON SATURATING CONTROL LAWS IMPROVED MPC DESIGN BASED ON SATURATING CONTROL LAWS D. Limon, J.M. Gomes da Silva Jr., T. Alamo and E.F. Camacho Dpto. de Ingenieria de Sistemas y Automática. Universidad de Sevilla Camino de los Descubrimientos

More information

On the Bellman equation for control problems with exit times and unbounded cost functionals 1

On the Bellman equation for control problems with exit times and unbounded cost functionals 1 On the Bellman equation for control problems with exit times and unbounded cost functionals 1 Michael Malisoff Department of Mathematics, Hill Center-Busch Campus Rutgers University, 11 Frelinghuysen Road

More information

Comments on integral variants of ISS 1

Comments on integral variants of ISS 1 Systems & Control Letters 34 (1998) 93 1 Comments on integral variants of ISS 1 Eduardo D. Sontag Department of Mathematics, Rutgers University, Piscataway, NJ 8854-819, USA Received 2 June 1997; received

More information

Game Theory Extra Lecture 1 (BoB)

Game Theory Extra Lecture 1 (BoB) Game Theory 2014 Extra Lecture 1 (BoB) Differential games Tools from optimal control Dynamic programming Hamilton-Jacobi-Bellman-Isaacs equation Zerosum linear quadratic games and H control Baser/Olsder,

More information

EN Applied Optimal Control Lecture 8: Dynamic Programming October 10, 2018

EN Applied Optimal Control Lecture 8: Dynamic Programming October 10, 2018 EN530.603 Applied Optimal Control Lecture 8: Dynamic Programming October 0, 08 Lecturer: Marin Kobilarov Dynamic Programming (DP) is conerned with the computation of an optimal policy, i.e. an optimal

More information

Lecture 2: Review of Prerequisites. Table of contents

Lecture 2: Review of Prerequisites. Table of contents Math 348 Fall 217 Lecture 2: Review of Prerequisites Disclaimer. As we have a textbook, this lecture note is for guidance and supplement only. It should not be relied on when preparing for exams. In this

More information

N.G.Bean, D.A.Green and P.G.Taylor. University of Adelaide. Adelaide. Abstract. process of an MMPP/M/1 queue is not a MAP unless the queue is a

N.G.Bean, D.A.Green and P.G.Taylor. University of Adelaide. Adelaide. Abstract. process of an MMPP/M/1 queue is not a MAP unless the queue is a WHEN IS A MAP POISSON N.G.Bean, D.A.Green and P.G.Taylor Department of Applied Mathematics University of Adelaide Adelaide 55 Abstract In a recent paper, Olivier and Walrand (994) claimed that the departure

More information

ESC794: Special Topics: Model Predictive Control

ESC794: Special Topics: Model Predictive Control ESC794: Special Topics: Model Predictive Control Nonlinear MPC Analysis : Part 1 Reference: Nonlinear Model Predictive Control (Ch.3), Grüne and Pannek Hanz Richter, Professor Mechanical Engineering Department

More information

tion. For example, we shall write _x = f(x x d ) instead of _x(t) = f(x(t) x d (t)) and x d () instead of x d (t)(). The notation jj is used to denote

tion. For example, we shall write _x = f(x x d ) instead of _x(t) = f(x(t) x d (t)) and x d () instead of x d (t)(). The notation jj is used to denote Extension of control Lyapunov functions to time-delay systems Mrdjan Jankovic Ford Research Laboratory P.O. Box 53, MD 36 SRL Dearborn, MI 4811 e-mail: mjankov1@ford.com Abstract The concept of control

More information

An Integral-type Constraint Qualification for Optimal Control Problems with State Constraints

An Integral-type Constraint Qualification for Optimal Control Problems with State Constraints An Integral-type Constraint Qualification for Optimal Control Problems with State Constraints S. Lopes, F. A. C. C. Fontes and M. d. R. de Pinho Officina Mathematica report, April 4, 27 Abstract Standard

More information

MODELLING OF FLEXIBLE MECHANICAL SYSTEMS THROUGH APPROXIMATED EIGENFUNCTIONS L. Menini A. Tornambe L. Zaccarian Dip. Informatica, Sistemi e Produzione

MODELLING OF FLEXIBLE MECHANICAL SYSTEMS THROUGH APPROXIMATED EIGENFUNCTIONS L. Menini A. Tornambe L. Zaccarian Dip. Informatica, Sistemi e Produzione MODELLING OF FLEXIBLE MECHANICAL SYSTEMS THROUGH APPROXIMATED EIGENFUNCTIONS L. Menini A. Tornambe L. Zaccarian Dip. Informatica, Sistemi e Produzione, Univ. di Roma Tor Vergata, via di Tor Vergata 11,

More information

Novel Approach to Analysis of Nonlinear Recursions. 1 Department of Physics, Bar-Ilan University, Ramat-Gan, ISRAEL

Novel Approach to Analysis of Nonlinear Recursions. 1 Department of Physics, Bar-Ilan University, Ramat-Gan, ISRAEL Novel Approach to Analysis of Nonlinear Recursions G.Berkolaiko 1 2, S. Rabinovich 1,S.Havlin 1 1 Department of Physics, Bar-Ilan University, 529 Ramat-Gan, ISRAEL 2 Department of Mathematics, Voronezh

More information

R. Schaback. numerical method is proposed which rst minimizes each f j separately. and then applies a penalty strategy to gradually force the

R. Schaback. numerical method is proposed which rst minimizes each f j separately. and then applies a penalty strategy to gradually force the A Multi{Parameter Method for Nonlinear Least{Squares Approximation R Schaback Abstract P For discrete nonlinear least-squares approximation problems f 2 (x)! min for m smooth functions f : IR n! IR a m

More information

OPTIMAL CONTROL. Sadegh Bolouki. Lecture slides for ECE 515. University of Illinois, Urbana-Champaign. Fall S. Bolouki (UIUC) 1 / 28

OPTIMAL CONTROL. Sadegh Bolouki. Lecture slides for ECE 515. University of Illinois, Urbana-Champaign. Fall S. Bolouki (UIUC) 1 / 28 OPTIMAL CONTROL Sadegh Bolouki Lecture slides for ECE 515 University of Illinois, Urbana-Champaign Fall 2016 S. Bolouki (UIUC) 1 / 28 (Example from Optimal Control Theory, Kirk) Objective: To get from

More information

Decentralized control with input saturation

Decentralized control with input saturation Decentralized control with input saturation Ciprian Deliu Faculty of Mathematics and Computer Science Technical University Eindhoven Eindhoven, The Netherlands November 2006 Decentralized control with

More information

Alternative theorems for nonlinear projection equations and applications to generalized complementarity problems

Alternative theorems for nonlinear projection equations and applications to generalized complementarity problems Nonlinear Analysis 46 (001) 853 868 www.elsevier.com/locate/na Alternative theorems for nonlinear projection equations and applications to generalized complementarity problems Yunbin Zhao a;, Defeng Sun

More information

Special Classes of Fuzzy Integer Programming Models with All-Dierent Constraints

Special Classes of Fuzzy Integer Programming Models with All-Dierent Constraints Transaction E: Industrial Engineering Vol. 16, No. 1, pp. 1{10 c Sharif University of Technology, June 2009 Special Classes of Fuzzy Integer Programming Models with All-Dierent Constraints Abstract. K.

More information

J. Korean Math. Soc. 37 (2000), No. 4, pp. 593{611 STABILITY AND CONSTRAINED CONTROLLABILITY OF LINEAR CONTROL SYSTEMS IN BANACH SPACES Vu Ngoc Phat,

J. Korean Math. Soc. 37 (2000), No. 4, pp. 593{611 STABILITY AND CONSTRAINED CONTROLLABILITY OF LINEAR CONTROL SYSTEMS IN BANACH SPACES Vu Ngoc Phat, J. Korean Math. Soc. 37 (2), No. 4, pp. 593{611 STABILITY AND CONSTRAINED CONTROLLABILITY OF LINEAR CONTROL SYSTEMS IN BANACH SPACES Vu Ngoc Phat, Jong Yeoul Park, and Il Hyo Jung Abstract. For linear

More information

Osinga, HM., & Hauser, J. (2005). The geometry of the solution set of nonlinear optimal control problems.

Osinga, HM., & Hauser, J. (2005). The geometry of the solution set of nonlinear optimal control problems. Osinga, HM., & Hauser, J. (2005). The geometry of the solution set of nonlinear optimal control problems. Early version, also known as pre-print Link to publication record in Explore Bristol Research PDF-document

More information

1. Introduction and preliminaries

1. Introduction and preliminaries Quasigroups and Related Systems 23 (2015), 283 295 The categories of actions of a dcpo-monoid on directed complete posets Mojgan Mahmoudi and Halimeh Moghbeli-Damaneh Abstract. In this paper, some categorical

More information

Optimal Control. Quadratic Functions. Single variable quadratic function: Multi-variable quadratic function:

Optimal Control. Quadratic Functions. Single variable quadratic function: Multi-variable quadratic function: Optimal Control Control design based on pole-placement has non unique solutions Best locations for eigenvalues are sometimes difficult to determine Linear Quadratic LQ) Optimal control minimizes a quadratic

More information

Ole Christensen 3. October 20, Abstract. We point out some connections between the existing theories for

Ole Christensen 3. October 20, Abstract. We point out some connections between the existing theories for Frames and pseudo-inverses. Ole Christensen 3 October 20, 1994 Abstract We point out some connections between the existing theories for frames and pseudo-inverses. In particular, using the pseudo-inverse

More information

1 Solutions to selected problems

1 Solutions to selected problems 1 Solutions to selected problems 1. Let A B R n. Show that int A int B but in general bd A bd B. Solution. Let x int A. Then there is ɛ > 0 such that B ɛ (x) A B. This shows x int B. If A = [0, 1] and

More information

LECTURE VI: SELF-ADJOINT AND UNITARY OPERATORS MAT FALL 2006 PRINCETON UNIVERSITY

LECTURE VI: SELF-ADJOINT AND UNITARY OPERATORS MAT FALL 2006 PRINCETON UNIVERSITY LECTURE VI: SELF-ADJOINT AND UNITARY OPERATORS MAT 204 - FALL 2006 PRINCETON UNIVERSITY ALFONSO SORRENTINO 1 Adjoint of a linear operator Note: In these notes, V will denote a n-dimensional euclidean vector

More information

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2.

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2. APPENDIX A Background Mathematics A. Linear Algebra A.. Vector algebra Let x denote the n-dimensional column vector with components 0 x x 2 B C @. A x n Definition 6 (scalar product). The scalar product

More information

An Application of Pontryagin s Maximum Principle in a Linear Quadratic Differential Game

An Application of Pontryagin s Maximum Principle in a Linear Quadratic Differential Game An Application of Pontryagin s Maximum Principle in a Linear Quadratic Differential Game Marzieh Khakestari (Corresponding author) Institute For Mathematical Research, Universiti Putra Malaysia, 43400

More information

Vector Space Basics. 1 Abstract Vector Spaces. 1. (commutativity of vector addition) u + v = v + u. 2. (associativity of vector addition)

Vector Space Basics. 1 Abstract Vector Spaces. 1. (commutativity of vector addition) u + v = v + u. 2. (associativity of vector addition) Vector Space Basics (Remark: these notes are highly formal and may be a useful reference to some students however I am also posting Ray Heitmann's notes to Canvas for students interested in a direct computational

More information

IEOR 265 Lecture 14 (Robust) Linear Tube MPC

IEOR 265 Lecture 14 (Robust) Linear Tube MPC IEOR 265 Lecture 14 (Robust) Linear Tube MPC 1 LTI System with Uncertainty Suppose we have an LTI system in discrete time with disturbance: x n+1 = Ax n + Bu n + d n, where d n W for a bounded polytope

More information

Abstract. Jacobi curves are far going generalizations of the spaces of \Jacobi

Abstract. Jacobi curves are far going generalizations of the spaces of \Jacobi Principal Invariants of Jacobi Curves Andrei Agrachev 1 and Igor Zelenko 2 1 S.I.S.S.A., Via Beirut 2-4, 34013 Trieste, Italy and Steklov Mathematical Institute, ul. Gubkina 8, 117966 Moscow, Russia; email:

More information

2. Dual space is essential for the concept of gradient which, in turn, leads to the variational analysis of Lagrange multipliers.

2. Dual space is essential for the concept of gradient which, in turn, leads to the variational analysis of Lagrange multipliers. Chapter 3 Duality in Banach Space Modern optimization theory largely centers around the interplay of a normed vector space and its corresponding dual. The notion of duality is important for the following

More information

Pontryagin s maximum principle

Pontryagin s maximum principle Pontryagin s maximum principle Emo Todorov Applied Mathematics and Computer Science & Engineering University of Washington Winter 2012 Emo Todorov (UW) AMATH/CSE 579, Winter 2012 Lecture 5 1 / 9 Pontryagin

More information

On Some Optimal Control Problems for Electric Circuits

On Some Optimal Control Problems for Electric Circuits On Some Optimal Control Problems for Electric Circuits Kristof Altmann, Simon Stingelin, and Fredi Tröltzsch October 12, 21 Abstract Some optimal control problems for linear and nonlinear ordinary differential

More information

A Finite Element Method for an Ill-Posed Problem. Martin-Luther-Universitat, Fachbereich Mathematik/Informatik,Postfach 8, D Halle, Abstract

A Finite Element Method for an Ill-Posed Problem. Martin-Luther-Universitat, Fachbereich Mathematik/Informatik,Postfach 8, D Halle, Abstract A Finite Element Method for an Ill-Posed Problem W. Lucht Martin-Luther-Universitat, Fachbereich Mathematik/Informatik,Postfach 8, D-699 Halle, Germany Abstract For an ill-posed problem which has its origin

More information

Stochastic Dynamic Programming. Jesus Fernandez-Villaverde University of Pennsylvania

Stochastic Dynamic Programming. Jesus Fernandez-Villaverde University of Pennsylvania Stochastic Dynamic Programming Jesus Fernande-Villaverde University of Pennsylvania 1 Introducing Uncertainty in Dynamic Programming Stochastic dynamic programming presents a very exible framework to handle

More information

3 Stability and Lyapunov Functions

3 Stability and Lyapunov Functions CDS140a Nonlinear Systems: Local Theory 02/01/2011 3 Stability and Lyapunov Functions 3.1 Lyapunov Stability Denition: An equilibrium point x 0 of (1) is stable if for all ɛ > 0, there exists a δ > 0 such

More information

Model reduction for linear systems by balancing

Model reduction for linear systems by balancing Model reduction for linear systems by balancing Bart Besselink Jan C. Willems Center for Systems and Control Johann Bernoulli Institute for Mathematics and Computer Science University of Groningen, Groningen,

More information

On the stability of receding horizon control with a general terminal cost

On the stability of receding horizon control with a general terminal cost On the stability of receding horizon control with a general terminal cost Ali Jadbabaie and John Hauser Abstract We study the stability and region of attraction properties of a family of receding horizon

More information

2 G. D. DASKALOPOULOS AND R. A. WENTWORTH general, is not true. Thus, unlike the case of divisors, there are situations where k?1 0 and W k?1 = ;. r;d

2 G. D. DASKALOPOULOS AND R. A. WENTWORTH general, is not true. Thus, unlike the case of divisors, there are situations where k?1 0 and W k?1 = ;. r;d ON THE BRILL-NOETHER PROBLEM FOR VECTOR BUNDLES GEORGIOS D. DASKALOPOULOS AND RICHARD A. WENTWORTH Abstract. On an arbitrary compact Riemann surface, necessary and sucient conditions are found for the

More information

Stochastic Nonlinear Stabilization Part II: Inverse Optimality Hua Deng and Miroslav Krstic Department of Mechanical Engineering h

Stochastic Nonlinear Stabilization Part II: Inverse Optimality Hua Deng and Miroslav Krstic Department of Mechanical Engineering h Stochastic Nonlinear Stabilization Part II: Inverse Optimality Hua Deng and Miroslav Krstic Department of Mechanical Engineering denghua@eng.umd.edu http://www.engr.umd.edu/~denghua/ University of Maryland

More information

Contents. 2.1 Vectors in R n. Linear Algebra (part 2) : Vector Spaces (by Evan Dummit, 2017, v. 2.50) 2 Vector Spaces

Contents. 2.1 Vectors in R n. Linear Algebra (part 2) : Vector Spaces (by Evan Dummit, 2017, v. 2.50) 2 Vector Spaces Linear Algebra (part 2) : Vector Spaces (by Evan Dummit, 2017, v 250) Contents 2 Vector Spaces 1 21 Vectors in R n 1 22 The Formal Denition of a Vector Space 4 23 Subspaces 6 24 Linear Combinations and

More information

March 16, Abstract. We study the problem of portfolio optimization under the \drawdown constraint" that the

March 16, Abstract. We study the problem of portfolio optimization under the \drawdown constraint that the ON PORTFOLIO OPTIMIZATION UNDER \DRAWDOWN" CONSTRAINTS JAKSA CVITANIC IOANNIS KARATZAS y March 6, 994 Abstract We study the problem of portfolio optimization under the \drawdown constraint" that the wealth

More information

REMARKS ON THE TIME-OPTIMAL CONTROL OF A CLASS OF HAMILTONIAN SYSTEMS. Eduardo D. Sontag. SYCON - Rutgers Center for Systems and Control

REMARKS ON THE TIME-OPTIMAL CONTROL OF A CLASS OF HAMILTONIAN SYSTEMS. Eduardo D. Sontag. SYCON - Rutgers Center for Systems and Control REMARKS ON THE TIME-OPTIMAL CONTROL OF A CLASS OF HAMILTONIAN SYSTEMS Eduardo D. Sontag SYCON - Rutgers Center for Systems and Control Department of Mathematics, Rutgers University, New Brunswick, NJ 08903

More information

Definition 5.1. A vector field v on a manifold M is map M T M such that for all x M, v(x) T x M.

Definition 5.1. A vector field v on a manifold M is map M T M such that for all x M, v(x) T x M. 5 Vector fields Last updated: March 12, 2012. 5.1 Definition and general properties We first need to define what a vector field is. Definition 5.1. A vector field v on a manifold M is map M T M such that

More information

Generalized Riccati Equations Arising in Stochastic Games

Generalized Riccati Equations Arising in Stochastic Games Generalized Riccati Equations Arising in Stochastic Games Michael McAsey a a Department of Mathematics Bradley University Peoria IL 61625 USA mcasey@bradley.edu Libin Mou b b Department of Mathematics

More information

Methods for a Class of Convex. Functions. Stephen M. Robinson WP April 1996

Methods for a Class of Convex. Functions. Stephen M. Robinson WP April 1996 Working Paper Linear Convergence of Epsilon-Subgradient Descent Methods for a Class of Convex Functions Stephen M. Robinson WP-96-041 April 1996 IIASA International Institute for Applied Systems Analysis

More information

Positive Stabilization of Infinite-Dimensional Linear Systems

Positive Stabilization of Infinite-Dimensional Linear Systems Positive Stabilization of Infinite-Dimensional Linear Systems Joseph Winkin Namur Center of Complex Systems (NaXys) and Department of Mathematics, University of Namur, Belgium Joint work with Bouchra Abouzaid

More information

3 (Due ). Let A X consist of points (x, y) such that either x or y is a rational number. Is A measurable? What is its Lebesgue measure?

3 (Due ). Let A X consist of points (x, y) such that either x or y is a rational number. Is A measurable? What is its Lebesgue measure? MA 645-4A (Real Analysis), Dr. Chernov Homework assignment 1 (Due ). Show that the open disk x 2 + y 2 < 1 is a countable union of planar elementary sets. Show that the closed disk x 2 + y 2 1 is a countable

More information

Hausdorff Continuous Viscosity Solutions of Hamilton-Jacobi Equations

Hausdorff Continuous Viscosity Solutions of Hamilton-Jacobi Equations Hausdorff Continuous Viscosity Solutions of Hamilton-Jacobi Equations R Anguelov 1,2, S Markov 2,, F Minani 3 1 Department of Mathematics and Applied Mathematics, University of Pretoria 2 Institute of

More information

WITH REPULSIVE SINGULAR FORCES MEIRONG ZHANG. (Communicated by Hal L. Smith) of repulsive type in the sense that G(u)! +1 as u! 0.

WITH REPULSIVE SINGULAR FORCES MEIRONG ZHANG. (Communicated by Hal L. Smith) of repulsive type in the sense that G(u)! +1 as u! 0. PROCEEDINGS OF THE AMERICAN MATHEMATICAL SOCIETY Volume 127, Number 2, February 1999, Pages 41{47 S 2-9939(99)512-5 PERIODIC SOLUTIONS OF DAMPED DIFFERENTIAL SYSTEMS WITH REPULSIVE SINGULAR FORCES MEIRONG

More information

Review of Controllability Results of Dynamical System

Review of Controllability Results of Dynamical System IOSR Journal of Mathematics (IOSR-JM) e-issn: 2278-5728, p-issn: 2319-765X. Volume 13, Issue 4 Ver. II (Jul. Aug. 2017), PP 01-05 www.iosrjournals.org Review of Controllability Results of Dynamical System

More information

Topic 6: Projected Dynamical Systems

Topic 6: Projected Dynamical Systems Topic 6: Projected Dynamical Systems John F. Smith Memorial Professor and Director Virtual Center for Supernetworks Isenberg School of Management University of Massachusetts Amherst, Massachusetts 01003

More information

Value Function and Optimal Trajectories for some State Constrained Control Problems

Value Function and Optimal Trajectories for some State Constrained Control Problems Value Function and Optimal Trajectories for some State Constrained Control Problems Hasnaa Zidani ENSTA ParisTech, Univ. of Paris-saclay "Control of State Constrained Dynamical Systems" Università di Padova,

More information

Stochastic Processes

Stochastic Processes Introduction and Techniques Lecture 4 in Financial Mathematics UiO-STK4510 Autumn 2015 Teacher: S. Ortiz-Latorre Stochastic Processes 1 Stochastic Processes De nition 1 Let (E; E) be a measurable space

More information

Continuous Time Finance

Continuous Time Finance Continuous Time Finance Lisbon 2013 Tomas Björk Stockholm School of Economics Tomas Björk, 2013 Contents Stochastic Calculus (Ch 4-5). Black-Scholes (Ch 6-7. Completeness and hedging (Ch 8-9. The martingale

More information

On at systems behaviors and observable image representations

On at systems behaviors and observable image representations Available online at www.sciencedirect.com Systems & Control Letters 51 (2004) 51 55 www.elsevier.com/locate/sysconle On at systems behaviors and observable image representations H.L. Trentelman Institute

More information

UC Berkeley Department of Electrical Engineering and Computer Science. EECS 227A Nonlinear and Convex Optimization. Solutions 5 Fall 2009

UC Berkeley Department of Electrical Engineering and Computer Science. EECS 227A Nonlinear and Convex Optimization. Solutions 5 Fall 2009 UC Berkeley Department of Electrical Engineering and Computer Science EECS 227A Nonlinear and Convex Optimization Solutions 5 Fall 2009 Reading: Boyd and Vandenberghe, Chapter 5 Solution 5.1 Note that

More information

The iterative convex minorant algorithm for nonparametric estimation

The iterative convex minorant algorithm for nonparametric estimation The iterative convex minorant algorithm for nonparametric estimation Report 95-05 Geurt Jongbloed Technische Universiteit Delft Delft University of Technology Faculteit der Technische Wiskunde en Informatica

More information

THEODORE VORONOV DIFFERENTIABLE MANIFOLDS. Fall Last updated: November 26, (Under construction.)

THEODORE VORONOV DIFFERENTIABLE MANIFOLDS. Fall Last updated: November 26, (Under construction.) 4 Vector fields Last updated: November 26, 2009. (Under construction.) 4.1 Tangent vectors as derivations After we have introduced topological notions, we can come back to analysis on manifolds. Let M

More information

4F3 - Predictive Control

4F3 - Predictive Control 4F3 Predictive Control - Lecture 2 p 1/23 4F3 - Predictive Control Lecture 2 - Unconstrained Predictive Control Jan Maciejowski jmm@engcamacuk 4F3 Predictive Control - Lecture 2 p 2/23 References Predictive

More information

CONTROLLABILITY OF NONLINEAR DISCRETE SYSTEMS

CONTROLLABILITY OF NONLINEAR DISCRETE SYSTEMS Int. J. Appl. Math. Comput. Sci., 2002, Vol.2, No.2, 73 80 CONTROLLABILITY OF NONLINEAR DISCRETE SYSTEMS JERZY KLAMKA Institute of Automatic Control, Silesian University of Technology ul. Akademicka 6,

More information

C.I.BYRNES,D.S.GILLIAM.I.G.LAUK O, V.I. SHUBOV We assume that the input u is given, in feedback form, as the output of a harmonic oscillator with freq

C.I.BYRNES,D.S.GILLIAM.I.G.LAUK O, V.I. SHUBOV We assume that the input u is given, in feedback form, as the output of a harmonic oscillator with freq Journal of Mathematical Systems, Estimation, and Control Vol. 8, No. 2, 1998, pp. 1{12 c 1998 Birkhauser-Boston Harmonic Forcing for Linear Distributed Parameter Systems C.I. Byrnes y D.S. Gilliam y I.G.

More information

Chapter 1. Preliminaries. The purpose of this chapter is to provide some basic background information. Linear Space. Hilbert Space.

Chapter 1. Preliminaries. The purpose of this chapter is to provide some basic background information. Linear Space. Hilbert Space. Chapter 1 Preliminaries The purpose of this chapter is to provide some basic background information. Linear Space Hilbert Space Basic Principles 1 2 Preliminaries Linear Space The notion of linear space

More information

2 optimal prices the link is either underloaded or critically loaded; it is never overloaded. For the social welfare maximization problem we show that

2 optimal prices the link is either underloaded or critically loaded; it is never overloaded. For the social welfare maximization problem we show that 1 Pricing in a Large Single Link Loss System Costas A. Courcoubetis a and Martin I. Reiman b a ICS-FORTH and University of Crete Heraklion, Crete, Greece courcou@csi.forth.gr b Bell Labs, Lucent Technologies

More information

A characterization of consistency of model weights given partial information in normal linear models

A characterization of consistency of model weights given partial information in normal linear models Statistics & Probability Letters ( ) A characterization of consistency of model weights given partial information in normal linear models Hubert Wong a;, Bertrand Clare b;1 a Department of Health Care

More information

Formula Sheet for Optimal Control

Formula Sheet for Optimal Control Formula Sheet for Optimal Control Division of Optimization and Systems Theory Royal Institute of Technology 144 Stockholm, Sweden 23 December 1, 29 1 Dynamic Programming 11 Discrete Dynamic Programming

More information

Research Article Indefinite LQ Control for Discrete-Time Stochastic Systems via Semidefinite Programming

Research Article Indefinite LQ Control for Discrete-Time Stochastic Systems via Semidefinite Programming Mathematical Problems in Engineering Volume 2012, Article ID 674087, 14 pages doi:10.1155/2012/674087 Research Article Indefinite LQ Control for Discrete-Time Stochastic Systems via Semidefinite Programming

More information

Dissipativity. Outline. Motivation. Dissipative Systems. M. Sami Fadali EBME Dept., UNR

Dissipativity. Outline. Motivation. Dissipative Systems. M. Sami Fadali EBME Dept., UNR Dissipativity M. Sami Fadali EBME Dept., UNR 1 Outline Differential storage functions. QSR Dissipativity. Algebraic conditions for dissipativity. Stability of dissipative systems. Feedback Interconnections

More information

REGLERTEKNIK AUTOMATIC CONTROL LINKÖPING

REGLERTEKNIK AUTOMATIC CONTROL LINKÖPING Invariant Sets for a Class of Hybrid Systems Mats Jirstrand Department of Electrical Engineering Linkoping University, S-581 83 Linkoping, Sweden WWW: http://www.control.isy.liu.se Email: matsj@isy.liu.se

More information