JOURNAL OF GUIDANCE, CONTROL, AND DYNAMICS Vol. 31, No. 6, November–December 2008

Discrete-Time Synergetic Optimal Control of Nonlinear Systems

Nusawardhana, S. H. Żak, and W. A. Crossley
Purdue University, West Lafayette, Indiana 47907
DOI: /

Discrete-time synergetic optimal control strategies are proposed for a class of nonlinear discrete-time dynamic systems, in which a special performance index is used that results in closed-form solutions of the optimal control problems. Reduced-dimension aggregated variables representing a combination of the actual controlled plant variables are used to define the performance index. The control law optimizing the performance index for the respective nonlinear dynamic system is derived from the associated first-order difference equation in terms of the aggregated variables. Some connections between discrete-time synergetic control and the discrete-time linear quadratic regulator, as well as discrete-time variable-structure sliding-mode controls, are established. A control design procedure leading to closed-loop stability of a class of nonlinear systems with matched nonlinearities is presented. For these types of systems, the discrete-time synergetic optimal control strategy for tracking problems is developed by incorporating integral action. The closed-loop stability depends upon proper construction of the aggregated variables so that the closed-loop nonlinear system on the manifold specified by the aggregated variables is asymptotically stable. An algorithm for the construction of such a stabilizing manifold is given. Finally, the results are illustrated with an example that involves a discrete-time nonlinear helicopter model.
Nomenclature

A = state matrix in R^{n×n}
Ã = state matrix of the augmented state-space model
A(x) = state-dependent vector function of dimension n
A_x, A_w, A_z = state matrices associated with state vectors x, w, z
a_i = the ith parameter appearing in the helicopter dynamic model
B = input matrix in R^{n×m}
B̃ = input matrix of the augmented state-space model
B(x) = state-dependent input matrix function of size n × m
B_u, B_f = input matrices with respect to the u and f inputs
F = gain matrix of state-feedback control
f = state-dependent vector of nonlinear functions of dimension q
h = altitude variable
I, I_m = identity matrix, identity matrix in R^{m×m}
J = numerical value of the cost functional in optimal control problems
j = imaginary number operator, √−1
k = discrete time
k_0, k_f = initial and final time in the discrete-time instance
L = cost-functional integrand
M = manifold {x: Sx = 0} of dimension n − m
N = weight matrix for the cross term between state and control variables in the linear quadratic regulator integrand
O = zero matrix
P, P̃ = matrices, solutions to the Riccati equation or the algebraic Hamilton–Jacobi–Bellman equation
Q = weight matrix for state variables in the linear quadratic regulator integrand
R = weight matrix for control variables in the linear quadratic regulator integrand
r = vector of reference variables of dimension p
S = surface/hyperplane matrix of dimension m × n
T = symmetric positive-definite constant matrix, design parameter matrix, T > 0
u = input vector of dimension m
V = matrix that completes the column rank of B
v = input derivative vector or auxiliary control vector in helicopter model dynamics
W = matrix orthogonal to B
W^g = a left inverse of W, so that W^g W = I_{n−m}
x = state-variable vector of dimension n
y = output-variable vector of dimension p
z = state-variable vector of dimension n
Z, Z^{−1} = one-step-ahead and one-step-delay matrix operators
0 = zero column vector
α = variant of the discrete-time sliding-mode reaching law
γ = design parameter in the discrete-time tracking problem
Δ = difference operator
ζ = temporary variable in the helicopter dynamic model
θ_c = collective pitch angle
Λ = solution to the polynomial matrix equation
Λ_+, Λ_− = additive solution and subtractive solution to the polynomial matrix equation
λ = poles or eigenvalues
σ, σ(x) = vector in R^m, an aggregated variable of x defining the manifold M
τ = sampling time
ω = angular velocity

Received 16 January 2008; revision received 8 May 2008; accepted for publication 23 May 2008. Copyright 2008 by the authors. Published by the American Institute of Aeronautics and Astronautics, Inc., with permission. Copies of this paper may be made for personal or internal use, on condition that the copier pay the $10.00 per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923; include the code /08 $10.00 in correspondence with the CCC.

Doctoral Graduate, School of Aeronautics and Astronautics; currently Cummins Inc., Columbus, IN. Member AIAA.
Professor, School of Electrical and Computer Engineering, Electrical Engineering Building, 465 Northwestern Avenue.
Associate Professor, School of Aeronautics and Astronautics, 701 West Stadium Avenue. Associate Fellow AIAA.

I. Introduction

OPTIMAL control of a discrete-time dynamic system is concerned with finding a sequence of decisions over discrete time to optimize a given objective functional. Dynamic programming (DP) offers a computational procedure based on the principle of optimality to determine the optimizing control sequence [1–6]. The optimizing control sequence in discrete-time optimal
control problems can be obtained by solving the Bellman equation. Even though DP offers a systematic procedure to compute the optimizing sequence, the computational requirements to complete this task may become prohibitively large [2,7]. One way to overcome the excessive computational complexity is to use methods that efficiently generate approximate solutions [7]. Closed-form analytical solutions to optimal control problems of discrete-time dynamic systems are attainable for linear systems with quadratic objective functionals [8–12]. Different forms of objective functionals lead to different control laws [13,14]. In this paper, we consider a special form of the objective functional that allows us to obtain closed-form analytical solutions to optimal control problems for a class of discrete-time nonlinear dynamic systems. The format of such an objective functional resembles that of the optimal control problem for continuous-time dynamic systems, leading to the so-called synergetic control strategies. The synergetic control method was recently developed by Kolesnikov et al. [15–17] and applied to engineering energy-conversion problems [18,19]. Nusawardhana et al. [20] also present a list of references on synergetic control, including those of Kolesnikov et al. [15–17]. The synergetic control approach yields an optimizing control law that is derived from the associated linear first-order ordinary differential equation rather than a nonlinear partial differential equation as in classical continuous optimal control. In [20–22], an alternative derivation of synergetic optimal controls, from both necessary and sufficient optimality conditions, is offered and the proposed controllers are applied to aerospace problems.
The term synergetics, as described by Haken in [23–26], is concerned with the spontaneous formation of macroscopic spatial, temporal, or functional structures of systems via self-organization and is associated with systems composed of many subsystems, which may be of quite different natures. One important subject of discussion in these references is the cooperation of these subsystems on macroscopic scales. The synergetics principle translates to the field of control in terms of state-variable aggregation. As an illustration, consider a dynamic system with an n-dimensional state vector x and an m-dimensional input variable. Let σ be a variable defined as σ = Sx, where S is a matrix in R^{m×n}; σ constitutes an aggregated variable or metavariable. Aggregated variables constitute macroscopic variables from the perspective of synergetics. To be consistent with one of the control objectives, the selection or the construction of macroscopic variables must avoid internal divergence. In other words, stability observed at the macroscopic level must imply stability at the microscopic level. The fundamental concept in the synergetic control approach is that of an invariant manifold and the associated law governing the controlled system's trajectory convergence to the invariant manifold. In this paper, we show that synergetic control is an effective means of controlling discrete-time nonlinear systems. We add that discrete-time dynamics commonly arise after time discretization of the original continuous-time system is performed to digitally simulate and/or control its dynamic behavior. However, there are dynamic systems that are inherently discrete-time. Economic systems, for example, are often represented in the discrete-time domain because control strategies for these systems are applied in discrete time (see, for example, [27] and references therein).
The first objective of our presentation is to derive the discrete-time synergetic control from both the discrete-time calculus of variations and the discrete-time Bellman equation (or dynamic programming) and to establish some connections with the discrete-time variable-structure sliding-mode approach. We show that discrete-time synergetic control satisfies not only the necessary condition for optimality but also the sufficient condition for optimality. We show that the discrete-time synergetic control for linear systems is the same as the discrete-time nonlinear variable-structure sliding-mode controller employing a certain reaching law. The second objective is to show that for linear time-invariant (LTI) systems and for a special form of the performance index, the synergetic control approach yields the same controller structure as the linear quadratic regulator (LQR) approach. The third objective is to present our stability analysis of the discrete-time synergetic control and a constructive algorithm to design a stabilizing manifold. The fourth objective is to present a discrete-time synergetic control-based design procedure for tracking a reference command using integral action. Integral action is commonly used in the design of controllers to track constant or almost-constant reference signals. Stability analysis of the closed-loop system is performed. The final objective is to apply the proposed method to a tracking problem of a laboratory-scale helicopter model.

II. Plant Model and Problem Statement

We consider a class of nonlinear discrete-time dynamic systems modeled by

x_{k+1} = A(x_k) + B(x_k) u_k   (1)

where x_k ∈ R^n is a vector of state variables, u_k ∈ R^m is an input vector, A(x) is a state-dependent vector function of dimension n, and B(x) is an n × m state-dependent input matrix. We define a macrovariable

σ_k = σ(x_k) = S x_k   (2)

where the constant matrix S ∈ R^{m×n} is constructed so that the square m × m matrix S B(x_k) is invertible.
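To make the setup concrete, here is a minimal numerical sketch of the plant (1) and the macrovariable (2). The functions A_of_x and B_of_x and the matrix S are hypothetical illustrations (not the paper's helicopter model), and NumPy is assumed available:

```python
import numpy as np

# Hypothetical nonlinear plant x_{k+1} = A(x_k) + B(x_k) u_k with n = 2, m = 1.
def A_of_x(x):
    # state-dependent vector function of dimension n
    return np.array([x[0] + 0.1 * x[1],
                     0.9 * x[1] + 0.05 * np.sin(x[0])])

def B_of_x(x):
    # state-dependent n x m input matrix
    return np.array([[0.0],
                     [1.0 + 0.1 * x[0] ** 2]])

S = np.array([[1.0, 1.0]])     # surface matrix; sigma_k = S x_k

x = np.array([1.0, -0.5])
sigma = S @ x                  # aggregated (macro) variable, in R^m
SB = S @ B_of_x(x)             # the m x m matrix S B(x_k); must be invertible
```

For this choice of S, the scalar S B(x) = 1 + 0.1 x_1^2 is never zero, so the invertibility requirement below Eq. (2) holds for every state.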
We wish to solve the following optimal control problem. For a given class of nonlinear systems modeled by Eq. (1), construct a control sequence u_k, k ∈ [k_0, ∞), that minimizes the performance index

J = Σ_{k=k_0}^{∞} (σ_k^T σ_k + Δσ_k^T T Δσ_k) = Σ_{k=k_0}^{∞} [σ_k^T σ_k + (σ_{k+1} − σ_k)^T T (σ_{k+1} − σ_k)]   (3)

where the m × m symmetric positive-definite matrix T = T^T > 0 is a design parameter matrix. The performance index J depends on u_k through the variable σ_{k+1}. Note that σ_{k+1} = σ(x_{k+1}) depends upon u_k because

σ_{k+1} = S x_{k+1} = S[A(x_k) + B(x_k) u_k]

The performance index (3) has not been commonly used in the discrete-time optimal control literature. However, this novel performance index leads to new types of optimal control strategies that are expressible in closed form, as we demonstrate in this paper.

III. Controller Construction Using the Calculus of Variations

Our derivation of the discrete-time synergetic control law is rooted in the calculus of variations. Continuous functional optimization in the calculus of variations is solved using the continuous Euler–Lagrange equation. The well-known fundamental principle of the calculus of variations states that for a functional to be extremized, the first variation of the functional must be equal to zero (see, for example, Theorem I on page 178 in [28], Theorem 3.1 on page 70 in [29], or [30]). In the discrete-time case, we have the following theorem for the functional extremum, which is adapted from page 127 in [31] (see also [32]). The discussion and mathematical derivation of the discrete-time Euler–Lagrange equation in these references serve as the proof of the following theorem.

Theorem 1 (a necessary condition for an extremum): Let

J = Σ_{k=k_0}^{k_f − 1} L(x_k, x_{k+1}, k)

be a functional defined on the set of sequences x_k on [k_0, k_f] that satisfy the boundary conditions x_0 = x_{k_0} and x_f = x_{k_f}. Then the necessary condition for x_k to be an extremizer of the cost J is that x_k satisfies the difference equation
∂L(x_k, x_{k+1}, k)/∂x_k + ∂L(x_{k−1}, x_k, k − 1)/∂x_k = 0   (4)

where the derivative ∂/∂x_k, applied to a real-valued function of many variables, is a column vector. We now apply the preceding necessary condition to the cost functional (3):

∂/∂σ_k {σ_k^T σ_k + (σ_{k+1} − σ_k)^T T (σ_{k+1} − σ_k) + σ_{k−1}^T σ_{k−1} + (σ_k − σ_{k−1})^T T (σ_k − σ_{k−1})} = 0   (5)

Performing simple manipulations yields

2σ_k − 2T(σ_{k+1} − σ_k) + 2T(σ_k − σ_{k−1}) = 2(I + 2T)σ_k − 2Tσ_{k+1} − 2Tσ_{k−1} = 0   (6)

Rearranging terms in Eq. (6) and dividing both sides by 2, we obtain

Tσ_{k+1} = (I + 2T)σ_k − Tσ_{k−1}   (7)

Let Z be a one-step-ahead operator defined as Zσ_k = σ_{k+1}; then Z^{−1}σ_k = σ_{k−1}. Using the operator Z, the difference equation (7) can be rewritten as

[TZ − (I + 2T) + TZ^{−1}]σ_k = 0   (8)

For Eq. (8) to hold for all values of σ_k, the following matrix equation must also hold:

IZ − T^{−1}(I + 2T) + IZ^{−1} = O   (9)

where O denotes a zero matrix. Postmultiplying Eq. (9) by Z, we obtain

IZ^2 − T^{−1}(I + 2T)Z + I = O   (10)

This is a quadratic matrix equation that can be represented in a factored form:

(IZ − Λ_+)(IZ − Λ_−) = O   (11)

where applying the matrix version of the quadratic formula gives

Λ_± = (1/2)T^{−1}(I + 2T) ± (1/2)[T^{−2}(I + 2T)^2 − 4I]^{1/2}

To ensure the stability of the closed-loop discrete-time system, it is desirable to obtain a stabilizing solution to Eq. (10). We choose Λ = Λ_− as the solution to the polynomial matrix Eq. (10) because all eigenvalues of Λ_− lie inside the unit disc. To see this, we perform the manipulations

Λ_− = (1/2)T^{−1}(I + 2T) − (1/2)[T^{−2}(I + 4T + 4T^2) − 4I]^{1/2}
= (1/2)T^{−1}(I + 2T) − (1/2)[T^{−2}(I + 4T) + 4I − 4I]^{1/2}
= (1/2)T^{−1}(I + 2T) − (1/2)T^{−1}(I + 4T)^{1/2}
< (1/2)T^{−1}(I + 2T) − (1/2)T^{−1} = I_m

It is easy to verify that Λ_+ Λ_− = I_m and thus Λ_+ = Λ_−^{−1}. Because Λ_+ > O, we have Λ_− = Λ_+^{−1} > O. Hence, O < Λ_− < I. This makes Λ = Λ_− the stabilizing solution to the matrix polynomial Eq. (10). Hence, we employ

(IZ − Λ)σ_k = 0  ⇒  σ_{k+1} = Λσ_k   (12)

to derive the extremizing control law. We summarize the preceding development in the following proposition.

Proposition 1: If σ_k is a local minimizer of the functional

J = Σ_{k=k_0}^{∞} [σ_k^T σ_k + (σ_{k+1} − σ_k)^T T (σ_{k+1} − σ_k)]   (13)

on all admissible vector functions σ_k, then

σ_{k+1} = Λσ_k   (14)

where

Λ = (1/2)T^{−1}(I + 2T) − (1/2)[T^{−2}(I + 2T)^2 − 4I]^{1/2}

The vector function σ_k that satisfies Eq.
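The stabilizing root Λ_− can be computed directly from the matrix quadratic formula above. The following sketch (with an arbitrary symmetric positive-definite design matrix T, and assuming NumPy and SciPy are available) also checks the properties Λ_+ Λ_− = I_m and O < Λ_− < I used in the argument; note that T^{−1}(I + 2T) = T^{−1} + 2I is symmetric, so the matrix square root is real:

```python
import numpy as np
from scipy.linalg import sqrtm

def lambda_roots(T):
    """Lambda_+/- = (1/2) T^{-1}(I + 2T) +/- (1/2) [T^{-2}(I + 2T)^2 - 4I]^{1/2}."""
    m = T.shape[0]
    I = np.eye(m)
    M = np.linalg.inv(T) @ (I + 2 * T)      # equals T^{-1} + 2I, symmetric
    root = sqrtm(M @ M - 4 * I).real        # M^2 - 4I > 0 for T > 0
    return 0.5 * (M + root), 0.5 * (M - root)

T = np.array([[2.0, 0.5],
              [0.5, 1.0]])                  # design parameter, T = T^T > 0
Lp, Lm = lambda_roots(T)

assert np.allclose(Lp @ Lm, np.eye(2))            # Lambda_+ Lambda_- = I_m
assert np.max(np.abs(np.linalg.eigvals(Lm))) < 1  # Lambda_- is stabilizing
```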
(14) is referred to as an extremizing σ_k. Using Proposition 1, we can now state and prove the following theorem.

Theorem 2 (discrete-time synergetic extremum control): For the dynamic system model

x_{k+1} = A(x_k) + B(x_k) u_k,   k_0 ≤ k < ∞,   x_{k_0} = x_0

and the associated performance index

J = Σ_{k=k_0}^{∞} [σ_k^T σ_k + (σ_{k+1} − σ_k)^T T (σ_{k+1} − σ_k)]

where σ_k = S x_k, x_k ∈ R^n, and S B(x_k) is invertible, the extremizing control law is

u_k = −[S B(x_k)]^{−1}[S A(x_k) − Λ S x_k]   (15)

Proof: By Proposition 1, the trajectory σ_k extremizing the performance index (13) must satisfy the difference equation (14). Substituting the system dynamics into σ_{k+1} = S x_{k+1} yields

σ_{k+1} = S[A(x_k) + B(x_k) u_k]

Substituting the preceding into the difference equation (14) gives

S A(x_k) + S B(x_k) u_k = Λ S x_k   (16)

Solving Eq. (16) for u_k, we obtain Eq. (15), which is the control law that extremizes the given performance index.

Note that we refer to the control law (15) as extremizing rather than minimizing to emphasize the fact that the control law (15) is derived from the necessary optimality condition [33]. However, as we show in the next section, the control law (15) is also optimizing: in our case, minimizing.

IV. Derivation of Discrete-Time Synergetic Control from Bellman's Equation

Consider a general discrete-time nonlinear dynamic system modeled by

x_{k+1} = f(x_k, u_k),   k_0 ≤ k < ∞,   x_{k_0} = x_0   (17)

where x_k ∈ R^n. The problem of finding a control sequence {u_k: k ∈ [k_0, ∞)} along with its corresponding state trajectory {x_k: k ∈ [k_0, ∞)} to minimize the performance index

J(x_0) = Σ_{k=k_0}^{∞} L(x_k, u_k)   (18)

constitutes an infinite-horizon discrete-time optimal control problem. One way to synthesize an optimal feedback control sequence u_k, k ≥ k_0, is to employ Bellman's equation [3,8,9], which provides both necessary and sufficient conditions for the optimality
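The extremizing control law (15) is one line of linear algebra per step. Here is a minimal sketch using a hypothetical linear-in-the-state plant (so A(x) = Ax with constant B) and an illustrative scalar Λ rather than one computed from a particular T; all numerical values are placeholders:

```python
import numpy as np

def synergetic_control(x, A_of_x, B_of_x, S, Lam):
    """u_k = -(S B(x_k))^{-1} (S A(x_k) - Lambda S x_k), Eq. (15)."""
    SB = S @ B_of_x(x)                      # m x m, assumed invertible
    return -np.linalg.solve(SB, S @ A_of_x(x) - Lam @ (S @ x))

# Hypothetical example: n = 2, m = 1, open-loop unstable.
A = np.array([[1.0, 0.1],
              [0.0, 1.2]])
B = np.array([[0.0],
              [1.0]])
S = np.array([[1.0, 1.0]])
Lam = np.array([[0.5]])                     # any stabilizing Lambda, O < Lam < I

x = np.array([1.0, -0.5])
u = synergetic_control(x, lambda x: A @ x, lambda x: B, S, Lam)
x_next = A @ x + B @ u
```

By construction, the closed loop satisfies the defining relation (14): S x_{k+1} = Λ S x_k at every step.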
of the resulting controller. The following theorem (found in Vol. II, Sec. 1.2, of [3]) states the necessary and sufficient condition for the optimality of the solution to the preceding discrete-time optimal control problem.

Theorem 3 (necessary and sufficient condition for optimality): A policy u_k^* is optimal if and only if it is a minimizer of the Bellman equation:

J^*(x_k) = min_{u_k} {L(x_k, u_k) + J^*(f(x_k, u_k))} = L(x_k, u_k^*) + J^*(f(x_k, u_k^*))   (19)

Using Theorem 3, we construct a synergetic control strategy for a class of nonlinear systems.

Theorem 4 (discrete-time synergetic optimal control): Consider a nonlinear system model given by

x_{k+1} = A(x_k) + B(x_k) u_k   (20)

and the associated performance index

J(σ_{k_0}) = Σ_{k=k_0}^{∞} L(σ_k, σ_{k+1})   (21)

where

L(σ_k, σ_{k+1}) = σ_k^T σ_k + (σ_{k+1} − σ_k)^T T (σ_{k+1} − σ_k)   (22)

σ_k = S x_k, x_k ∈ R^n, and S B(x_k) is invertible. The optimal feedback control law u_k that minimizes the performance index (21) is obtained by solving the difference equation

σ_{k+1} = (T + P)^{−1} T σ_k   (23)

where the symmetric positive-definite P ∈ R^{m×m} is the solution to the algebraic equation

P = I + T − T^T (T + P)^{−1} T   (24)

The optimal control strategy has the form

u_k = −[(T + P) S B(x_k)]^{−1}[P S A(x_k) + T(S A(x_k) − S x_k)]   (25)

and the value of the performance index is

J^*(σ_{k_0}) = σ_{k_0}^T P σ_{k_0}   (26)

where P satisfies Eq. (24).

Proof: Let

J^*(σ_k) = σ_k^T P σ_k   (27)

The Bellman equation for the optimal control of the dynamic system (20) with the performance index (21) takes the form

σ_k^T P σ_k = min_{u_k} {σ_k^T σ_k + (σ_{k+1} − σ_k)^T T (σ_{k+1} − σ_k) + σ_{k+1}^T P σ_{k+1}}   (28)

To find the minimizing sequence u_k, we apply the first-order necessary condition for unconstrained minimization to the expression in the braces in Eq. (28). Note that σ_{k+1} depends on u_k, but σ_k does not. Using the relations

σ_k = S x_k  and  σ_{k+1} = S x_{k+1} = S[A(x_k) + B(x_k) u_k]

and setting the gradient with respect to u_k to zero, we obtain

2[S B(x_k)]^T {T[S A(x_k) + S B(x_k) u_k − S x_k] + P[S A(x_k) + S B(x_k) u_k]} = 0

that is,

(T + P)σ_{k+1} = T σ_k   (29)

Therefore, σ_k, which minimizes the performance index (21), satisfies the first-order difference equation

σ_{k+1} = (T + P)^{−1} T σ_k

We note that the difference equation (23) yields an asymptotically stable σ_k because all eigenvalues of (T + P)^{−1} T lie inside the unit disc. To see this, note that

(T + P)^{−1} T = (I + T^{−1} P)^{−1} < I

because T^{−1} P > O. We use the difference equation (29) to derive the optimal control law u_k. From Eq. (29), we obtain

(T + P)[S A(x_k) + S B(x_k) u_k] = T S x_k

Hence, the optimal control law is

u_k = −[(T + P) S B(x_k)]^{−1}[(T + P) S A(x_k) − T S x_k]

which coincides with Eq. (25). We next use the Bellman equation (28) to obtain the matrix P:

σ_k^T P σ_k = σ_k^T σ_k + (σ_{k+1} − σ_k)^T T (σ_{k+1} − σ_k) + σ_{k+1}^T P σ_{k+1}   (30)

Collecting all quadratic terms in σ_k on the left side of Eq. (30), we obtain

σ_k^T (P − I − T) σ_k = σ_{k+1}^T T σ_{k+1} − 2σ_{k+1}^T T σ_k + σ_{k+1}^T P σ_{k+1} = σ_{k+1}^T (T + P) σ_{k+1} − 2σ_{k+1}^T T σ_k   (31)

Because (T + P)σ_{k+1} = T σ_k, we can further simplify Eq. (31) to obtain

σ_k^T (P − I − T) σ_k = σ_{k+1}^T (T + P) σ_{k+1} − 2σ_{k+1}^T (T + P) σ_{k+1} = −σ_{k+1}^T T σ_k = −σ_k^T T^T (T + P)^{−1} T σ_k   (32)

Rearranging terms in Eq. (32) gives the following matrix equation for P:

P = I + T − T^T (T + P)^{−1} T   (33)

Two difference equations [Eq. (14), derived from the calculus of variations, and Eq. (23), derived from the Bellman equation] are used to construct the extremizing and minimizing control laws, respectively. It is not immediately obvious that these difference equations, along with their respective control laws (15) and (25), are equivalent. We now show that these difference equations are equivalent.

Theorem 5: The extremizing difference equation obtained from the calculus of variations is equivalent to the minimizing difference equation obtained from the Bellman equation because
Λ = (T + P)^{−1} T

so that

σ_{k+1} = (T + P)^{−1} T σ_k  ⇔  σ_{k+1} = Λ σ_k

that is, Eq. (23) takes the form of Eq. (14).

Proof: We add T to both sides of Eq. (33) to obtain

T + P = I + 2T − T^T (T + P)^{−1} T   (34)

Premultiplying both sides of Eq. (34) by T^{−1} gives

T^{−1}(T + P) = T^{−1}(I + 2T) − (T + P)^{−1} T

Collecting all terms on one side yields

T^{−1}(T + P) − T^{−1}(I + 2T) + (T + P)^{−1} T = O   (35)

Let ZI = T^{−1}(T + P); then Z^{−1}I = (T + P)^{−1} T, and Eq. (35) can be rewritten as

ZI − T^{−1}(I + 2T) + Z^{−1}I = O   (36)

Equation (36) is the same as Eq. (9) derived from the discrete Euler–Lagrange equation, with the stabilizing solution given by Z^{−1}. Therefore, Λ = [T^{−1}(T + P)]^{−1} = (T + P)^{−1} T.

V. Relationship Between the Discrete-Time Synergetic Optimal Controller and the Linear Quadratic Regulator

The performance index used to derive the optimal control strategies (15) and (25) is quadratic. Optimal control of discrete-time linear time-invariant systems using a quadratic performance index has been extensively investigated in [8,9,34–36]. In this section, we establish connections between the discrete-time synergetic optimal control and the discrete-time LQR.

A. Discrete-Time Synergetic Optimal Control for Linear Time-Invariant Systems

We first present the discrete-time synergetic control solution for a class of LTI systems.

Corollary 1 (discrete-time synergetic optimal control for LTI systems): Consider the linear discrete-time dynamic system modeled by

x_{k+1} = A x_k + B u_k,   k_0 ≤ k < ∞,   x_{k_0} = x_0   (37)

and the associated performance index

J(x_{k_0}) = Σ_{k=k_0}^{∞} [σ_k^T σ_k + (σ_{k+1} − σ_k)^T T (σ_{k+1} − σ_k)]   (38)

where σ_k = S x_k, x_k ∈ R^n, and S is constructed such that S B is invertible. The optimal control minimizing the performance index (38) satisfies the difference equation

σ_{k+1} = (T + P)^{−1} T σ_k   (39)

and is given by

u_k = −[(T + P) S B]^{−1}[P S A + T S(A − I)] x_k   (40)

The symmetric positive-definite matrix P ∈ R^{m×m} is the solution to the algebraic equation

P = I + T − T^T (T + P)^{−1} T   (41)

Proof: Substituting A and B for A(x_k) and B(x_k) in Theorem 4 gives the preceding results.
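Because Eq. (41) is a fixed-point condition in the m × m matrix P, it can be solved by simple repeated substitution (a sketch, not a prescribed numerical method from the paper; T is an arbitrary symmetric positive-definite example). The example also verifies the identity Λ = (T + P)^{−1} T of Theorem 5 against the quadratic-formula root of Eq. (10):

```python
import numpy as np
from scipy.linalg import sqrtm

def solve_P(T, iters=200):
    """Iterate P <- I + T - T (T + P)^{-1} T, Eq. (41), starting from P = I."""
    I = np.eye(T.shape[0])
    P = I.copy()
    for _ in range(iters):
        P = I + T - T @ np.linalg.solve(T + P, T)
    return P

T = np.array([[2.0, 0.5],
              [0.5, 1.0]])
P = solve_P(T)

# residual of the algebraic equation (41)
assert np.allclose(P, np.eye(2) + T - T @ np.linalg.solve(T + P, T))

# Theorem 5: (T + P)^{-1} T equals the stabilizing root Lambda_- of Eq. (10)
I = np.eye(2)
M = np.linalg.inv(T) @ (I + 2 * T)
Lam_minus = 0.5 * (M - sqrtm(M @ M - 4 * I).real)
assert np.allclose(np.linalg.solve(T + P, T), Lam_minus)
```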
To establish connections between the preceding synergetic optimal control law and the one obtained from LQR theory, we need the following background results.

B. Background Results from LQR Theory

In this section, we investigate the optimal linear quadratic regulator problem with a cross-product term in the quadratic performance index. We derive a discrete LQR control law following the steps in [12], in which the solution to the continuous-time LQR problem with cross-product terms in the quadratic performance index can be found. Then we employ the obtained results to derive an optimal synergetic control strategy using the Bellman equation. Consider a linear discrete-time dynamic system modeled by Eq. (37) and the associated quadratic performance index

J(x_{k_0}) = Σ_{k=k_0}^{∞} (x_k^T Q x_k + 2x_k^T N u_k + u_k^T R u_k)   (42)

where Q = Q^T ≥ 0, N ∈ R^{n×m}, and R = R^T > 0. The problem of determining an input sequence u_k = u_k^* (k ≥ k_0) for the dynamic system (37) that minimizes J(x_{k_0}) is called the deterministic discrete-time linear quadratic optimal regulator (DT-LQR) problem.

Lemma 1 (discrete-time LQR with cross-product term in the quadratic performance index): For the DT-LQR problem of dynamic system (37) with quadratic performance index J(x_{k_0}) given by Eq. (42), the control law

u_k = −(R + B^T P̃ B)^{−1}(N^T + B^T P̃ A) x_k   (43)

minimizes the quadratic performance index J(x_{k_0}), where the positive-definite symmetric matrix P̃ ∈ R^{n×n} is the solution to the algebraic Riccati equation

P̃ = Q + A^T P̃ A − (N + A^T P̃ B)(R + B^T P̃ B)^{−1}(N + A^T P̃ B)^T   (44)

The control law u_k given by Eq. (43), together with P̃ obtained from Eq. (44), solves the associated Bellman equation

J^*(x_k) = min_{u_k} {x_k^T Q x_k + 2x_k^T N u_k + u_k^T R u_k + J^*(x_{k+1})}   (45)

for all x_k ∈ R^n, where J^*(x_k) = x_k^T P̃ x_k.

Proof: Substituting x_k^T P̃ x_k for J^*(x_k) and x_{k+1}^T P̃ x_{k+1} = (A x_k + B u_k)^T P̃ (A x_k + B u_k) for J^*(x_{k+1}) in Eq. (45), then taking the derivative with respect to u_k and solving the resulting algebraic equation for u_k, we obtain the optimizing control law (43).
Substituting this optimizing control into Eq. (45) gives

x_k^T P̃ x_k = x_k^T [Q + A^T P̃ A − (N + A^T P̃ B)(R + B^T P̃ B)^{−1}(N + A^T P̃ B)^T] x_k   (46)

For the equality in Eq. (46) to hold for all x_k, the algebraic Riccati equation (44) must be satisfied.

C. Equivalence Between Synergetic and LQR Controllers

We begin by substituting the expressions for σ_k and σ_{k+1} into the quadratic performance index (38) and then eliminate x_{k+1}. After some manipulations, we represent Eq. (38) in the form

J(x_{k_0}) = Σ_{k=k_0}^{∞} (x_k^T Q x_k + 2x_k^T N u_k + u_k^T R u_k)   (47)

where

Q = S^T S + (A − I)^T S^T T S (A − I)
R = B^T S^T T S B
N = (A − I)^T S^T T S B   (48)

By Lemma 1, the DT-LQR optimal solution for the system (37) with the cost functional (47) is given by Eq. (43), where P̃ ∈ R^{n×n} is the
solution to the algebraic Riccati equation (44). We next show that the discrete-time synergetic optimal control law (40) and the discrete-time LQR control law (43) are equivalent optimal strategies associated with the quadratic performance index (38), expressed in terms of σ_k, or with Eq. (47), expressed in terms of x_k, respectively.

Theorem 6: The synergetic controller (40) and the DT-LQR controller (43) are equivalent; that is,

−(R + B^T P̃ B)^{−1}(N^T + B^T P̃ A) x_k = −[(T + P) S B]^{−1}[P S A + T S(A − I)] x_k

where P̃ = S^T P S.

Proof: We first show that P̃ = S^T P S. The Bellman equation for the discrete-time synergetic optimal control of the LTI system model (37) with the performance index (38) is given by Eq. (28). Substituting into Eq. (28) the expressions for σ_k and σ_{k+1} and letting J^*(x_k) = x_k^T S^T P S x_k, we obtain

x_k^T S^T P S x_k = x_k^T S^T S x_k + [S(A − I) x_k + S B u_k]^T T [S(A − I) x_k + S B u_k] + (A x_k + B u_k)^T S^T P S (A x_k + B u_k)   (49)

We rearrange the terms to obtain

x_k^T [S^T P S − A^T S^T P S A − S^T S − (A − I)^T S^T T S (A − I)] x_k = 2x_k^T [A^T S^T P S B + (A − I)^T S^T T S B] u_k + u_k^T [B^T S^T P S B + B^T S^T T S B] u_k   (50)

Substituting the synergetic control law (40) into Eq. (50) and performing lengthy but straightforward manipulations yields

S^T P S = S^T S + (A − I)^T S^T T S (A − I) + A^T S^T P S A − [(A − I)^T S^T T S B + A^T S^T P S B][B^T S^T T S B + B^T S^T P S B]^{−1}[B^T S^T T S (A − I) + B^T S^T P S A]   (51)

On the other hand, the solution to the DT-LQR problem of system (37) with the cost (47) is characterized by the matrix P̃, which is the solution to the Riccati equation

P̃ = Q + A^T P̃ A − (N + A^T P̃ B)(R + B^T P̃ B)^{−1}(N + A^T P̃ B)^T   (52)

We substitute the matrices Q, N, and R defined in Eq. (48) into Eq. (52) to obtain

P̃ = S^T S + (A − I)^T S^T T S (A − I) + A^T P̃ A − [(A − I)^T S^T T S B + A^T P̃ B][B^T S^T T S B + B^T P̃ B]^{−1}[B^T S^T T S (A − I) + B^T P̃ A]   (53)

Equations (51) and (53) are equal if and only if P̃ = S^T P S. Next, we show that the control laws (40) and (43) are equivalent. Using the matrix relations in Eq.
(48) to substitute for R and N in the DT-LQR control law, we obtain

u_k = −(R + B^T P̃ B)^{−1}(N^T + B^T P̃ A) x_k = −[B^T S^T T S B + B^T P̃ B]^{−1}[B^T S^T T S (A − I) + B^T P̃ A] x_k   (54)

Substituting now S^T P S for P̃ in Eq. (54) and taking into account that S B is invertible, the DT-LQR control law (54) takes the form of the synergetic control law (40).

VI. Discrete-Time Variable-Structure Sliding-Mode Control and Synergetic Control

The difference equation

σ_{k+1} = Λ σ_k,   O < Λ < I,   k_0 ≤ k < ∞   (55)

derived in Secs. III and IV is an important equation in discrete-time synergetic control because it determines the associated discrete-time optimizing control law. Note that the relation (55) indicates that σ_{k+1} and σ_k have the same sign. Recall that σ_k ∈ R^m. Let σ_{i,k} be the ith element of σ_k. The relation (55) implies

|σ_{i,k+1}| < |σ_{i,k}|

or, equivalently,

(σ_{i,k+1} − σ_{i,k}) sign(σ_{i,k}) < 0,   (σ_{i,k+1} + σ_{i,k}) sign(σ_{i,k}) > 0   (56)

where sign is the sign operator applied to each component of its vector argument. The relation (56) appears in discrete-time variable-structure sliding-mode control reaching conditions [37,38]. An overview of discrete-time sliding-mode control reaching conditions can be found in Chapter 3 of [39]. It is stated on page 931 of [38] that a control law satisfying Eq. (56), and hence Eq. (55), will guarantee that all the state trajectories will enter and remain within a domain of decreasing or, in the worst case, nonincreasing distance relative to the manifold M = {x: Sx = 0}. The reaching law of the discrete-time sliding-mode control in the form of Eq. (55) is known as the linear reaching law [39]. It is also known as the reaching law for variable structure with α-equivalent control [40]. The α-equivalent-control variable-structure method yields the equivalent control from the difference equation

σ_{k+1} = α σ_k

where |α| < 1.
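The implication from σ_{k+1} = Λσ_k to the reaching conditions (56) is easy to check numerically for a diagonal Λ with O < Λ < I (a toy illustration with arbitrary values; each component's distance to the manifold shrinks monotonically):

```python
import numpy as np

# sigma_{k+1} = Lam @ sigma_k with O < Lam < I (diagonal here for clarity)
Lam = np.diag([0.5, 0.8])
sigma = np.array([1.0, -2.0])

for _ in range(20):
    sigma_next = Lam @ sigma
    # decreasing distance to the manifold, component by component
    assert np.all(np.abs(sigma_next) < np.abs(sigma))
    # discrete-time reaching conditions, Eq. (56)
    assert np.all((sigma_next - sigma) * np.sign(sigma) < 0)
    assert np.all((sigma_next + sigma) * np.sign(sigma) > 0)
    sigma = sigma_next
```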
From the perspective of discrete-time synergetic control, the discrete-time sliding-mode control with the linear or α reaching law is optimizing, in the sense that it minimizes the cost functional

J = Σ_{k=k_0}^{∞} [σ_k^T σ_k + (σ_{k+1} − σ_k)^T T (σ_{k+1} − σ_k)]

A consequence of recognizing a connection between discrete-time synergetic control and sliding-mode control is that we can employ the invariant-manifold construction methods of sliding-mode variable structure.

VII. Analysis of the Closed-Loop System Driven by the Synergetic Controller

We consider the following LTI dynamic system model:

x_{k+1} = A x_k + B u_k

which is regulated by the discrete-time synergetic optimal control. The discrete-time synergetic optimal control law u_k is constructed to minimize the quadratic performance index (13), where σ_{k+1} = Λσ_k, which can be represented as

S x_{k+1} = S(A x_k + B u_k) = Λ S x_k   (57)

where Λ is given by Eq. (14); that is, all eigenvalues of Λ lie inside the unit disc in the complex plane. Therefore, the optimizing control law satisfying Eq. (57) is

u_k = −(S B)^{−1}(S A − Λ S) x_k   (58)
The closed-loop system dynamics are modeled as

x_{k+1} = [I − B(S B)^{−1} S] A x_k + B(S B)^{−1} Λ S x_k   (59)

The synergetic control strategy (58) forces the system trajectory to asymptotically approach the manifold {x: Sx = 0}. Thus, the closed-loop system will be asymptotically stable, provided that the system (59) restricted to the manifold {x: Sx = 0} is asymptotically stable. In our proof of the preceding statements, we use the following theorem, which can be found on page 333 of [41].

Theorem 7: Consider the system model x_{k+1} = A x_k + B u_k, where rank B = m, the pair (A, B) is controllable, and a given set of n − m complex numbers {λ_1, ..., λ_{n−m}} is symmetric with respect to the real axis. Then there exist full-rank matrices W ∈ R^{n×(n−m)} and W^g ∈ R^{(n−m)×n} such that W^g W = I_{n−m} and W^g B = O, and the eigenvalues of W^g A W are eig(W^g A W) = {λ_1, ..., λ_{n−m}}. Furthermore, there exists a matrix S ∈ R^{m×n} such that S B = I_m and the system x_{k+1} = A x_k + B u_k restricted to {x: S x = 0} has its poles at λ_1, ..., λ_{n−m}.

We can now state and prove the following theorem.

Theorem 8: If the manifold {x: S x = 0} is constructed so that the system x_{k+1} = A x_k + B u_k restricted to it is asymptotically stable and det(S B) ≠ 0, then the closed-loop system

x_{k+1} = A x_k + B u_k,   u_k = −(S B)^{−1}(S A − Λ S) x_k

is asymptotically stable.

Proof: Suppose that for a given set of complex numbers λ_1, ..., λ_{n−m} whose magnitudes are less than one, we constructed matrices S ∈ R^{m×n}, W ∈ R^{n×(n−m)}, and W^g ∈ R^{(n−m)×n} that satisfy the conditions of Theorem 7. Let

z_k = [z_{1,k}; z_{2,k}] = [W^g; S] x_k   (60)

where z_{1,k} is composed of state variables that characterize the closed-loop trajectories of Eq. (59) confined to the manifold {σ_k = 0}, and z_{2,k} comprises state variables characterizing trajectories approaching the manifold. The coordinate transformation matrix in Eq. (60) is invertible, with

[W^g; S]^{−1} = [W  B]   (61)

and therefore W^g W = I_{n−m}, S B = I_m, W^g B = O, and S W = O.
The closed-loop system dynamics expressed in the z coordinates have the form

z_{k+1} = [W^g; S] x_{k+1} = [W^g; S] {[I − B(S B)^{−1} S] A x_k + B(S B)^{−1} Λ S x_k}

Because S B = I_m, W^g B = O, and x_k = [W  B] z_k = W z_{1,k} + B z_{2,k}, the preceding relation becomes

z_{k+1} = [W^g A (W z_{1,k} + B z_{2,k}); Λ z_{2,k}]

Therefore, the closed-loop dynamic system has the following final form:

z_{k+1} = [W^g A W, W^g A B; O, Λ] z_k   (62)

The spectrum of the closed-loop system is the union of the eigenvalues of W^g A W and of Λ. Because the magnitudes of all eigenvalues of Λ are less than one and because all eigenvalues of W^g A W have magnitudes less than one by construction, all eigenvalues of the closed-loop system have magnitudes less than one, rendering the closed-loop system driven by the synergetic controller asymptotically stable.

The preceding analysis procedure can be extended to analyze the closed-loop stability of a class of nonlinear systems with matched nonlinearities (see, for example, page 548 in [42]). In particular, we consider a class of nonlinear systems that can be transformed into the so-called regular form:

[x_{1,k+1}; x_{2,k+1}] = [A_11, A_12; A_21(x_k), A_22(x_k)] [x_{1,k}; x_{2,k}] + [O; B_f] f(x_k) + [O; B_u] u_k = Ã(x_k) x_k + B̃_f f(x_k) + B̃_u u_k   (63)

where x_{1,k} ∈ R^{n−m}, x_{2,k} ∈ R^m, and A_21(x_k) ∈ R^{m×(n−m)} and A_22(x_k) ∈ R^{m×m} are state-dependent matrices; f(x_k) is a q-dimensional vector of nonlinear functions; B_f is a constant matrix of dimension m × q; and B_u ∈ R^{m×m} is a constant invertible matrix. Note that in system (63), the nonlinearities are in the range space of the input matrix B_u; that is, they affect the system dynamics in the same way as the input u_k does. Therefore, B_f = B_u K_{fu}, where K_{fu} is a constant matrix of dimension m × q.
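The eigenvalue-union argument of Theorem 8 can be spot-checked on a small hypothetical LTI example (all numerical values illustrative) by forming the closed-loop matrix of Eq. (59) directly; here the dynamics restricted to Sx = 0 have the single pole 0.8, and Λ contributes the pole 0.5:

```python
import numpy as np

# Hypothetical LTI example: n = 2, m = 1, manifold S x = 0, closed loop (59).
A = np.array([[1.0, 0.1],
              [0.0, 1.2]])
B = np.array([[0.0],
              [1.0]])
S = np.array([[2.0, 1.0]])   # on S x = 0: x2 = -2 x1, so x1' = 0.8 x1 (stable)
Lam = np.array([[0.5]])

SB_inv = np.linalg.inv(S @ B)
Acl = (np.eye(2) - B @ SB_inv @ S) @ A + B @ SB_inv @ Lam @ S   # Eq. (59)

eigs = np.abs(np.linalg.eigvals(Acl))
assert np.all(eigs < 1)      # union of the on-manifold pole and eig(Lam)
```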
Consider now the nonlinear system model (63) along with the performance index (3), where

σ_k = S x_k = [S_1  S_2][x_{1,k}; x_{2,k}]    (64)

For the system (63), the control law that minimizes the performance index (3) is derived from the first-order difference equation (14) and is given by

u_k = −(S_2 B_u)^{−1}(S Ã(x_k) x_k − λ S x_k) − (S_2 B_u)^{−1} S_2 B_f f(x_k)    (65)

Theorem 9 (closed-loop stability of dynamic systems with matched nonlinearities): If the matrix S is constructed so that det(S_2 B_u) ≠ 0, the system (63) restricted to the manifold {σ = 0} is asymptotically stable, and the pair (A_11, A_12) is controllable, then the system (63) driven by the control law (65) is asymptotically stable.

Proof: We begin by constructing an n × (n − m) real matrix

V = [V_11; V_21],    V_11 ≠ O,    V_21 = O    (66)

where V_11 ∈ R^{(n−m)×(n−m)} is invertible, so that [V  B̃_u]^{−1} exists. Let

[V  B̃_u]^{−1} = [V_11, O; O, B_u]^{−1} = [V_11^{−1}, O; O, B_u^{−1}] = [V^g; B̃_u^g]

Therefore,

[V^g; B̃_u^g][V  B̃_u] = [V^g V, V^g B̃_u; B̃_u^g V, B̃_u^g B̃_u] = [I_{n−m}, O; O, I_m]

and hence

V^g V = I_{n−m}  and  V^g B̃_u = O    (67)

Observe that the nonlinearities A_21(x_k) and A_22(x_k) are matched and, because of the structure of V^g, the matrix

V^g Ã(x_k) = [V_11^{−1} A_11   V_11^{−1} A_12]
is a constant matrix. It is easy to show that the pair (A_11, A_12) is controllable if and only if the pair (V^g Ã V, V^g Ã B̃_u) is controllable. Therefore, for any set of complex numbers {λ_1, λ_2, ..., λ_{n−m}} symmetric with respect to the real axis, we can always find a matrix F such that

{λ_1, λ_2, ..., λ_{n−m}} = eig(V^g Ã V − V^g Ã B̃_u F)

The matrix F can be determined by using any available pole-placement method to shift the poles of V^g Ã V − V^g Ã B̃_u F into the desired locations inside the open unit disc. We proceed by performing the following manipulations:

V^g Ã V − V^g Ã B̃_u F = V^g Ã (V − B̃_u F)    (68)

Let W^g = V^g and W = V − B̃_u F. Note that, by Eq. (67),

W^g W = V^g (V − B̃_u F) = V^g V − V^g B̃_u F = I_{n−m}    (69)

We then use the same arguments as in the proof of Theorem 8 and transform the closed-loop system dynamics into the new basis using the following state-space transformation:

z_k = [z_{1,k}; z_{2,k}] = [W^g; S] x_k    (70)

where z_{1,k} is composed of state variables that characterize the closed-loop trajectories of Eq. (63) confined to the manifold {σ = 0}, and z_{2,k} comprises state variables characterizing trajectories approaching the manifold {σ = 0}. The inverse transformation of Eq. (70) is

x_k = [W^g; S]^{−1} z_k = [W  B̃_u] z_k    (71)

The closed-loop system dynamics in the new coordinates are

[z_{1,k+1}; z_{2,k+1}] = [W^g; S][Ã(x_k) − B̃_u(S_2 B_u)^{−1}(S Ã(x_k) − λ S)][W  B̃_u] z_k + [W^g; S][B̃_f − B̃_u(S_2 B_u)^{−1} S_2 B_f] f(x_k)
                       = [W^g Ã(x_k) W, W^g Ã(x_k) B̃_u; O, λ] z_k    (72)

where the nonlinear term vanishes because the matching condition B̃_f = B̃_u K_fu implies B̃_f − B̃_u(S_2 B_u)^{−1} S_2 B_f = O. By construction, the eigenvalues of the constant matrix W^g Ã(x_k) W are located inside the open unit disc. The eigenvalues of λ are also located inside the open unit disc. Therefore, the closed-loop system (72) is asymptotically stable.

Using the results of the preceding analysis, we can deduce an algorithm for constructing a linear manifold {x: Sx = 0} so that the nonlinear dynamics constrained to this manifold are asymptotically stable.
This invariant manifold construction algorithm was first proposed in [43,44] for constructing a switching surface for sliding-mode controllers:
1) Select V so that the rank of [V  B] is n.
2) Determine the matrix F so that the eigenvalues of V^g A V − V^g A B F are at desirable locations.
3) Set W = V − BF.
4) Determine the matrix S as the last m rows of the inverse of [W  B].

VIII. Discrete-Time Synergetic Optimal Integral Tracking Control

In this section, we use the results from the previous sections to develop a discrete-time integral tracking controller for a class of nonlinear systems that can be transformed into the regular form [Eq. (63)]. Suppose that the output of the system (63) is

y_k = [C_1  C_2][x_{1,k}; x_{2,k}]    (73)

where y_k ∈ R^p. The control design objective is to construct a control strategy so that the output y_k tracks a piecewise constant reference vector r ∈ R^p. We now define a new state vector x_{r,k} and its respective state equation:

x_{r,k+1} = x_{r,k} + ε(r_k − y_k)    (74)

where ε > 0 is a design parameter. Our objective is to design a control strategy so that

lim_{k→∞} (x_{r,k+1} − x_{r,k}) = 0

Therefore, r_∞ = y_∞, and the performance index (3) is minimized, where σ_k is to be determined. To proceed, define the augmented state-variable vector as

x̃_k = [x_{r,k}; x_{1,k}; x_{2,k}] ∈ R^N

where N = n + p. The augmented system dynamics are governed by the state equation

[x_{r,k+1}; x_{1,k+1}; x_{2,k+1}] = [I, −εC_1, −εC_2; O, A_11, A_12; O, A_21(x_k), A_22(x_k)][x_{r,k}; x_{1,k}; x_{2,k}] + [εI; O; O] r_k + [O; O; B_f] f(x_k) + [O; O; B_u] u_k

which can be compactly expressed as

x̃_{k+1} = Ã(x_k) x̃_k + B̃_r r_k + B̃_f f(x_k) + B̃_u u_k    (75)

Note that for the system (75), the matching condition implies that there exists a constant matrix K_fu ∈ R^{m×q} such that B_f = B_u K_fu, as was demonstrated for Eq. (63). Therefore, B̃_f = B̃_u K_fu. Let σ_k be defined as

σ_k = [S_r  S_1  S_2] x̃_k = S x̃_k    (76)

We next employ the equations σ_{k+1} = λσ_k and σ_k = S x̃_k to construct the discrete-time synergetic optimal control law.
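Steps 1–4 of the manifold construction algorithm amount to a few lines of linear algebra. The following sketch runs them under our own assumptions (a third-order chain plant and pole locations chosen arbitrarily; SciPy's `place_poles` stands in for "any available method for pole placing"):

```python
import numpy as np
from scipy.signal import place_poles

# Illustrative plant: discretized triple integrator, n = 3 states, m = 1 input
A = np.array([[1.0, 0.1, 0.0],
              [0.0, 1.0, 0.1],
              [0.0, 0.0, 1.0]])
B = np.array([[0.0], [0.0], [0.1]])
n, m = B.shape

# Step 1: select V so that [V B] has rank n
V = np.eye(n)[:, :n - m]
assert np.linalg.matrix_rank(np.hstack([V, B])) == n
Vg = np.linalg.inv(np.hstack([V, B]))[:n - m, :]      # V^g V = I, V^g B = O

# Step 2: determine F placing the eigenvalues of V^g A V - V^g A B F
F = place_poles(Vg @ A @ V, Vg @ A @ B, [0.8, 0.7]).gain_matrix

# Step 3: W = V - B F
W = V - B @ F

# Step 4: S = last m rows of [W B]^{-1}
S = np.linalg.inv(np.hstack([W, B]))[n - m:, :]

# Check (Theorem 8): the synergetic closed loop has spectrum {0.8, 0.7} U {lam}
lam = 0.5
A_cl = A - B @ np.linalg.inv(S @ B) @ (S @ A - lam * S)
print(np.sort(np.abs(np.linalg.eigvals(A_cl))))       # ~ [0.5, 0.7, 0.8]
```

On the manifold {x: Sx = 0}, the reduced dynamics inherit exactly the placed poles, which is the property the algorithm is designed to deliver.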
Combining the preceding two equations and rearranging the resulting equation gives

S x̃_{k+1} = S Ã(x_k) x̃_k + S B̃_r r_k + S B̃_f f(x_k) + S B̃_u u_k = λσ_k

Solving the preceding equation for u_k, we obtain

u_k = −(S B̃_u)^{−1}[S Ã(x_k) x̃_k + S B̃_r r_k + S B̃_f f(x_k) − λσ_k]    (77)

Substituting the control law (77) into Eq. (75) leads to the following closed-loop system model:

x̃_{k+1} = [I − B̃_u(S B̃_u)^{−1}S] Ã(x_k) x̃_k + λ B̃_u(S B̃_u)^{−1} S x̃_k − B̃_u(S B̃_u)^{−1} S B̃_r r_k + B̃_r r_k    (78)

where the nonlinear term drops out because of the matching condition B̃_f = B̃_u K_fu.
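The control law (77) is a single linear solve once S and λ are fixed. The helper below is a sketch with hypothetical names (in a real design the matrices would come from the augmented model (75)); the randomly generated matrices are used purely to illustrate that S x̃_{k+1} = λσ_k holds by construction.

```python
import numpy as np

def tracking_control(A_t, B_r, B_f, B_u, S, lam, x_t, r, f_val):
    """Eq. (77): u_k = -(S Bu)^(-1) [S A(x) x~ + S Br r + S Bf f(x) - lam*sigma_k].
    A_t is the (possibly state-dependent) matrix A~(x_k) evaluated at x~_k."""
    SBu_inv = np.linalg.inv(S @ B_u)            # requires det(S B_u) != 0
    return -SBu_inv @ (S @ A_t @ x_t + S @ B_r @ r + S @ B_f @ f_val
                       - lam * (S @ x_t))

# Random augmented model (N = 4, m = 2, q = 1); matched: B_f = B_u K_fu
rng = np.random.default_rng(0)
N, m, q, lam = 4, 2, 1, 1.0 / 3.0
A_t = rng.standard_normal((N, N))
B_r = rng.standard_normal((N, 1))
B_u = rng.standard_normal((N, m))
B_f = B_u @ rng.standard_normal((m, q))
S = rng.standard_normal((m, N))

x_t = rng.standard_normal((N, 1))
r = rng.standard_normal((1, 1))
f_val = rng.standard_normal((q, 1))

u = tracking_control(A_t, B_r, B_f, B_u, S, lam, x_t, r, f_val)
x_next = A_t @ x_t + B_r @ r + B_f @ f_val + B_u @ u
print(np.allclose(S @ x_next, lam * (S @ x_t)))   # sigma_{k+1} = lam * sigma_k
```

Note that the geometric contraction of σ_k holds regardless of the reference or the nonlinearity; closed-loop stability additionally requires the manifold to be constructed as in Theorem 9.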
We now use the method advanced in the previous section to analyze the closed-loop tracking-system stability. In particular, we employ the same coordinate transformation (60) that maps x̃_k into the [z_{1,k}^T  z_{2,k}^T]^T coordinate system. However, the matrix dimensions are adjusted so that W^g ∈ R^{(N−m)×N} and S ∈ R^{m×N}. The inverse transformation that maps z_k into x̃_k is given by Eq. (61), where the matrix dimensions for the tracking problem are adjusted so that W ∈ R^{N×(N−m)} and B̃_u ∈ R^{N×m}. Applying the preceding transformation to Eq. (78) yields

[z_{1,k+1}; z_{2,k+1}] = [W^g Ã(x_k) W, W^g Ã(x_k) B̃_u; O, λ][z_{1,k}; z_{2,k}] + [W^g; S][I − B̃_u(S B̃_u)^{−1}S] B̃_r r_k
                       = [W^g Ã(x_k) W, W^g Ã(x_k) B̃_u; O, λ][z_{1,k}; z_{2,k}] + [W^g B̃_r; S B̃_r − S B̃_r] r_k
                       = [W^g Ã(x_k) W, W^g Ã(x_k) B̃_u; O, λ][z_{1,k}; z_{2,k}] + [W^g B̃_r; O] r_k    (79)

Because the plant model is in the regular form with matched nonlinearities, the matrix W^g Ã(x_k) W is a constant matrix. Observe that, in the absence of r_k, the dynamics of Eq. (79) are asymptotically stable because the magnitudes of all eigenvalues of λ are less than one and, by construction, the magnitudes of all eigenvalues of the constant matrix W^g Ã(x_k) W are also less than one. In the steady state, the following conditions hold:

z_{2,k} = 0    (80)

z_{1,k} = W^g Ã(x_k) W z_{1,k} + W^g B̃_r r_k    (81)

We will now show that the condition (81) is equivalent to the condition

y_k = r_k    (82)

We proceed as follows. Without loss of generality, let the matrices V_11 in Eq. (66) and F in Eqs. (68) and (69) be defined as

V_11 = [I_p, O; O, I_{n−m}]  and  F = [F_1  F_2]    (83)

The matrices V_11 and F defined in Eq. (83) yield the following matrices W and W^g:

W = [I_p, O; O, I_{n−m}; −B_u F_1, −B_u F_2]  and  W^g = [I_p, O_{p×(n−m)}, O_{p×m}; O_{(n−m)×p}, I_{n−m}, O_{(n−m)×m}]    (84)

From the matrix identity W W^g + B̃_u S = I_N, we obtain the following relations:

B_u F_1 = B_u S_r,  B_u F_2 = B_u S_1,  and  B_u S_2 = I_m    (85)

Using the matrices defined in Eq. (84) and taking into account the relation that follows from Eq. (70), which has the form z_1 = W^g x̃ = [x_r^T  x_1^T]^T, we can represent Eq. (81) as

[x_r; x_1] = [I_p + εC_2 B_u F_1, −ε(C_1 − C_2 B_u F_2); −A_12 B_u F_1, A_11 − A_12 B_u F_2][x_r; x_1] + [εI_p; O] r    (86)

where we removed the subscript k to indicate that we analyze the steady-state condition. In the steady state, σ = S x̃ = 0. From the first row of Eq.
(86), we obtain

−C_2 B_u F_1 x_r + (C_1 − C_2 B_u F_2) x_1 = r    (88)

while the steady-state condition σ = S x̃ = 0 reads

S_r x_r + S_1 x_1 + S_2 x_2 = 0    (87)

Substituting Eqs. (85) and (87) into Eq. (88) and performing some algebraic manipulations gives

−C_2 B_u (S_r x_r + S_1 x_1) + C_1 x_1 = r
C_2 B_u S_2 x_2 + C_1 x_1 = r
C_2 x_2 + C_1 x_1 = r
y = r

Hence, in the steady state, y_k = r_k. We summarize the results of this section in the following theorem.

Theorem 10 (discrete-time synergetic optimal integral tracking): If the following conditions are satisfied, then the system (75) driven by the control strategy (77) asymptotically tracks a constant reference signal r_k:
1) The matrix S in Eq. (76) is constructed so that det(S_2 B_u) ≠ 0.
2) The system (75) restricted to the manifold {x̃_k: σ = 0} is asymptotically stable.
3) The pair (A_11, A_12) is controllable.

IX. Example

We consider the problem of tracking control design for a nonlinear helicopter model using the proposed discrete-time synergetic control law. We employ the continuous-time nonlinear helicopter model developed by Pallet et al. [45–47]. In Fig. 1, we present the configuration of this helicopter control model.

A. Nonlinear Dynamics

The vertical dynamics of the helicopter system mounted on a stand are governed by the following set of differential equations:

ḧ = ω²(a_1 + a_2 θ_c)√(a_3 + a_4 θ_c) + a_5 ḣ + a_6 ḣ² + a_7    (89)

ω̇ = a_8 ω + a_10 ω² sin θ_c + a_9 ω² + a_11 u_1    (90)

Fig. 1 Helicopter system on a stand as modeled in [45–47].
θ̈_c = a_13 θ_c + a_14 ω² sin θ_c + a_15 θ̇_c + a_12 u_2    (91)

where h denotes the helicopter altitude in meters, ω denotes the rotor blade angular velocity in rad/s, and θ_c denotes the collective pitch angle of the rotor blades in radians. The helicopter plant uses two actuators, the rotary wing throttle u_1 and the collective wing blade pitch u_2. The parameters for the nonlinear model [Eqs. (89–91)] were determined in [45–47] and are given in Table 1. The system outputs for this problem are the helicopter altitude h and the collective pitch angle of the rotor blades θ_c. To achieve smooth control command actuations, integrators are inserted in front of the two input channels, as in [48,49], so that

u̇_1 = v_1  and  u̇_2 = v_2    (92)

The variables v_1 and v_2 also serve as auxiliary control variables in the control design process that follows. Adding integrators to the dynamic equations (89–91) increases the overall order of the helicopter system dynamics and yields the system dynamics in the so-called extended form [50,51].

B. Development of a Discrete-Time State-Space Helicopter Dynamic Model

Define the state vector z such that

z_1 = h
z_2 = ḣ
z_3 = ω²(a_1 + a_2 θ_c)√(a_3 + a_4 θ_c) + a_5 ḣ + a_6 ḣ² + a_7
z_4 = 2ω(a_1 + a_2 θ_c)√(a_3 + a_4 θ_c)(a_8 ω + a_10 ω² sin θ_c + a_9 ω² + a_11 u_1)
      + (a_5 + 2a_6 ḣ)(ω²(a_1 + a_2 θ_c)√(a_3 + a_4 θ_c) + a_5 ḣ + a_6 ḣ² + a_7)
      + ω² θ̇_c [a_2 (a_3 + a_4 θ_c)^{1/2} + (1/2) a_4 (a_1 + a_2 θ_c)(a_3 + a_4 θ_c)^{−1/2}]
z_5 = θ_c
z_6 = θ̇_c
z_7 = a_13 θ_c + a_14 ω² sin θ_c + a_15 θ̇_c + a_12 u_2    (93)

In terms of the elements of the state vector z, the physical variables of the helicopter dynamics are

h = z_1,  ḣ = z_2,  ω = ω̄(z),  θ_c = z_5,  θ̇_c = z_6
u_1 = [z_4 − (a_5 + 2a_6 z_2) z_3 − ω̄² z_6 (a_2 (a_3 + a_4 z_5)^{1/2} + (1/2) a_4 (a_1 + a_2 z_5)(a_3 + a_4 z_5)^{−1/2})] / [2 a_11 ω̄ (a_1 + a_2 z_5)√(a_3 + a_4 z_5)] − (a_8 ω̄ + a_10 ω̄² sin z_5 + a_9 ω̄²)/a_11
u_2 = (z_7 − a_13 z_5 − a_14 ω̄² sin z_5 − a_15 z_6)/a_12    (94)

where

ω̄(z) = √[(z_3 − a_5 z_2 − a_6 z_2² − a_7) / ((a_1 + a_2 z_5)√(a_3 + a_4 z_5))]

To proceed with the state-space description of the helicopter dynamics, we define the function ω̇ expressed in the z coordinates:

ω̇(z) = [z_4 − (a_5 + 2a_6 z_2) z_3 − ω̄² z_6 (a_2 (a_3 + a_4 z_5)^{1/2} + (1/2) a_4 (a_1 + a_2 z_5)(a_3 + a_4 z_5)^{−1/2})] / [2 ω̄ (a_1 + a_2 z_5)√(a_3 + a_4 z_5)]

The helicopter vertical motion dynamics in the z coordinates then have the following state-space form:

ż_1 = z_2,  ż_2 = z_3,  ż_3 = z_4,  ż_4 = A_4^z(z) + B_4^z(z) v_1
ż_5 = z_6,  ż_6 = z_7,  ż_7 = A_7^z(z) + B_7^z(z) v_2    (95)

where

B_4^z(z) = 2 a_11 ω̄ (a_1 + a_2 z_5)√(a_3 + a_4 z_5),    B_7^z(z) = a_12
A_7^z(z) = a_13 z_6 + 2 a_14 ω̄ ω̇(z) sin z_5 + a_14 ω̄² z_6 cos z_5 + a_15 z_7

and A_4^z(z) is the lengthy drift term obtained in the same manner by differentiating z_4 along Eqs. (89) and (90) and expressing the result in terms of ω̄(z) and ω̇(z).

The output equation for the state-space model (95) is

y = [h; θ_c] = [z_1; z_5] = Cz    (96)

The control design for the helicopter vertical dynamics employs an integral tracking approach to achieve zero steady-state tracking errors. For each of the output variables in Eq. (96), we introduce an additional state variable representing the integral of the tracking error. Consequently, the helicopter vertical dynamics design model (95) is augmented by the following state equations:
Table 1  Parameters for helicopter vertical dynamics

a_1 = 5.           a_6 = 0.1 s⁻¹            a_11 = 13.92 s⁻¹
a_2 = 1.           a_7 = −g = −7.86 m/s²    a_12 = 0.5436 s⁻²
a_3 = 2.           a_8 = 0.7 s⁻¹            a_13 = 8.0 s⁻²
a_4 = 1.           a_9 = 0.28               a_14 = 0.1
a_5 = 0.1 s⁻¹      a_10 = 0.28              a_15 = 65.0 s⁻¹

ż_{h_r} = z_1 − h_r,   ż_{θ_{c,r}} = z_5 − θ_{c,r}    (97)

We next employ the Euler discretization method to obtain a discrete-time model of the nonlinear augmented system (95–97). The continuous-time model is nonlinear; therefore, unlike in the linear case, we do not have available discretization formulas that yield an exact discrete-time model. We observe, however, that the system dynamics (95) and (97) are locally Lipschitz, which makes the discrete-time Euler approximate model close to the exact discrete-time model of the dynamics (95) and (97). For more details on the discretization of nonlinear models, we refer the reader to [52,53].

We now define the following discrete-time state variables and their corresponding continuous-time state variables:

[x_{1,k}, x_{2,k}, x_{3,k}, x_{4,k}, x_{5,k}, x_{6,k}, x_{7,k}, x_{8,k}, x_{9,k}]^T = [z_{h_r}(kτ), z_1(kτ), z_2(kτ), z_3(kτ), z_4(kτ), z_{θ_{c,r}}(kτ), z_5(kτ), z_6(kτ), z_7(kτ)]^T

where τ denotes the sampling time and k denotes the sampling sequence index. In addition, let

A_5^x(x_k) = A_4^z(z(kτ))|_{z=x_k},  B_5^x(x_k) = B_4^z(z(kτ))|_{z=x_k},  A_9^x(x_k) = A_7^z(z(kτ))|_{z=x_k},  B_9^x(x_k) = B_7^z(z(kτ))|_{z=x_k}

Note that the closed-loop dynamics (98) constitute a ninth-order discrete-time system driven by the auxiliary controls v_{1,k} and v_{2,k}. The output vector equation for the discrete-time system (98) is

y_k = [h_k; θ_{c,k}] = [x_{2,k}; x_{7,k}]    (99)

Careful examination of the system model (98) reveals that its nonlinearities satisfy the matching condition described in the preceding sections. Therefore, the method of discrete-time integral tracking from Sec. VIII can be employed.

C. Discrete-Time Tracking Control Design

To proceed with the controller design, we let σ_k = [σ_{1,k}, σ_{2,k}]^T = S x_k. The controllers v_{1,k} and v_{2,k} will be designed to optimize the performance index (100).
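The Euler approximate model used here is generic; as a minimal sketch (scalar toy dynamics of our own choosing, not the helicopter equations):

```python
import numpy as np

def euler_step(f, x, u, tau):
    # Euler approximate discrete-time model: x_{k+1} = x_k + tau * f(x_k, u_k)
    return x + tau * f(x, u)

# Toy locally Lipschitz dynamics x' = -x + sin(x) + u (illustration only)
f = lambda x, u: -x + np.sin(x) + u
tau, x = 0.01, 1.0
for k in range(100):                 # 1 s of the approximate model
    x = euler_step(f, x, 0.0, tau)
print(x)                             # decays toward the equilibrium x = 0
```

For locally Lipschitz dynamics, the local (one-step) error of this model is O(τ²), which is what makes controller design on the approximate discrete-time model meaningful.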
Let h_{r,k} and θ_{c,r,k} be the respective values of h_r and θ_{c,r} at time t = kτ. Using the previously defined state variables and references, the discrete-time Euler approximation model for the helicopter vertical dynamics is

x_{1,k+1} = x_{1,k} + τ(x_{2,k} − h_{r,k})
x_{2,k+1} = x_{2,k} + τ x_{3,k}
x_{3,k+1} = x_{3,k} + τ x_{4,k}
x_{4,k+1} = x_{4,k} + τ x_{5,k}
x_{5,k+1} = x_{5,k} + τ A_5^x(x_k) + τ B_5^x(x_k) v_{1,k}
x_{6,k+1} = x_{6,k} + τ(x_{7,k} − θ_{c,r,k})
x_{7,k+1} = x_{7,k} + τ x_{8,k}
x_{8,k+1} = x_{8,k} + τ x_{9,k}
x_{9,k+1} = x_{9,k} + τ A_9^x(x_k) + τ B_9^x(x_k) v_{2,k}    (98)

The performance index to be optimized is

J = Σ_{k=k_0}^{∞} [σ_k^T σ_k + (σ_{k+1} − σ_k)^T T^{−T} T^{−1} (σ_{k+1} − σ_k)]    (100)

where we select T_1 = T_2 = 0.98 to obtain a simple form of the design parameter λ given next. Substituting T = diag{T_1, T_2} into the expression for λ given in Proposition 1 yields λ = diag{1/3, 1/3}. The construction of both σ_{1,k} and σ_{2,k} ensures that the dynamics of Eq. (98) constrained to the manifold {σ_1 = 0, σ_2 = 0} are asymptotically stable. Using the analysis results of Sec. VIII and the invariant manifold construction algorithm given at the end of Sec. VII, we proceed with the constructive algorithm by selecting V so that the rank of [V  B^x] is 9. In this case, V^g can be directly determined from the first seven rows of the matrix [V  B^x]^{−1}. Next, we determine the matrix F so that the eigenvalues of V^g A V − V^g A B F are at desirable locations. These eigenvalues characterize the closed-loop dynamics restricted to the manifold. In this example, we construct σ_{1,k} and σ_{2,k} so that the system (98) restricted to
1) σ_1 = 0 has its discrete-time system poles at Λ_1 = {0.967, 0.9187 ± j0.18, 0.9285}, and
2) σ_2 = 0 has its discrete-time system poles at Λ_2 = {0.9492, 0.9126 ± j0.133}
Let F be a solution that yields the preceding eigenvalues. Furthermore, let F have the following structure:

F = [f_11, f_12, f_13, f_14, 0, 0, 0; 0, 0, 0, 0, f_25, f_26, f_27]    (101)

where f_ij is the ijth nonzero entry of the matrix F. Hence,

W = V − B^x F

and we can deduce from the last step of the constructive algorithm of Sec. VII that S is the last two rows of [W  B^x]^{−1}. Therefore, σ_{1,k} and σ_{2,k} have the following structure:

[σ_{1,k}; σ_{2,k}] = [s_11, s_12, s_13, s_14, 1, 0, 0, 0, 0; 0, 0, 0, 0, 0, s_25, s_26, s_27, 1] x_k

where the s_ij are nonzero entries. Using the matrix F from Eq. (101), we obtain

σ_{1,k} = x_{5,k} + 3.2728 x_{4,k} + 4.165 x_{3,k} + 2.199 x_{2,k} + 0.4481 x_{1,k}    (102)

σ_{2,k} = x_{9,k} + 2.2557 x_{8,k} + 1.6694 x_{7,k} + 0.3971 x_{6,k}    (103)

The control strategy for the discrete-time tracking problem of the discrete-time helicopter model is obtained by substituting Eqs. (102) and (103) into the control strategy (77).

D. Discussion on Tracking Performance

The helicopter is expected to change its altitude from h = 0.75 to 1.25 m. To increase the altitude of a hovering helicopter, one can increase the collective pitch (which increases the lift from each rotor blade while maintaining a constant rotational speed), increase the rotational speed (which increases the lift from each rotor blade while maintaining a constant collective angle), or use a combination of the two. To relieve the requirement for excessive rotor blade rotation, we simultaneously change the collective pitch angle from θ_c = 0.125 to 0.2 rad. Simulation results depicting the performance of the constructed discrete-time synergetic controller are presented in Figs. 2–4. We can see in Fig. 2 that the helicopter altitude follows the prescribed altitude reference.

Fig. 2 A plot of the helicopter altitude h versus time.

In Fig. 3, we show the time history of the collective blade pitch angle along with the control effort that drives the collective pitch dynamics. Observe that the collective pitch dynamics follow the prescribed reference collective blade pitch.
The control strategies developed using the discrete-time synergetic control method also prevent excessive main rotor rotation: as shown in Fig. 4, the rotor rotation does not excessively increase as the helicopter altitude rises. Note that, as a result of using the proposed discrete-time control strategies, the helicopter rotor angular velocity decreases as the higher altitude is commanded. Also shown in that figure is the control effort largely responsible for the profile of the rotor blade angular velocity.

Fig. 3 Helicopter rotor blade collective pitch control: pitch angle θ_c and pitch control effort u_2.

Fig. 4 A plot of the helicopter rotor blade angular velocity ω versus time and the control effort u_1.

X. Conclusions

We proposed novel discrete-time control strategies for solving optimal control problems for a class of nonlinear dynamic systems. The optimal control problem is formulated using a special
More informationECE7850 Lecture 7. Discrete Time Optimal Control and Dynamic Programming
ECE7850 Lecture 7 Discrete Time Optimal Control and Dynamic Programming Discrete Time Optimal control Problems Short Introduction to Dynamic Programming Connection to Stabilization Problems 1 DT nonlinear
More informationAMME3500: System Dynamics & Control
Stefan B. Williams May, 211 AMME35: System Dynamics & Control Assignment 4 Note: This assignment contributes 15% towards your final mark. This assignment is due at 4pm on Monday, May 3 th during Week 13
More informationFundamentals of Linear Algebra. Marcel B. Finan Arkansas Tech University c All Rights Reserved
Fundamentals of Linear Algebra Marcel B. Finan Arkansas Tech University c All Rights Reserved 2 PREFACE Linear algebra has evolved as a branch of mathematics with wide range of applications to the natural
More informationOptimal control and estimation
Automatic Control 2 Optimal control and estimation Prof. Alberto Bemporad University of Trento Academic year 2010-2011 Prof. Alberto Bemporad (University of Trento) Automatic Control 2 Academic year 2010-2011
More informationEE363 homework 7 solutions
EE363 Prof. S. Boyd EE363 homework 7 solutions 1. Gain margin for a linear quadratic regulator. Let K be the optimal state feedback gain for the LQR problem with system ẋ = Ax + Bu, state cost matrix Q,
More informationMATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2
MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS SYSTEMS OF EQUATIONS AND MATRICES Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a
More informationEL 625 Lecture 10. Pole Placement and Observer Design. ẋ = Ax (1)
EL 625 Lecture 0 EL 625 Lecture 0 Pole Placement and Observer Design Pole Placement Consider the system ẋ Ax () The solution to this system is x(t) e At x(0) (2) If the eigenvalues of A all lie in the
More informationThe Implicit Function Theorem with Applications in Dynamics and Control
48th AIAA Aerospace Sciences Meeting Including the New Horizons Forum and Aerospace Exposition 4-7 January 2010, Orlando, Florida AIAA 2010-174 The Implicit Function Theorem with Applications in Dynamics
More informationMA 1B PRACTICAL - HOMEWORK SET 3 SOLUTIONS. Solution. (d) We have matrix form Ax = b and vector equation 4
MA B PRACTICAL - HOMEWORK SET SOLUTIONS (Reading) ( pts)[ch, Problem (d), (e)] Solution (d) We have matrix form Ax = b and vector equation 4 i= x iv i = b, where v i is the ith column of A, and 4 A = 8
More informationLaplace Transform Analysis of Signals and Systems
Laplace Transform Analysis of Signals and Systems Transfer Functions Transfer functions of CT systems can be found from analysis of Differential Equations Block Diagrams Circuit Diagrams 5/10/04 M. J.
More informationExtra Problems for Math 2050 Linear Algebra I
Extra Problems for Math 5 Linear Algebra I Find the vector AB and illustrate with a picture if A = (,) and B = (,4) Find B, given A = (,4) and [ AB = A = (,4) and [ AB = 8 If possible, express x = 7 as
More informationChapter 5. Pontryagin s Minimum Principle (Constrained OCP)
Chapter 5 Pontryagin s Minimum Principle (Constrained OCP) 1 Pontryagin s Minimum Principle Plant: (5-1) u () t U PI: (5-2) Boundary condition: The goal is to find Optimal Control. 2 Pontryagin s Minimum
More informationChapter 2 Optimal Control Problem
Chapter 2 Optimal Control Problem Optimal control of any process can be achieved either in open or closed loop. In the following two chapters we concentrate mainly on the first class. The first chapter
More information8. Diagonalization.
8. Diagonalization 8.1. Matrix Representations of Linear Transformations Matrix of A Linear Operator with Respect to A Basis We know that every linear transformation T: R n R m has an associated standard
More informationMulti-Robotic Systems
CHAPTER 9 Multi-Robotic Systems The topic of multi-robotic systems is quite popular now. It is believed that such systems can have the following benefits: Improved performance ( winning by numbers ) Distributed
More information6.241 Dynamic Systems and Control
6.241 Dynamic Systems and Control Lecture 24: H2 Synthesis Emilio Frazzoli Aeronautics and Astronautics Massachusetts Institute of Technology May 4, 2011 E. Frazzoli (MIT) Lecture 24: H 2 Synthesis May
More informationInverses and Elementary Matrices
Inverses and Elementary Matrices 1-12-2013 Matrix inversion gives a method for solving some systems of equations Suppose a 11 x 1 +a 12 x 2 + +a 1n x n = b 1 a 21 x 1 +a 22 x 2 + +a 2n x n = b 2 a n1 x
More informationUNCERTAINTY MODELING VIA FREQUENCY DOMAIN MODEL VALIDATION
AIAA 99-3959 UNCERTAINTY MODELING VIA FREQUENCY DOMAIN MODEL VALIDATION Martin R. Waszak, * NASA Langley Research Center, Hampton, Virginia Dominick Andrisani II, Purdue University, West Lafayette, Indiana
More information1 Last time: least-squares problems
MATH Linear algebra (Fall 07) Lecture Last time: least-squares problems Definition. If A is an m n matrix and b R m, then a least-squares solution to the linear system Ax = b is a vector x R n such that
More informationVariable-gain output feedback control
7. Variable-gain output feedback control 7.1. Introduction PUC-Rio - Certificação Digital Nº 611865/CA In designing control laws, the usual first step is to describe the plant at a given operating point
More informationTrajectory-tracking control of a planar 3-RRR parallel manipulator
Trajectory-tracking control of a planar 3-RRR parallel manipulator Chaman Nasa and Sandipan Bandyopadhyay Department of Engineering Design Indian Institute of Technology Madras Chennai, India Abstract
More informationControl, Stabilization and Numerics for Partial Differential Equations
Paris-Sud, Orsay, December 06 Control, Stabilization and Numerics for Partial Differential Equations Enrique Zuazua Universidad Autónoma 28049 Madrid, Spain enrique.zuazua@uam.es http://www.uam.es/enrique.zuazua
More informationw T 1 w T 2. w T n 0 if i j 1 if i = j
Lyapunov Operator Let A F n n be given, and define a linear operator L A : C n n C n n as L A (X) := A X + XA Suppose A is diagonalizable (what follows can be generalized even if this is not possible -
More informationIntroduction. Vectors and Matrices. Vectors [1] Vectors [2]
Introduction Vectors and Matrices Dr. TGI Fernando 1 2 Data is frequently arranged in arrays, that is, sets whose elements are indexed by one or more subscripts. Vector - one dimensional array Matrix -
More informationLecture 2: Discrete-time Linear Quadratic Optimal Control
ME 33, U Berkeley, Spring 04 Xu hen Lecture : Discrete-time Linear Quadratic Optimal ontrol Big picture Example onvergence of finite-time LQ solutions Big picture previously: dynamic programming and finite-horizon
More informationIn: Proc. BENELEARN-98, 8th Belgian-Dutch Conference on Machine Learning, pp 9-46, 998 Linear Quadratic Regulation using Reinforcement Learning Stephan ten Hagen? and Ben Krose Department of Mathematics,
More informationIEOR 265 Lecture 14 (Robust) Linear Tube MPC
IEOR 265 Lecture 14 (Robust) Linear Tube MPC 1 LTI System with Uncertainty Suppose we have an LTI system in discrete time with disturbance: x n+1 = Ax n + Bu n + d n, where d n W for a bounded polytope
More informationStabilization and Passivity-Based Control
DISC Systems and Control Theory of Nonlinear Systems, 2010 1 Stabilization and Passivity-Based Control Lecture 8 Nonlinear Dynamical Control Systems, Chapter 10, plus handout from R. Sepulchre, Constructive
More informationEE363 homework 2 solutions
EE363 Prof. S. Boyd EE363 homework 2 solutions. Derivative of matrix inverse. Suppose that X : R R n n, and that X(t is invertible. Show that ( d d dt X(t = X(t dt X(t X(t. Hint: differentiate X(tX(t =
More informationLinear Algebra. Min Yan
Linear Algebra Min Yan January 2, 2018 2 Contents 1 Vector Space 7 1.1 Definition................................. 7 1.1.1 Axioms of Vector Space..................... 7 1.1.2 Consequence of Axiom......................
More informationCHAPTER 1. Introduction
CHAPTER 1 Introduction Linear geometric control theory was initiated in the beginning of the 1970 s, see for example, [1, 7]. A good summary of the subject is the book by Wonham [17]. The term geometric
More informationMath 240 Calculus III
The Calculus III Summer 2015, Session II Wednesday, July 8, 2015 Agenda 1. of the determinant 2. determinants 3. of determinants What is the determinant? Yesterday: Ax = b has a unique solution when A
More informationFundamentals of Engineering Analysis (650163)
Philadelphia University Faculty of Engineering Communications and Electronics Engineering Fundamentals of Engineering Analysis (6563) Part Dr. Omar R Daoud Matrices: Introduction DEFINITION A matrix is
More informationHamilton-Jacobi-Bellman Equation Feb 25, 2008
Hamilton-Jacobi-Bellman Equation Feb 25, 2008 What is it? The Hamilton-Jacobi-Bellman (HJB) equation is the continuous-time analog to the discrete deterministic dynamic programming algorithm Discrete VS
More informationELEMENTARY LINEAR ALGEBRA
ELEMENTARY LINEAR ALGEBRA K. R. MATTHEWS DEPARTMENT OF MATHEMATICS UNIVERSITY OF QUEENSLAND Corrected Version, 7th April 013 Comments to the author at keithmatt@gmail.com Chapter 1 LINEAR EQUATIONS 1.1
More informationCoordinated Tracking Control of Multiple Laboratory Helicopters: Centralized and De-Centralized Design Approaches
Coordinated Tracking Control of Multiple Laboratory Helicopters: Centralized and De-Centralized Design Approaches Hugh H. T. Liu University of Toronto, Toronto, Ontario, M3H 5T6, Canada Sebastian Nowotny
More informationOptimal Control. McGill COMP 765 Oct 3 rd, 2017
Optimal Control McGill COMP 765 Oct 3 rd, 2017 Classical Control Quiz Question 1: Can a PID controller be used to balance an inverted pendulum: A) That starts upright? B) That must be swung-up (perhaps
More informationStability of Feedback Solutions for Infinite Horizon Noncooperative Differential Games
Stability of Feedback Solutions for Infinite Horizon Noncooperative Differential Games Alberto Bressan ) and Khai T. Nguyen ) *) Department of Mathematics, Penn State University **) Department of Mathematics,
More informationChapter 1: Systems of linear equations and matrices. Section 1.1: Introduction to systems of linear equations
Chapter 1: Systems of linear equations and matrices Section 1.1: Introduction to systems of linear equations Definition: A linear equation in n variables can be expressed in the form a 1 x 1 + a 2 x 2
More informationDynamical Systems & Lyapunov Stability
Dynamical Systems & Lyapunov Stability Harry G. Kwatny Department of Mechanical Engineering & Mechanics Drexel University Outline Ordinary Differential Equations Existence & uniqueness Continuous dependence
More informationA Model-Free Control System Based on the Sliding Mode Control Method with Applications to Multi-Input-Multi-Output Systems
Proceedings of the 4 th International Conference of Control, Dynamic Systems, and Robotics (CDSR'17) Toronto, Canada August 21 23, 2017 Paper No. 119 DOI: 10.11159/cdsr17.119 A Model-Free Control System
More informationA matrix over a field F is a rectangular array of elements from F. The symbol
Chapter MATRICES Matrix arithmetic A matrix over a field F is a rectangular array of elements from F The symbol M m n (F ) denotes the collection of all m n matrices over F Matrices will usually be denoted
More information2 Two-Point Boundary Value Problems
2 Two-Point Boundary Value Problems Another fundamental equation, in addition to the heat eq. and the wave eq., is Poisson s equation: n j=1 2 u x 2 j The unknown is the function u = u(x 1, x 2,..., x
More informationSteady State Kalman Filter
Steady State Kalman Filter Infinite Horizon LQ Control: ẋ = Ax + Bu R positive definite, Q = Q T 2Q 1 2. (A, B) stabilizable, (A, Q 1 2) detectable. Solve for the positive (semi-) definite P in the ARE:
More informationBasics of Calculus and Algebra
Monika Department of Economics ISCTE-IUL September 2012 Basics of linear algebra Real valued Functions Differential Calculus Integral Calculus Optimization Introduction I A matrix is a rectangular array
More information