Robust and Adaptive Control with Aerospace Applications


SPRINGER ADVANCED TEXTBOOKS IN CONTROL AND SIGNAL PROCESSING

Robust and Adaptive Control with Aerospace Applications: The Solutions Manual

Eugene Lavretsky, Ph.D., Kevin A. Wise, Ph.D.

The solutions manual covers all theoretical problems from the textbook and discusses several simulation-oriented exercises.

Chapter 1

Exercise 1.2. Detailed derivations of the aircraft dynamic modes can be found in many standard flight dynamics textbooks, such as [1]. Further insight can be obtained by approximating these modes and their dynamics using the time-scale separation properties of the equations that govern flight dynamics [3]. The time-scale separation concept will now be illustrated. Set the control inputs to zero and reorder the states in (1.7) as x = (α, q, V, θ)^T. Then the longitudinal dynamics can be partitioned into two subsystems,

  d/dt [α; q] = A_11 [α; q] + A_12 [V; θ],
  d/dt [V; θ] = A_21 [α; q] + A_22 [V; θ],

where A_11 = [Z_α/V, 1 + Z_q/V; M_α, M_q] contains the angle-of-attack and pitch-rate derivatives, and A_12, A_21, A_22 collect the speed and gravity terms (Z_V, M_V, X_α, X_V, g sin θ, g cos θ). For most aircraft, the airspeed is much greater than the vertical-force and moment sensitivity derivatives due to speed, so Z_V/V and M_V are negligible; also g sin θ / V is small, and therefore A_12 is approximately zero. These assumptions are at the core of the time-scale separation principle: the aircraft fast dynamics (the short period) are almost independent of, and decoupled from, the vehicle slow dynamics (the phugoid). With A_12 = 0, the short-period dynamics are

  d/dt [α; q] = [Z_α/V, 1 + Z_q/V; M_α, M_q] [α; q].

The short-period characteristic equation is

  det( sI - [Z_α/V, 1 + Z_q/V; M_α, M_q] ) = s^2 - ( Z_α/V + M_q ) s + ( (Z_α/V) M_q - M_α (1 + Z_q/V) ) = 0.

Comparing this to the second-order polynomial s^2 + 2 ζ_sp ω_sp s + ω_sp^2 gives the short-period natural frequency and damping ratio,

  ω_sp = sqrt( (Z_α/V) M_q - M_α (1 + Z_q/V) ),   ζ_sp = -( Z_α/V + M_q ) / ( 2 ω_sp ).

For an open-loop stable aircraft, Z_α < 0, M_α < 0, M_q < 0, and Z_q/V is small, which allows further simplification of the vehicle short-period mode:

  ω_sp ≈ sqrt( (Z_α/V) M_q - M_α ),   ζ_sp ≈ -( Z_α/V + M_q ) / ( 2 ω_sp ).

Approximations for the phugoid dynamics can be derived by setting the short-period dynamics to zero and computing the remaining modes. This is often referred to as residualization of the fast dynamics: solving 0 = A_11 [α; q] + A_12 [V; θ] for the fast states and substituting into the slow equations gives

  d/dt [V; θ] = ( A_22 - A_21 A_11^-1 A_12 ) [V; θ],

which is the phugoid dynamics approximation, i.e., the speed and pitch-attitude equations driven by X_V, Z_V, and the gravity terms.

Explicit derivations for the phugoid mode natural frequency and damping ratio can be found in [3].

Exercise 1.3. The approach of Exercise 1.2, based on the time-scale separation principle, can also be applied to the lateral-directional dynamics (1.9) to show that the fast motion is comprised of the vehicle roll subsidence and Dutch roll modes. The slow mode (called the spiral) has its root at the origin. The reader is referred to [1], [2], and [3] for details.

Chapter 2

Exercise 2.1. Same as Example 2.1.

Exercise 2.2. The scalar system is controllable. Use (2.38). With the scalar problem data A, B, Q, R, the Riccati differential equation is

  -dP/dt = P A + A^T P + Q - P B R^-1 B^T P,

with boundary condition P(t_f) = 0. For scalar data this is a scalar Riccati ordinary differential equation in P(t); separating variables and using an integral table gives a closed-form solution for P(t) in terms of exponentials of (t_f - t). The closed-loop block diagram implements the resulting time-varying state feedback u = -R^-1 B^T P(t) x.
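For readers who prefer a numerical check of Exercise 2.2, the scalar Riccati differential equation can be integrated backward in time. The following is a minimal MATLAB sketch; the values A = B = Q = R = 1 and the horizon tf = 5 are hypothetical illustration values, not the textbook's problem data.

% Integrate -dP/dt = 2*A*P - (B^2/R)*P^2 + Q backward from P(tf) = 0
A = 1; B = 1; Q = 1; R = 1; tf = 5;        % hypothetical scalar data
% Use the reversed time variable s = tf - t, so that dP/ds = -dP/dt
riccati = @(s, P) 2*A*P - (B^2/R)*P^2 + Q;
[s, P] = ode45(riccati, [0 tf], 0);
t = tf - s;                                 % map back to real time
gain = (B/R)*P;                             % time-varying feedback u = -R^-1*B*P(t)*x
plot(t, P); grid on; xlabel('t'); ylabel('P(t)');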

Exercise 2.3. For this problem we use the algebraic Riccati equation (2.45) to solve for the constant feedback gain matrix. From the problem setup, the matrices A, B, Q, and R are given. Check controllability (2.5): the controllability matrix P_c = [B  AB] has full rank, so the system is controllable. Check observability of the unstable modes through the penalty matrix using (2.53) and (2.54): factor Q into its square root, Q = Q_half^T Q_half (2.53), and, using (2.54), verify that the matrix [Q_half; Q_half A] has full rank. Substituting the problem data into the ARE (2.45),

  0 = P A + A^T P + Q - P B R^-1 B^T P,

and parameterizing the symmetric solution as P = [l, m; m, n], yields three scalar equations in l, m, and n. Solving them and selecting the positive-definite solution gives P, and therefore the feedback

  u = -R^-1 B^T P x,

which for this problem reduces to u = -[m  n] x.

Exercise 2.4. Use Matlab.

Exercise 2.5. For this problem we again use the algebraic Riccati equation (2.45) to solve for the constant feedback gain matrix. From the problem setup, the matrices A, B, Q, and R are given. Check controllability (2.5): the controllability matrix P_c = [B  AB] has full rank.

Check observability of the unstable modes through the penalty matrix using (2.53) and (2.54): factor Q into its square root, Q = Q_half^T Q_half, and verify that [Q_half; Q_half A] has full rank. Substituting the problem data into the ARE,

  0 = P A + A^T P + Q - P B R^-1 B^T P,

with P = [l, m; m, n], again yields three scalar equations; solving them for l, m, and n and keeping the positive-definite solution gives P, the LQR gain K_lqr = R^-1 B^T P = [m  n], and the control u = -K_lqr x. The closed-loop system matrix is A_cl = A - B K_lqr, whose eigenvalues form a stable complex-conjugate pair.

Exercise 2.6. The cost is J = ∫ ( q x^T Q x + u^T R u ) dt. We answer this problem using (2.85). When well posed, the LQR problem guarantees a stable closed-loop system. When q → 0, the overall penalty on the states goes to zero. From (2.85),

  Δ_cl(s) Δ_cl(-s) = Δ_ol(s) Δ_ol(-s),

so the closed-loop poles will be the stable roots of Δ_ol(s) Δ_ol(-s), where Δ_ol(s) = det(sI - A) is the open-loop characteristic polynomial. If any open-loop poles of Δ_ol(s) are unstable, their mirror images in Δ_ol(-s) are stable, and the closed-loop poles move to those stable locations. When q → ∞, the overall penalty on the control goes to zero relative to the state penalty. This creates a high-gain situation. From (2.89), some of the roots will go to infinity along stable asymptotes.

Those that stay finite will approach the stable transmission zeros of the transfer function matrix H(s). These finite zeros, shaped by the Q matrix, control the dynamic response of the optimal regulator.

Exercise 2.7. 1) The HJB equation is

  0 = min_u H( x, u, ∂J*/∂x, t ) + ∂J*/∂t.

Substituting the system and cost data into the Hamiltonian H, setting ∂H/∂u = 0, and solving for the minimizing control u* expresses u* in terms of ∂J*/∂x. Substituting u* back into H forms H*, and the resulting HJB partial differential equation for J*(x, t) carries the boundary condition J*(x, T) = q ( x(T) - r )^2.

2) Try the quadratic candidate J*(x, t) = P(t) x^2 + 2 g(t) x + w(t), so that ∂J*/∂x = 2 P x + 2 g and ∂J*/∂t = P' x^2 + 2 g' x + w'. Substitute these back into the HJB equation and group terms in x^2, x, and 1. For the equation to hold for all x, each grouped coefficient must vanish, which gives three ordinary differential equations for P(t), g(t), and w(t). The boundary conditions follow by expanding J*(x, T) = q ( x(T) - r )^2 and equating coefficients:

  P(T) = q,   g(T) = -q r,   w(T) = q r^2.
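The ARE-based workflow of Exercises 2.3 and 2.5 (controllability check, factoring the state penalty, solving the Riccati equation, forming the gain) can also be checked numerically. The following is a minimal MATLAB sketch; the matrices A, B, Q, R below are hypothetical illustration values, not the textbook's problem data.

% Hypothetical second-order example of the ARE design steps
A = [0 1; 1 0];  B = [0; 1];  Q = diag([1 0]);  R = 1;
% Controllability check: Pc = [B  A*B] must have full rank
Pc = [B, A*B];
assert(rank(Pc) == size(A,1), 'pair (A,B) is not controllable');
% Observability of the penalized states: factor Q and check (A, Qhalf)
Qhalf = sqrtm(Q);
rank_obs = rank([Qhalf; Qhalf*A]);      % full rank => unstable modes are penalized
% Solve the ARE and form the LQR gain K = R^-1*B'*P
[K, P] = lqr(A, B, Q, R);
Acl = A - B*K;
disp(eig(Acl))                           % closed-loop poles in the open left half plane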

Chapter 3

Exercise 3.1. a) The suspended-ball plant model is ẍ = x + u, i.e., ẋ = A x + B u with A = [0, 1; 1, 0] and B = [0; 1]. The open-loop eigenvalues follow from det(sI - A) = s^2 - 1 = 0, i.e., s = ±1. Since the model is in controllable canonic form, we see that it is controllable. The state feedback control law is u = -K x, where the gain matrix K = [k_1  k_2] has two elements. The closed-loop system using the state feedback control is A_cl = A - B K, and

  det( sI - A_cl ) = det [ s, -1; k_1 - 1, s + k_2 ] = s^2 + k_2 s + ( k_1 - 1 ).

The poles are to be placed at -1/2 and -1, which gives the desired closed-loop characteristic polynomial

  Δ_cl(s) = ( s + 1/2 )( s + 1 ) = s^2 + (3/2) s + 1/2.

Equating the coefficients with the gains gives k_2 = 3/2 and k_1 - 1 = 1/2, so K = [3/2  3/2].

b) Here we need to design a full-order observer that has poles at -4 and -5. The output is defined to be y = x_1, so C = [1  0]. An observability test shows the system is observable with this measurement: the observability matrix [C; CA] has full rank. A full-order observer has the form

  d/dt x_hat = A x_hat + B u + K_o ( y - y_hat ) = ( A - K_o C ) x_hat + B u + K_o y,

and the poles of the observer, i.e., the eigenvalues of A - K_o C, are to be placed at -4 and -5.

ko k o AK oc ko k o. sk o det si A KoC det s ko s k o ko s s k sk s s s s K 9 o 4 5 9, o o Using observer feedback for the control u K ˆ, yields ˆ ABK K C ˆ K y, u K ˆ o 3 3 where K, 9 K o. Connecting the plant and controller to form the etended closed loop system, gives Substituting the matrices to form A BK ˆ KC ˆ o ABKKC o Acl o A cl, results in detsi Acl s s s 4s 5. c) For the reduced order observer, we will implement a Luenberger design that has C the form q Gq GyG3u. Choose the matri C ' to make the matri C ' C nonsingular. Define L L C '. hen the reduced order observer matrices and control law are given by: G C ' AL K CAL, G C ' AL K C ' AL K CAL K CAL K G C' BK CB, ˆ L q L L K y r r r r r 3 r r

where K_r is the reduced-order observer gain matrix. First we need C': with C = [1  0], choose C' = [0  1], so that [C; C'] = I, and L_1 = [1; 0], L_2 = [0; 1]. Substituting into the observer matrices yields

  G_1 = ( C' - K_r C ) A L_2 = -K_r,
  G_2 = ( C' - K_r C ) A ( L_1 + L_2 K_r ) = 1 - K_r^2,
  G_3 = ( C' - K_r C ) B = 1.

The problem asks to place the observer pole at -6. Thus G_1 = -6, which gives K_r = 6 and G_2 = -35. So

  q' = -6 q - 35 y + u.

To form the state estimate x_hat we use x_hat = L_2 q + ( L_1 + L_2 K_r ) y = [ y; q + 6 y ]. The control is u = -K x_hat, where K is the gain computed in part a):

  u = -K x_hat = -[3/2  3/2] [ y; q + 6 y ] = -(3/2) q - (21/2) y.

To form the controller as a single transfer function, take the Laplace transform of the observer equation and the control law and combine the expressions. The block diagram shows the plant u → B → (sI - A)^-1 → C → y with this controller closing the loop from y back to u.

Exercise 3.2. Use file Chapter3_Example3p5.m to solve this problem.

Exercise 3.3. Use file Chapter3_Example3p5.m to solve this problem.

Exercise 3.4. Use file Chapter3_Example3p5.m to solve this problem.
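The gains in Exercise 3.1 can be reproduced with MATLAB's pole-placement routine. The following is a minimal sketch using the model reconstructed above (A = [0 1; 1 0], B = [0; 1], C = [1 0]) and the stated pole locations; it is an illustration, not one of the original solution files.

% Pole placement for the suspended-ball example of Exercise 3.1
A = [0 1; 1 0];  B = [0; 1];  C = [1 0];
K  = place(A, B, [-0.5 -1]);        % state feedback gains, part a)
Ko = place(A', C', [-4 -5]);        % dual problem for the observer, part b)
Ko = Ko.';                          % observer gain column vector
% Observer-based compensator and extended closed-loop system
Acomp = A - B*K - Ko*C;             % xhat' = Acomp*xhat + Ko*y, u = -K*xhat
Acl   = [A, -B*K; Ko*C, Acomp];     % states [x; xhat]
disp(eig(Acl))                      % regulator poles together with observer poles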

Chapter 4

Exercise 4.1. Use file Chapter4_Example4p5.m to solve this problem.

Exercise 4.2. Use file Chapter4_Example4p5.m to solve this problem.

Exercise 4.3. Use file Chapter4_Example4p5.m to solve this problem.

Chapter 5

Exercise 5.1. The following code will complete the problem:

% Set up frequency vector
w = logspace(-3,3,500);
% Loop through each frequency and compute det[I+KH].
% The controller is a constant matrix, so keep it outside the loop.
K = [ 5... -. ];
x_mnt = zeros(size(w));            % pre-allocate for speed
x_rd  = zeros(size(w));
x_sr  = zeros(size(w));
% The plant is in transfer function form.
% Evaluate the plant at each frequency.
for ii = 1:numel(w),
    s   = sqrt(-1)*w(ii);
    d_h = s*(s+6.)*(s+3.3);
    H   = [ 3.4/s  -78/d_h ; .5/s  -6.6/d_h ];
    L   = K*H;
    rd  = eye(2) + L;
    sr  = eye(2) + inv(L);
    x_mnt(ii) = det(rd);
    x_rd(ii)  = min(svd(rd));      % this is sigma_min(I+L)
    x_sr(ii)  = min(svd(sr));      % this is sigma_min(I+inv(L))
end
% Compute the minimum over frequency of the singular values
rd_min = min(x_rd);
sr_min = min(x_sr);

% For part b) we need to compute the singular value stability margins
neg_gm = 20*log10( min([ 1/(1+rd_min)  (1-sr_min) ]) );        % in dB
pos_gm = 20*log10( max([ 1/(1-rd_min)  (1+sr_min) ]) );        % in dB
pm = 180*( min([ 2*asin(rd_min/2)  2*asin(sr_min/2) ]) )/pi;   % in deg

To plot the multivariable Nyquist results, plot the real and imaginary components of x_mnt and count encirclements.

[Figure: Problem 5.1a, multivariable Nyquist plot of det(I + KH), imaginary versus real part.] We see no encirclements.

b) Plot the minimum singular values of (I + L) and (I + inv(L)) versus frequency, converting to dB.

[Figure: Problem 5.1b, minimum singular value of (I + L) in dB versus frequency (rps); the minimum over frequency is .38867.]

[Figure: Problem 5.1b, minimum singular value of (I + inv(L)) in dB versus frequency (rps); the minimum over frequency is .4566.]

The singular value stability margins are computed using Eqs. (5.53) and (5.54), with min sigma(I + L) = .3887 and min sigma(I + inv(L)) = .457. The singular value stability margins are GM = [-5.348, 4.746] dB and PM = +/- 44.4 deg.

Exercise 5.2. a) Neglect r. From the figure, write the loop equations and form z = M(s) w. Eliminating the internal signals around the loop gives M(s) as the negative of the closed-loop transfer function at the plant-input loop break point,

  M(s) = -K/(s + K).

The state-space model is not unique. Here we choose A_M = -K, B_M = 1, C_M = -K, D_M = 0, so that

  M(s) = C_M ( sI - A_M )^-1 B_M + D_M = -K/(s + K).

b) For stability we need the pole of this transfer function to be stable; thus K > 0.

c) From equation (5.9) we plot the magnitude of M:

[Figure: Problem 5.2c, |M(jω)| magnitude in dB versus frequency (rps).]
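A minimal MATLAB sketch that reproduces the plot of part c); the value K = 2 is a hypothetical choice for illustration only.

% Magnitude of M(s) = -K/(s + K) over frequency
K = 2;                                   % hypothetical gain, must satisfy K > 0
w = logspace(-2, 3, 400);
M = -K ./ (1j*w + K);                    % frequency response M(jw)
semilogx(w, 20*log10(abs(M))); grid on;
xlabel('Frequency (rps)'); ylabel('|M(j\omega)| (dB)');
% |M(jw)| <= 1 for all w, so the small gain condition of part d) holds
% whenever the uncertainty satisfies ||Delta|| < 1.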

d) The small gain theorem (5.9) requires that the loop of M and the uncertainty Δ satisfy ||Δ|| ||M|| < 1. The above plot shows that |M(jω)| is bounded by one, so as long as ||Δ|| < 1 this will be satisfied. If we compute the closed-loop transfer function from command to output in problem figure (i), we have

  K ( 1 + Δ ) / ( s + K ( 1 + Δ ) ).

As long as |Δ| < 1, the uncertainty cannot create a negative coefficient on the gain K(1 + Δ) in the denominator.

Exercise 5.3. a) Populate the problem data into the plant (5.8) and controller (5.8). Then use (2.4) to form the closed-loop system.

% Pitch Dynamics
Vm = 886.78; ZapV = -.346; ZdpV = -.4;
Ma = 47.79;  Md = -4.8346;
Ka = -.5;    Kq = -.3;
az = .;      aq = 6.;                 % loop break frequencies, r/s
w = logspace(-3,3,500);
t = .:.:.;                            % time vector for the step response
% Plant states are AOA (rad) and q (rps)
% Input is fin deflection (rad)
% Outputs are accel Az (fps2) and pitch rate q (rps)
A = [ ZapV 1.; Ma 0.];
B = [ ZdpV; Md];
C = [ Vm*ZapV 0.; 0. 1.];
D = [ Vm*ZdpV; 0.];
% Controller uses PI elements to close the pitch rate loop and the accel loop
Ac = [ 0. 0.; Kq*aq 0.];
Bc = [ Ka*az 0.; Ka*Kq*aq Kq*aq];
Cc = [ Kq 1.];

Dc = [ Ka*Kq Kq];
% Form closed loop system
[kl,kl] = size(D*Dc);   Z  = inv(eye(kl)+D*Dc);
[kl,kl] = size(Dc*Z*D); EE = eye(kl)-Dc*Z*D;
[kl,kl] = size(Z*D*Dc); FF = eye(kl)-Z*D*Dc;
Acl = [ (A-B*Dc*Z*C)  (B*EE*Cc); (-Bc*Z*C)  (Ac-Bc*Z*D*Cc) ];
Bcl = [ (B*EE*Dc); (Bc*FF) ];
Ccl = [ (Z*C)  (Z*D*Cc) ];
Dcl = (Z*D*Dc);
% Step Response
y  = step(Acl,Bcl,Ccl,Dcl,1,t);
az = y(:,1);
q  = y(:,2);

[Figure: Problem 5.3 accel step response, Az (fps2) versus time (sec); rise time = .39 sec, settling time = .95 sec.]

[Figure: Problem 5.3 pitch rate response, q (rps) versus time (sec).]

b) Break the loop at the plant input. The loop gain is L(s) = K(s) G(s), with the controller K(s) = Cc ( sI - Ac )^-1 Bc + Dc and the plant G(s) = C ( sI - A )^-1 B + D.

% Break the loop at the plant input
A_L = [ A  0.*B*Cc; -Bc*C  Ac];
B_L = [ B; Bc*D];
C_L = -[ -Dc*C  Cc];               % change sign for loop gain
D_L = -[ -Dc*D];
theta = pi:.01:3*pi/2;
x = cos(theta); y = sin(theta);    % unit circle segment for the Nyquist plot
[re,im] = nyquist(A_L,B_L,C_L,D_L,1,w);
L = re + sqrt(-1)*im;
rd_min = min(abs(ones(size(L))+L));
sr_min = min(abs(ones(size(L))+ones(size(L))./L));

[Figure: Problem 5.3 Nyquist plots of the loop gain broken at the plant input (full view and expanded view), imaginary versus real part.]

[Figure: Problem 5.3 return difference, magnitude of (I + L) in dB versus frequency (rad/s); minimum = .99.]

[Figure: Problem 5.3 stability robustness, magnitude of (I + inv(L)) in dB versus frequency (rad/s); minimum = .754.]

The singular value stability margins are computed using Eqs. (5.53) and (5.54).

% Compute singular value margins
neg_gm = 20*log10( min([ 1/(1+rd_min)  (1-sr_min) ]) );        % in dB
pos_gm = 20*log10( max([ 1/(1-rd_min)  (1+sr_min) ]) );        % in dB
pm = 180*( min([ 2*asin(rd_min/2)  2*asin(sr_min/2) ]) )/pi;   % in deg

Min singular value of (I + L) = .99. Min singular value of (I + inv(L)) = .754. Singular value gain margins = [-.85 dB, +.8393 dB]. Singular value phase margins = +/- 44.37 deg.

c) From Figure 5.3, we want to model the actuator using a multiplicative uncertainty model at the plant input: (I + Δ). Note that we have a negative sign on the summer, which is different from Figure 5.3. The modified block diagram would be:

[Block diagram: the loop with controller K and plant G, with the uncertainty Δ injected at the plant input through the signals z and w and a negative sign on the summer.]

The problem requires modeling the actuator with a first-order transfer function. Equating the actuator model to the multiplicative uncertainty form and solving for the uncertainty gives

  1/(τ s + 1) = 1 + Δ(s),   so   Δ(s) = -τ s/(τ s + 1).

d) Form the M-Δ analysis model where the uncertainty Δ is from part c). The M matrix from (5.7) can be easily modified here (due to the negative feedback). It is derived as follows:

  z = -K G ( z + w ),   ( I + K G ) z = -K G w,   z = -( I + K G )^-1 K G w,

so

  M = -( I + K G )^-1 K G.

e) Here we pick a time constant τ for the actuator transfer function in part c) and compare |Δ(jω)| against 1/|M(jω)|. For this problem 1/|M| = |I + inv(L)|, since L is scalar at this break point, and we can reuse the result from b).

[Figure: Problem 5.3 small gain actuator analysis; |Δ(jω)| for actuator time constants tau = .5 s, tau = . s, and tau = .8 s, compared against the |I + inv(L)| bound (minimum .754), magnitude in dB versus frequency (rad/s).]

From the figure we see that an actuator with a time constant of .8 sec overlaps the bound (red curve).

Exercise 5.4. The Matlab code from the corresponding Chapter 5 example can be used here. The Bode plots:

[Figure: Problem 5.4 Bode plots, magnitude (dB) and phase (deg) versus frequency (rad/s).]

The Nyquist curve:

[Figure: Problem 5.4 Nyquist plot, imaginary versus real part.]

[Figure: Problem 5.4 return difference (I + L) and stability robustness (I + inv(L)), magnitude in dB versus frequency (rad/s)], together with the corresponding singular value plots.

Exercise 5.5. Form the closed-loop system model A_cl using (2.4); the result is a fourth-order numerical closed-loop matrix. Use (5.7) through (5.8) to isolate the uncertain parameters in A_cl and form the rank-one matrices E_i used to build A_M, B_M, and C_M.

% Build state space matrices for M(s)
[ncl,~] = size(Acl);
E1 = 0.*ones(ncl,ncl);
E1(:,1) = Acl(:,1);   E1(2,1) = 0.;   E1(3,1) = 0.;

[U1,P1,V1] = svd(E1);
b1 = sqrt(P1(1,1))*U1(:,1);   a1 = sqrt(P1(1,1))*V1(:,1).';
E2 = 0.*ones(ncl,ncl);
E2(:,3) = Acl(:,3);   E2(2,3) = 0.;   E2(3,3) = 0.;
[U2,P2,V2] = svd(E2);
b2 = sqrt(P2(1,1))*U2(:,1);   a2 = sqrt(P2(1,1))*V2(:,1).';
E3 = 0.*ones(ncl,ncl);
E3(2,1) = Acl(2,1);
[U3,P3,V3] = svd(E3);
b3 = sqrt(P3(1,1))*U3(:,1);   a3 = sqrt(P3(1,1))*V3(:,1).';
E4 = 0.*ones(ncl,ncl);
E4(2,3) = Acl(2,3);
[U4,P4,V4] = svd(E4);
b4 = sqrt(P4(1,1))*U4(:,1);   a4 = sqrt(P4(1,1))*V4(:,1).';
% M(s) = Cm*inv(s*I - Am)*Bm
Am = Acl;
Bm = [b1 b2 b3 b4];
Cm = [a1; a2; a3; a4];

The resulting B_M and C_M are numerical 4-by-4 matrices. Using the model for M(s), compute the structured singular value and the small gain theorem bound, and extract the minimum values of 1/mu and 1/sigma(M) over frequency.

[Figure: 1/mu and 1/sigma(M) versus frequency; min(1/mu) = .654, min(1/sigma(M)) = .85765.]

Chapter 6

Exercise 6.1. Use file Chapter6_Example6p1_rev.m to solve this problem.

Exercise 6.2. Use file Chapter6_Example6p2.m to solve this problem.

Exercise 6.3. Use file Chapter6_Example6p3.m to solve this problem.

Chapter 7

Exercise 7.1. Substituting the adaptive controller

  δ_a = k_hat_p(t) p + k_hat_cmd(t) p_cmd,

together with the reference model p_ref' = a_ref p_ref + b_ref p_cmd, into the open-loop roll dynamics p' = L_p p + L_δa δ_a gives the closed-loop system (7.8). To derive (7.9), use p = p_ref + e in the error dynamics (7.8) and in the adaptive laws (7.4). Since the system unknown parameters are assumed to be constant, the second and the

third equations in (7.9) immediately follow. The stabilization solution (7.3) is a special case of (7.8) when the external command is set to zero.

Chapter 8, starting at Exercise 8.2.

The trajectories of x' = -sgn(x), x(t_0) = x_0, can be derived by direct integration of the system dynamics on time intervals where the sign of x(t) remains constant:

  x(t) = x_0 - sgn(x_0)( t - t_0 ).

Suppose that x(t_1) = 0 for some t_1 > t_0. Then equating the right-hand side of the trajectory equation to zero gives x_0 = sgn(x_0)( t_1 - t_0 ). Multiplying both sides by sgn(x_0) results in t_1 - t_0 = |x_0|, which implies that any two solutions with the same value of |x_0| become zero at the same time. Finally, it is evident that the solution derivatives are discontinuous at any instant in time when x = 0, and thus the solutions are not continuously differentiable.

Exercise 8.3. Rewrite the system dynamics x' = x^3 in the separated form dx/x^3 = dt. Integrating both sides from the initial condition x_0 gives the explicit solution on its interval of existence, and consequently this is the system's unique trajectory with the initial condition x_0. The system phase portrait is easy to draw: it represents the monotonically strictly increasing cubic polynomial f(x) = x^3. These dynamics are locally Lipschitz. Indeed, since

  |f(x) - f(y)| = |x^3 - y^3| = |x^2 + x y + y^2| |x - y| ≤ L |x - y|

for any x, y with |x| ≤ r, |y| ≤ r, where L = 3 r^2 is a finite positive constant, the system dynamics are locally Lipschitz. However, the dynamics are not globally Lipschitz since f'(x) = 3 x^2 is unbounded.

Exercise 8.4. Suppose that a trajectory x(t) of the scalar autonomous system x' = f(x), with unique solutions, is non-monotonic. Then there must exist a finite time instant t_1

where x'(t_1) = f( x(t_1) ) = 0. Therefore, for all t ≥ t_1 the solution x(t) = x(t_1) is constant, which in turn contradicts the assumption of the trajectory being non-monotonic.

Exercise 8.5. If det A ≠ 0, then the system x' = A x has the unique isolated equilibrium x = 0. Otherwise, the equilibrium set of the system is defined by the linear manifold A x = 0, implying an infinite number of equilibria.

Exercise 8.6. Trajectories of the scalar non-autonomous system x' = a(t) x with the initial condition x(t_0) = x_0 can be written as

  x(t) = exp( ∫_{t_0}^{t} a(τ) dτ ) x_0.

The system equilibrium point is x = 0. Suppose that a(t) is continuous in time. In order to prove stability in the sense of Lyapunov, one needs to show that for any ε > 0 there exists δ(ε, t_0) > 0 such that |x_0| < δ implies |x(t)| < ε for all t ≥ t_0. Based on the explicit form of the solution, it is easy to see that boundedness of the system trajectories is assured if and only if

  M(t_0) = sup_{t ≥ t_0} exp( ∫_{t_0}^{t} a(τ) dτ ) < ∞.

In this case, it is sufficient to select δ(ε, t_0) = ε / M(t_0). If lim_{t→∞} ∫_{t_0}^{t} a(τ) dτ = -∞, then asymptotic stability takes place. If, in addition, M(t_0) is bounded as a function of t_0, then δ(ε) can be chosen independent of t_0 and the asymptotic stability becomes uniform.

Exercise 8.7. Since y g(y) > 0 for all y ≠ 0, the function

  V(x_1, x_2) = ∫_0^{x_1} g(y) dy + ∫_0^{x_2} y dy

is positive for all nonzero states; this proves global positive definiteness of V. Clearly V → ∞ as the state norm grows, and so V is also radially unbounded. Differentiating V along the system trajectories gives:

  dV/dt = g(x_1) x_1' + x_2 x_2'.

Substituting the system dynamics, the cross terms involving g(x_1) cancel, leaving dV/dt ≤ 0 with dV/dt depending only on x_2. So V is a Lyapunov function and the origin is globally uniformly stable. To prove asymptotic stability, consider the set of points E where dV/dt = 0; on this set x_2 = 0. If a trajectory gets trapped in E, then x_2 remains zero, which forces the trajectory to the origin. Outside of the set, dV/dt < 0, and consequently trajectories enter E and then go to the origin. These dynamics can also be explained using LaSalle's Invariance Principle [2] for autonomous systems. The principle claims that the system trajectories will approach the largest invariant set in E. Here, this set contains only one point: the origin. The overall argument proves global uniform asymptotic stability of the system equilibrium.

Exercise 8.8. Due to the presence of the sign function, the system dynamics are discontinuous in x and thus the standard sufficient conditions for existence and uniqueness of solutions do not apply. However, the system solutions can be defined in the sense of Filippov [2]. This leads to the notion of sliding modes in dynamical systems. Consider the manifold s = 0, whose dynamics along the system trajectories are

  s' = -sgn(s).

In Exercise 8.2, these same dynamics were considered. It was shown that, starting from any initial conditions, the trajectories s(t) reach zero in finite time. In terms of the original system, this means that the system state x(t) reaches the linear manifold s = 0 in finite time. On the manifold, the state components satisfy the reduced first-order dynamics implied by s = 0, and so they decay to zero, which in itself implies that after reaching the manifold the trajectories slide down to the origin.

Exercise 8.9. Let f : R → R represent a scalar continuously differentiable function whose derivative is bounded: |f'(x)| ≤ C. Then for any ε > 0 there exists δ = ε/C such that

  |f(x) - f(y)| ≤ C |x - y| < ε   for any |x - y| < δ,

and so f is uniformly continuous on R. We now turn our attention to proving Corollary 8.1 where, by assumption, a scalar function f(t) is twice continuously differentiable with bounded second derivative f''(t). Therefore, the first derivative f'(t) is uniformly continuous. In addition, it is assumed that f(t) has a finite limit as t → ∞. Then, due to Barbalat's Lemma, f'(t) tends to zero, which proves the corollary.

Exercise 8.10. Substituting u = K_hat_x^T(t) x + K_r r(t) into the system dynamics gives

  x' = ( A + b K_hat_x^T ) x + b K_r r(t) = A_ref x + b ΔK_x^T x + b K_r r(t),

where A_ref = A + b K_x^T is Hurwitz (with K_x the ideal, unknown feedback gain) and ΔK_x = K_hat_x - K_x is the feedback gain estimation error. The desired reference dynamics are x_ref' = A_ref x_ref + b K_r r. Subtracting the latter from the system dynamics results in

  e' = A_ref e + b ΔK_x^T x,

with the state tracking error e = x - x_ref. Also, the feedback gain error dynamics can be written as ΔK_x' = K_hat_x' = -Γ x e^T P b. For the closed-loop error system

  e' = A_ref e + b ΔK_x^T x,   ΔK_x' = -Γ x e^T P b,

consider the Lyapunov function candidate

  V( e, ΔK_x ) = e^T P e + ΔK_x^T Γ^-1 ΔK_x.

Clearly, this function is positive definite and radially unbounded. Its derivative along the closed-loop error dynamics is

  dV/dt = e'^T P e + e^T P e' + 2 ΔK_x^T Γ^-1 ΔK_x'
        = ( A_ref e + b ΔK_x^T x )^T P e + e^T P ( A_ref e + b ΔK_x^T x ) - 2 ΔK_x^T x e^T P b
        = e^T ( A_ref^T P + P A_ref ) e = -e^T Q e ≤ 0.

This proves: a) uniform stability of the system error dynamics; and b) uniform boundedness of the system errors, that is, e, ΔK_x ∈ L_∞. Since the external input r(t) is bounded, then x_ref ∈ L_∞ and x = x_ref + e ∈ L_∞. Then u ∈ L_∞ and, consequently, the error and state derivatives are uniformly bounded as well: e', x' ∈ L_∞. Since dV/dt = -e^T Q e with e, e' ∈ L_∞, the derivative dV/dt is a uniformly continuous function of time. Note that V is lower bounded and its time derivative is non-positive; hence V tends to a limit as a function of time. Using Barbalat's Lemma, we conclude that dV/dt = -e^T Q e tends to zero, which is equivalent to e(t) → 0. The latter proves asymptotic command tracking of the controller, that is, x(t) → x_ref(t) as t → ∞, for any bounded external command r(t). Consequently, y(t) = C x(t) → y_ref(t) = C x_ref(t). Finally, if K_r = -( C A_ref^-1 b )^-1, then the DC gain of the reference model is unity. In this case, both the system state x and the reference model state x_ref will asymptotically track any external bounded constant command r with zero steady-state error.

Chapter 9

Exercise 9.1. Let some of the diagonal elements λ_i of Λ in the system dynamics (9.4) be negative, and assume that their signs are known. Consider the modified Lyapunov function (9.43),

  V( e, ΔK_x, ΔK_r ) = e^T P e + trace( ( ΔK_x^T Γ_x^-1 ΔK_x + ΔK_r^T Γ_r^-1 ΔK_r ) |Λ| ),

where |Λ| denotes the diagonal matrix with positive elements |λ_i|. It is easy to see that this function is positive definite and radially unbounded. Let sgn Λ = diag( sgn λ_1, ..., sgn λ_m ); then Λ = |Λ| sgn Λ. Repeating the derivations (9.55) through (9.58) with B sgn Λ in place of B gives the modified adaptive laws.

Chapter 10

Exercise 10.1. We need to prove that

  G_ref(0) = -C A_ref^-1 B_ref = I_{m×m},

with a Hurwitz matrix A_ref = A - B K_x and

  A = [ 0_{m×m}  C_p; 0  A_p ],   B = [ 0; B_p ],   B_ref = [ -I_{m×m}; 0_{n_p×m} ],   C = [ 0_{m×m}  C_p ].

So,

  A_ref = [ 0_{m×m}  C_p; ×  × ],

where × indicates "do not care" elements. Denote

  M = A_ref^-1 B_ref = [ M_1; M_2 ],

where M_1, M_2 are of the corresponding dimensions. Then B_ref = A_ref M, that is,

  [ -I_{m×m}; 0_{n_p×m} ] = [ 0_{m×m}  C_p; ×  × ] [ M_1; M_2 ],

and the first block row gives C_p M_2 = -I_{m×m}. Finally,

  G_ref(0) = -C A_ref^-1 B_ref = -[ 0_{m×m}  C_p ] [ M_1; M_2 ] = -C_p M_2 = I_{m×m},

and so the reference model DC gain is unity.

Exercise 10.2. Start with the adaptive controller u = K_hat_x^T(t) x and the adaptive law K_hat_x' = -Γ x e^T P B, and initialize the adaptive gain at the baseline gain values, K_hat_x(0) = K_bl. Define the adaptive increment ΔK_hat_x(t) = K_hat_x(t) - K_bl, so that

  K_hat_x(t) = K_bl + ΔK_hat_x(t),   ΔK_hat_x' = K_hat_x' = -Γ x e^T P B,

where ΔK_hat_x represents an adaptive incremental gain whose adaptive law dynamics are the same as for the original adaptive gain K_hat_x. The resulting total control input,

  u = K_hat_x^T x = ( K_bl + ΔK_hat_x )^T x = u_bl + ΔK_hat_x^T x,

represents an adaptive augmentation of the baseline linear controller u_bl = K_bl^T x.

Exercise 10.3. Consider the error dynamics e' = A_ref e + B ( u_ad + f( u_bl, x_p ) ), and assume that the extended regressor f( u_bl, x_p ) is continuously differentiable in its arguments. Then

  e'' = A_ref e' + B ( u_ad' + (∂f/∂u_bl) u_bl' + (∂f/∂x_p) x_p' ).

The first term on the right-hand side of the equation is uniformly bounded. Also, all the functions in the second term are uniformly bounded. Therefore e'' ∈ L_∞ and e'(t) is a uniformly continuous function of time. Since, in addition, e(t) tends to zero, then using Barbalat's Lemma we conclude that e'(t) tends to zero as well. Then the error dynamics imply

  lim_{t→∞} ( u_ad(t) + f( u_bl(t), x_p(t) ) ) = 0,

that is, the adaptive component asymptotically cancels the matched uncertainty.

Chapter 11

Exercise 11.1. The Projection Operator, as defined in (11.37), is continuous and differentiable in its arguments. Consequently, it is locally Lipschitz.

Exercise 11.3. Since

  ( θ - θ* )^T ( Proj(θ, y) - y ) = Σ_i ( θ_i - θ*_i ) ( Proj(θ_i, y_i) - y_i ),

it is sufficient to show that each term of the sum is non-positive. By the assumption, θ_i^min ≤ θ*_i ≤ θ_i^max.

For each component, Proj(θ_i, y_i) differs from y_i only when θ_i is at (or beyond) its bound and y_i pushes it further outside the admissible interval: if θ_i ≥ θ_i^max and y_i > 0, or if θ_i ≤ θ_i^min and y_i < 0, the projection reduces the update, and in those cases ( θ_i - θ*_i )( Proj(θ_i, y_i) - y_i ) ≤ 0 because θ*_i lies inside [ θ_i^min, θ_i^max ]; otherwise Proj(θ_i, y_i) = y_i and the term is zero. The above inequality allows one to use the adaptive laws (11.53) and carry out the proof of the system UUB properties, while repeating the exact same arguments as in Section 11.5.

Chapter 12

Exercise 12.1. There are several ways to embed a baseline linear controller into the overall design and thereby turn the adaptive system from Table 12.1 into an augmentation of the selected baseline controller. Our preferred way to perform such a design starts with the assumed open-loop linear dynamics without the uncertainties. A robust linear feedback controller u_bl = K_x x + K_r r can be designed for this linear system. Then the corresponding closed-loop linear system dynamics become the reference model that the adaptive system must achieve and maintain in the presence of uncertainties. So, in (12.4), the reference model matrices A_ref, B_ref are defined as the closed-loop linear system matrices achieved under the selected linear baseline controller. The total control input is defined similar to (12.6),

  u = u_bl + u_ad.

Repeating the derivations from Section 12.3, the open-loop system with the embedded baseline controller becomes very similar to (12.47) and (12.48). In this case, the adaptive component will depend on the baseline controller, as shown in (12.5). Starting from (12.4) and repeating the same design steps will result in adaptive

laws similar to (12.37). The only difference will be the addition of adaptive dynamics on the baseline gains, as shown in (12.66). Alternatively, one can choose to initialize the adaptive gains K_hat_x, K_hat_r at the corresponding gain values K_x, K_r of the selected baseline controller, and then use the arguments from Exercise 10.2 to justify the design. However, in practical applications this approach may result in unnecessary numerical complications if the gains of the baseline controller are scheduled to depend on slowly time-varying system parameters.

Exercise 12.4. First, set Λ = I_{m×m} and f(x_p, t) = 0, and design a baseline linear feedback controller of the form u_bl = -K_x x for the assumed nominal linear open-loop dynamics. Note that, due to the inclusion of the integrated output tracking error in the design, the baseline linear controller has classical (proportional + integral) feedback connections:

  u_bl = K_I ( y_cmd - y ) / s - K_P x_p.

Second, form the reference system to represent the closed-loop baseline linear dynamics with the embedded baseline linear controller:

  x_ref' = ( A - B K_x ) x_ref + [ -I_{m×m}; 0_{n×m} ] y_cmd = A_ref x_ref + B_ref y_cmd.

Third, define the total control input without an adaptive component on the external command:

  u = u_bl + u_ad.

Finally, follow the rationale from Exercise 12.1 and repeat the design steps from Section 12.4, starting with the modified controller definition.

Chapter 13

Exercise 13.1. Differentiating the error dynamics (13.5) gives

  e'' = a_ref e' + b ( Δk_x' x + Δk_x x' + Δk_r' r + Δk_r r' ).

If r' ∈ L_∞, then e'' ∈ L_∞, since it has already been proven that all signals on the right-hand side of the above equation are uniformly bounded. Therefore, e'(t) is a uniformly continuous function of time. Also, it was proven that lim_{t→∞} e(t) = 0. Using Barbalat's Lemma implies lim_{t→∞} e'(t) = 0, and so the remaining signal in the error dynamics (13.5),

  e' = a_ref e + b ( Δk_x x + Δk_r r ),

namely the parameter-error term b ( Δk_x x + Δk_r r ), must asymptotically tend to zero.

Exercise 13.2. Consider the system (13.3), z' = A z + f(z, x, t), with Hurwitz A. The system solution is

  z(t) = e^{A (t - t_0)} z(t_0) + ∫_{t_0}^{t} e^{A (t - τ)} f( z(τ), x(τ), τ ) dτ.

From an engineering perspective, these dynamics can be viewed as a linear stable filter with f(z, x, t) as the input. So, if the latter decays to zero then the state of the filter will do the same. This argument can also be proven formally using the control-theoretic arguments from the given reference.

Exercise 13.3. Use the results from Exercise 13.2. In this case, the error dynamics do not explicitly depend on the external command y_cmd(t). Without assuming continuity or even differentiability of the latter, one can differentiate the error dynamics, show that e'' ∈ L_∞, and then repeat the arguments from Exercise 13.1 to prove that lim_{t→∞} e'(t) = 0.
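The asymptotic behavior discussed in Exercises 8.10 and 13.1 can be visualized with a short simulation of the scalar MRAC loop. The following MATLAB sketch uses hypothetical numerical values (plant, reference model, adaptation rate, and command) chosen only for illustration; it is not one of the original solution files.

% Scalar MRAC: plant x' = a*x + b*u, controller u = kx_hat*x + kr_hat*r
a = 1; b = 1; aref = -2; bref = 2; gam = 10;   % hypothetical values
r = @(t) sin(t);                                % bounded external command
% State vector z = [x; xref; kx_hat; kr_hat]
dyn = @(t,z) [ a*z(1) + b*( z(3)*z(1) + z(4)*r(t) );   % plant with adaptive control
               aref*z(2) + bref*r(t);                  % reference model
              -gam*z(1)*( z(1) - z(2) );               % kx_hat adaptive law
              -gam*r(t)*( z(1) - z(2) ) ];             % kr_hat adaptive law
[t,z] = ode45(dyn, [0 30], [1; 0; 0; 0]);
e = z(:,1) - z(:,2);                            % tracking error, tends to zero
plot(t, e); grid on; xlabel('t'); ylabel('e(t)');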

Chapter 14

Exercise 14.1. Suppose that all states are accessible, i.e., C = I_{n×n}. Disregarding the Projection Operator, the output feedback (baseline + adaptive) control input, the adaptive laws, and the corresponding reference model (i.e., the state observer, which contains the output error feedback term L_v ( y - y_hat )) take the form summarized in Table 14.1: the control is u = u_bl + u_ad(x_hat), the adaptive parameters are driven by the observer (output estimation) error, and the observer is

  x_hat' = A_ref x_hat + L_v ( y - y_hat ) + B_ref z_cmd.

At the same time, a state feedback MRAC system can be written using the corresponding state feedback table: the control is u = u_bl + u_ad(x), the adaptive parameters are driven by the state tracking error e = x - x_ref through P B, and the ideal reference model is

  x_ref' = A_ref x_ref + B_ref z_cmd.

Clearly, the main difference between the two systems is the use of the state observer in the former, as opposed to the ideal reference model in the latter. Another difference consists of employing the estimated / filtered state feedback in the observer-based adaptive system versus the original system state feedback in MRAC. As was formally shown in Chapters 13 and 14, these two design features give the observer-based adaptive system a distinct advantage over classical MRAC: the ability to suppress undesirable transients.

Exercise 14.2. Instead of using (14.4), consider the original dynamics (14.7),

  x' = A x + B Λ ( u + Θ^T Φ(x) ) + B_ref z_cmd,   y = C x,

and the state observer

  x_hat' = A x_hat + B ( Λ_hat u + Θ_hat^T Φ(x_hat) ) + L_v ( y - y_hat ) + B_ref z_cmd,   y_hat = C x_hat,

where Λ_hat ∈ R^{m×m} and Θ_hat ∈ R^{N×m} are the parameters to be estimated. Choosing the control input

  u = -Λ_hat^-1 Θ_hat^T Φ(x_hat)

gives the linear state observer dynamics

  x_hat' = A x_hat + L_v ( y - y_hat ) + B_ref z_cmd,   y_hat = C x_hat.

Subtracting the observer from the system results in the observer error dynamics,

  e' = ( A - L_v C ) e + B Λ ( u + Θ^T Φ(x) ) - B ( Λ_hat u + Θ_hat^T Φ(x_hat) ),

which are very similar to (14.4). Repeating all the design steps after (14.4) results in pure (no baseline included) adaptive laws like (14.47), in which the parameter estimates are updated through the Projection Operator, driven by the regressor Φ(x_hat) and the output estimation error e_y = y - y_hat = C ( x - x_hat ) = C e. The rest of the design steps mirror those in Section 14.4.

Exercise 14.3. For the open-loop system

  x' = A x + B Λ u,   y = C x,   z = C_z x,

with an unknown strictly positive definite diagonal matrix Λ ≠ I_{m×m}, there exists a control input of the form u = K_x x + K_z z_cmd, with unknown gains K_x ∈ R^{m×n} and K_z ∈ R^{m×m}, that would have resulted in the desired closed-loop stable reference dynamics

  x' = ( A + B Λ K_x ) x + B Λ K_z z_cmd = A_ref x + B_ref z_cmd,

where A_ref, B_ref are the two known matrices that define the desired closed-loop dynamics. This is an existence-type statement. Its validity is predicated on the fact that the pair (A, B Λ) is controllable and the two desired matrices are chosen to satisfy the Matching Conditions (MC): A_ref = A + B Λ K_x, B_ref = B Λ K_z. The MC

impose restrictions on the selection of achievable dynamics for the original system, which in turn can be rewritten as

  x' = A_ref x + B Λ ( u - K_x x - K_z z_cmd ) + B_ref z_cmd = A_ref x + B Λ ( u - Θ^T Φ(x, z_cmd) ) + B_ref z_cmd,

where Θ ∈ R^{(n+m)×m} aggregates the unknown gains K_x, K_z, and Φ(x, z_cmd) = [ x; z_cmd ] is the corresponding regressor, so that Θ^T Φ(x, z_cmd) = K_x x + K_z z_cmd. With that in mind, consider the state observer

  x_hat' = A_ref x_hat + B Λ_hat ( u - Θ_hat^T Φ(x_hat, z_cmd) ) + L_v ( y - y_hat ) + B_ref z_cmd,   y_hat = C x_hat,

where Θ_hat is the matrix of to-be-estimated parameters. Choosing

  u = Θ_hat^T Φ(x_hat, z_cmd)

yields the linear time-invariant state observer dynamics

  x_hat' = A_ref x_hat + L_v ( y - y_hat ) + B_ref z_cmd,   y_hat = C x_hat.

With the state and output errors defined as in (14.3), e = x - x_hat and e_y = y - y_hat = C e, the observer error dynamics

  e' = ( A_ref - L_v C ) e + B Λ ( Θ_hat^T Φ(x_hat, z_cmd) - Θ^T Φ(x, z_cmd) )

are in the form of (14.4), with z_cmd in place of u_bl and A_ref instead of A. Repeating the arguments starting from (14.4) gives stable output feedback observer-based adaptive laws in the form of (14.47),

in which the parameter estimates Θ_hat are updated through the Projection Operator, driven by the regressor Φ(x_hat, z_cmd) and the output estimation error e_y, along with the associated proofs of stability and bounded tracking performance.
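Since the adaptive laws above rely on the Projection Operator to keep the parameter estimates bounded, a component-wise min-max version of the idea can be sketched in MATLAB as follows. This is only an illustration of the concept from Exercise 11.3; the function name and the simple bound handling are assumptions, not the textbook's (11.37) definition.

% Component-wise parameter update with simple min-max projection:
% the raw update y_i is zeroed whenever theta_i sits at a bound and y_i
% would push it further outside [theta_min_i, theta_max_i].
function ydot = proj_minmax(theta, y, theta_min, theta_max)
    ydot = y;
    ydot(theta >= theta_max & y > 0) = 0;   % hold at the upper bound
    ydot(theta <= theta_min & y < 0) = 0;   % hold at the lower bound
end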