
State space control for the Two degrees of freedom Helicopter

AAE364L

In this lab we will use state space methods to design a controller to fly the two degrees of freedom helicopter.

1 The state space model

The equations of motion for our two degrees of freedom helicopter are given by
\[
\ddot\theta + J_y\sin(\theta)\cos(\theta)\,\dot\psi^2 + mg\big(h\sin(\theta)+r_c\cos(\theta)\big) + c_p\dot\theta
   = l k_{pp}\,v_p + k_{py}\,v_y, \tag{1}
\]
\[
\big(J_p\cos(\theta)^2 + J_{shaft}\big)\ddot\psi - 2J_p\cos(\theta)\sin(\theta)\,\dot\theta\,\dot\psi + c_y\dot\psi
   = l k_{yy}\,v_y\cos(\theta) + k_{yp}\,v_p\cos(\theta).
\]
The linearized equations of motion for the helicopter about level flight are determined by
\[
\ddot\theta + c_p\dot\theta + mgh\,\theta = l k_{pp}\,\delta v_p + k_{py}\,\delta v_y,
\qquad
\ddot\psi + c_y\dot\psi = l k_{yy}\,\delta v_y + k_{yp}\,\delta v_p, \tag{2}
\]
where
\[
\delta v_p = v_p - v_{pe} \qquad\text{and}\qquad \delta v_y = v_y - v_{ye}.
\]

Recall that v_pe is the pitch equilibrium voltage, while v_ye is the yaw equilibrium voltage, given by
\[
v_{pe} = \frac{mg\big(h\sin(\theta_d)+r_c\cos(\theta_d)\big)\,l k_{yy}}{l k_{pp}\,l k_{yy} - k_{py}k_{yp}}
\qquad\text{and}\qquad
v_{ye} = -\,\frac{mg\big(h\sin(\theta_d)+r_c\cos(\theta_d)\big)\,k_{yp}}{l k_{pp}\,l k_{yy} - k_{py}k_{yp}},
\]
where θ_d is the desired pitch angle. This v_pe is chosen so that θ_d is the steady state solution for the first nonlinear equation in (1).
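As a quick numerical sanity check, the short MATLAB sketch below evaluates these two equilibrium voltages for a chosen desired pitch angle. The parameter values and names used here (m, g, h, rc, l, kpp, kpy, kyp, kyy) are placeholders only; the real values, possibly under different variable names, come from the lab setup file.

% Placeholder parameter values -- replace with the values defined by your setup file.
m = 1.39; g = 9.81; h = 0.004; rc = 0.01;            % mass and geometry constants (assumed)
l = 0.19;                                            % pivot-to-propeller distance (assumed)
kpp = 0.20; kpy = 0.0068; kyp = 0.022; kyy = 0.072;  % thrust/torque constants (assumed)

theta_d = 10*pi/180;                                 % desired pitch angle in radians
den = l*kpp*l*kyy - kpy*kyp;                         % common denominator in the formulas above
vpe =  m*g*(h*sin(theta_d) + rc*cos(theta_d))*l*kyy/den   % pitch equilibrium voltage
vye = -m*g*(h*sin(theta_d) + rc*cos(theta_d))*kyp/den     % yaw equilibrium voltage

With the actual setup-file constants, these are the feedforward voltages that appear in the feedback laws later in this handout.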

Now let us define the following state variables x_1 = θ, x_2 = ψ, x_3 = θ̇ and x_4 = ψ̇. In vector notation the state x is given by
\[
x = \begin{bmatrix} x_1\\ x_2\\ x_3\\ x_4\end{bmatrix}
  = \begin{bmatrix} \theta\\ \psi\\ \dot\theta\\ \dot\psi\end{bmatrix}. \tag{3}
\]
Using the linearized equations about level flight in (2), we obtain
\[
\dot x_3 = \ddot\theta = -mgh\,\theta - c_p\dot\theta + l k_{pp}\,\delta v_p + k_{py}\,\delta v_y
         = -mgh\,x_1 - c_p x_3 + l k_{pp}\,\delta v_p + k_{py}\,\delta v_y,
\]
\[
\dot x_4 = \ddot\psi = -c_y\dot\psi + k_{yp}\,\delta v_p + l k_{yy}\,\delta v_y
         = -c_y x_4 + k_{yp}\,\delta v_p + l k_{yy}\,\delta v_y.
\]
Combining this with ẋ_1 = θ̇ = x_3 and ẋ_2 = ψ̇ = x_4, we arrive at the following state space system:
\[
\begin{bmatrix} \dot x_1\\ \dot x_2\\ \dot x_3\\ \dot x_4\end{bmatrix}
= \begin{bmatrix} 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\\ -mgh & 0 & -c_p & 0\\ 0 & 0 & 0 & -c_y\end{bmatrix}
  \begin{bmatrix} x_1\\ x_2\\ x_3\\ x_4\end{bmatrix}
+ \begin{bmatrix} 0 & 0\\ 0 & 0\\ l k_{pp} & k_{py}\\ k_{yp} & l k_{yy}\end{bmatrix}
  \begin{bmatrix} \delta v_p\\ \delta v_y\end{bmatrix}. \tag{4}
\]
In other words, the linearized equations of motion about level flight admit a state space representation of the form ẋ = Ax + Bv. In this setting,
\[
A = \begin{bmatrix} 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\\ -mgh & 0 & -c_p & 0\\ 0 & 0 & 0 & -c_y\end{bmatrix},
\qquad
B = \begin{bmatrix} 0 & 0\\ 0 & 0\\ l k_{pp} & k_{py}\\ k_{yp} & l k_{yy}\end{bmatrix}
\qquad\text{and}\qquad
v = \begin{bmatrix} \delta v_p\\ \delta v_y\end{bmatrix}. \tag{5}
\]
In order to move θ to θ_d and ψ to ψ_d, we have to incorporate an integral controller into our state feedback. We will add two additional states to this system. To this end, let us define two new state variables
\[
x_5 = \int_0^t \big(\theta(\sigma)-\theta_d\big)\,d\sigma, \qquad \dot x_5 = x_1 - \theta_d,
\]
\[
x_6 = \int_0^t \big(\psi(\sigma)-\psi_d\big)\,d\sigma, \qquad \dot x_6 = x_2 - \psi_d. \tag{6}
\]
Recall that x_1(t) = θ(t) is the pitch angle, and x_2(t) = ψ(t) is the yaw angle. As expected, θ_d is the desired pitch angle and ψ_d is the desired yaw angle that we want to fly the helicopter to. Using ẋ_5 = x_1 − θ_d and ẋ_6 = x_2 − ψ_d, the state variable system in (4) is now determined by
\[
\dot x = A_i x + B_i v - W\begin{bmatrix}\theta_d\\ \psi_d\end{bmatrix}. \tag{7}
\]

The new state x(t) in R^6 and the matrices A_i, B_i and W are given by
\[
A_i = \begin{bmatrix} A & 0_{4\times 2}\\[2pt] \begin{matrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\end{matrix} & 0_{2\times 2}\end{bmatrix},
\qquad
B_i = \begin{bmatrix} B\\ 0_{2\times 2}\end{bmatrix},
\qquad
x(t) = \begin{bmatrix} \theta(t)\\ \psi(t)\\ \dot\theta(t)\\ \dot\psi(t)\\ \int_0^t(\theta(\sigma)-\theta_d)\,d\sigma\\ \int_0^t(\psi(\sigma)-\psi_d)\,d\sigma\end{bmatrix}
\qquad\text{and}\qquad
W = \begin{bmatrix} 0_{4\times 2}\\ I_2\end{bmatrix}. \tag{8}
\]
Using the matrices A and B in (5) one can easily construct A_i, B_i and W in MATLAB. Recall that you already have A and B in MATLAB from your setup file. In MATLAB the matrices A_i and B_i are computed by

Ai = [A, zeros(4,2); 1,0,0,0,0,0; 0,1,0,0,0,0]
Bi = [B; 0,0; 0,0]

Finally, it is noted that A_i is a matrix acting on R^6 and B_i is a matrix mapping R^2 into R^6. In other words, A_i is a 6 x 6 matrix and B_i is a 6 x 2 matrix.
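The sketch below carries this construction out in MATLAB and also checks the controllability claim used in the next subsection. The numeric constants are illustrative placeholders; in the lab, A and B already come from the setup file.

% Placeholder linearized constants (illustrative only; your setup file defines the real A and B).
mgh = 0.27; cp = 0.80; cy = 0.32;
lkpp = 0.038; kpy = 0.0068; kyp = 0.022; lkyy = 0.014;

A = [0 0 1 0; 0 0 0 1; -mgh 0 -cp 0; 0 0 0 -cy];   % state matrix in (5)
B = [0 0; 0 0; lkpp kpy; kyp lkyy];                % input matrix in (5)

Ai = [A, zeros(4,2); 1 0 0 0 0 0; 0 1 0 0 0 0];    % 6 x 6 augmented state matrix in (8)
Bi = [B; zeros(2,2)];                              % 6 x 2 augmented input matrix in (8)
W  = [zeros(4,2); eye(2)];                         % reference matrix in (8)

rank(ctrb(Ai, Bi))                                 % equals 6 when {Ai, Bi} is controllable

With the real setup-file values the rank should also come out as 6, which is what allows the lqr and place designs described below.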

1.1 Error feedback

To design a state feedback controller, let x_d be the constant vector in R^6 given by
\[
x_d = \begin{bmatrix} \theta_d\\ \psi_d\\ 0\\ 0\\ 0\\ 0\end{bmatrix}.
\]
It is emphasized that the desired pitch angle θ_d and the desired yaw angle ψ_d are assumed to be constants. So x_d is a constant vector in R^6. Now let x̃ = x − x_d be the error between x and x_d. It turns out that the pair {A_i, B_i} is controllable. So one can use linear quadratic regulator theory or pole placement to design a state feedback controller K. In this case, K is a matrix mapping R^6 into R^2, that is, K is a 2 x 6 matrix. In this setting, the input voltage is given by
\[
v = \begin{bmatrix} v_p\\ v_y\end{bmatrix}
  = \begin{bmatrix} v_{pe}(\theta_d)\\ v_{ye}(\theta_d)\end{bmatrix} - K\tilde x
  = \begin{bmatrix} \dfrac{mg\big(h\sin(\theta_d)+r_c\cos(\theta_d)\big)\,l k_{yy}}{l k_{pp}\,l k_{yy}-k_{py}k_{yp}}\\[10pt]
      -\dfrac{mg\big(h\sin(\theta_d)+r_c\cos(\theta_d)\big)\,k_{yp}}{l k_{pp}\,l k_{yy}-k_{py}k_{yp}}\end{bmatrix} - K\tilde x. \tag{9}
\]
Substituting v into the new state variable system in (7), we arrive at
\[
\dot{\tilde x} = \dot x = (A_i - B_iK)\tilde x + A_i x_d - W\begin{bmatrix}\theta_d\\ \psi_d\end{bmatrix}. \tag{10}
\]
Recall that θ_d and ψ_d are constants. Because K will be chosen in such a way that all the eigenvalues of A_i − B_iK are in the open left half plane, the state space system in (10) will move to steady state, that is, $\dot{\tilde x} = 0$ in steady state. In fact, according to the final value theorem, the steady state error is given by
\[
\tilde x(\infty) = \lim_{t\to\infty}\tilde x(t)
 = (A_i - B_iK)^{-1}\Big(W\begin{bmatrix}\theta_d\\ \psi_d\end{bmatrix} - A_i x_d\Big).
\]
In particular, in steady state 0 = ẋ_5 = x_1 − θ_d = θ − θ_d and 0 = ẋ_6 = x_2 − ψ_d = ψ − ψ_d. In other words, in steady state
\[
\theta_d = \lim_{t\to\infty}\theta(t) \qquad\text{and}\qquad \psi_d = \lim_{t\to\infty}\psi(t).
\]
Therefore the helicopter will fly to the desired pitch angle θ_d and the desired yaw angle ψ_d.
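This steady state claim can be checked numerically before running any simulation. The sketch below assumes Ai, Bi and W from the earlier listing, uses an arbitrary stabilizing gain (computed here with the lqr command introduced next; any stabilizing K works), and evaluates the steady state of (10). The weights and desired angles are illustrative.

% Assumes Ai, Bi, W from the earlier sketch; weights and references are illustrative.
K = lqr(Ai, Bi, diag([200 150 1 1 50 50]), eye(2));      % any stabilizing 2x6 gain
theta_d = 10*pi/180;  psi_d = 30*pi/180;                 % desired angles (rad)
xd = [theta_d; psi_d; 0; 0; 0; 0];                       % constant target vector
xtilde_inf = (Ai - Bi*K) \ (W*[theta_d; psi_d] - Ai*xd); % steady state error from (10)
x_inf = xtilde_inf + xd;                                 % x_inf(1) = theta_d and x_inf(2) = psi_d

Only the first two components of x(∞) are pinned to the references; the rates settle to zero, while the integrator states x_5 and x_6 settle to whatever constants produce the required trim correction.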

To implement the linear quadratic regulator (LQR) method in this setting, observe that Q and R are diagonal matrices of the form
\[
Q = \begin{bmatrix} q_1 & & & & & \\ & q_2 & & & & \\ & & q_3 & & & \\ & & & q_4 & & \\ & & & & q_5 & \\ & & & & & q_6\end{bmatrix}
\qquad\text{and}\qquad
R = \begin{bmatrix} r_1 & 0\\ 0 & r_2\end{bmatrix}, \tag{11}
\]
where q_j ≥ 0 for all j = 1, 2, ..., 6 while r_1 > 0 and r_2 > 0. Now consider the optimization problem
\[
\min_{u}\int_0^\infty \big(x^* Q x + u^* R u\big)\,dt
 = \min_{\delta v_p,\,\delta v_y}\int_0^\infty \Big(\sum_{j=1}^{6} q_j x_j^2 + r_1(\delta v_p)^2 + r_2(\delta v_y)^2\Big)\,dt
 \quad\text{subject to } \dot x = A_i x + B_i v. \tag{12}
\]
(The conjugate transpose of a vector z is denoted by z^*.) The control engineer chooses the weights {q_j}_{j=1}^{6} together with r_1 > 0 and r_2 > 0. It is emphasized that the weights r_1 and r_2 must be strictly positive. The optimal control v, or solution to this optimization problem, is unique and given by v = −Kx, where K is a state gain matrix. The MATLAB command to compute K is

K = lqr(A_i, B_i, Q, R).   (13)

In this case, the closed loop system corresponding to the optimal control v = −Kx is given by ẋ = (A_i − B_iK)x. The LQR method guarantees that A_i − B_iK is stable. Finally, it is noted that we are merely using the LQR method to compute a stabilizing gain K. This gain is in fact not necessarily the optimal solution for the system in (7); the true optimal solution for (7) is discussed in optimal control theory.
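A minimal sketch of this design step is shown below, assuming Ai and Bi from the earlier listing. The weights are illustrative starting values rather than recommended ones, and are exactly what you will iterate on in the lab.

% Illustrative LQR weights; assumes Ai and Bi from the earlier sketch.
Q = diag([200 150 1 1 50 50]);  % penalize pitch, yaw and their integrals most heavily
R = diag([1 1]);                % r1, r2 > 0 penalize the two motor voltages
K = lqr(Ai, Bi, Q, R);          % 2 x 6 state feedback gain
eig(Ai - Bi*K)                  % every eigenvalue should lie in the open left half plane

Raising q_5 and q_6 penalizes the integrated pitch and yaw errors more heavily, which typically speeds up tracking at the cost of larger voltages; raising r_1 or r_2 does the opposite.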

Recall that the place command in MATLAB can be used to place the poles of the closed loop system at specified locations in the complex plane. Since A_i is a 6 x 6 matrix, we can place the 6 eigenvalues of A_i − B_iK at any 6 locations in the complex plane. To find the corresponding K, the MATLAB command is

K = place(A_i, B_i, [λ_1, λ_2, λ_3, λ_4, λ_5, λ_6]).

Then {λ_j}_{j=1}^{6} are the 6 eigenvalues of A_i − B_iK.

1.2 State feedback

In this section we will consider a controller that simply feeds back the state. As before, consider the linear system given by
\[
\dot x = A_i x + B_i v - W\begin{bmatrix}\theta_d\\ \psi_d\end{bmatrix}. \tag{14}
\]
Now let us use the state feedback v = −Kx. Substituting this into (14) yields
\[
\dot x = (A_i - B_iK)x - W\begin{bmatrix}\theta_d\\ \psi_d\end{bmatrix}. \tag{15}
\]
Now assume that K is chosen such that A_i − B_iK is stable. One can choose the stabilizing K by either the LQR or the pole placement method. If A_i − B_iK is stable, then we claim that in steady state
\[
\theta_d = \lim_{t\to\infty}\theta(t) \qquad\text{and}\qquad \psi_d = \lim_{t\to\infty}\psi(t). \tag{16}
\]
Because A_i − B_iK is stable, the system in (15) converges to a constant state, that is, x(∞) = lim_{t→∞} x(t) exists. Moreover, since A_i − B_iK is stable, the steady state x(∞) is uniquely determined by
\[
(A_i - B_iK)\,x(\infty) = W\begin{bmatrix}\theta_d\\ \psi_d\end{bmatrix}. \tag{17}
\]
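As a sketch (with illustrative pole locations, and Ai, Bi, W from the earlier listings), the pole placement design and the claim in (16)-(17) can be checked as follows. Complex poles must be supplied in conjugate pairs.

% Illustrative closed-loop pole locations; assumes Ai, Bi, W from the earlier sketches.
p = [-2+2i, -2-2i, -3+1i, -3-1i, -4, -5];
K = place(Ai, Bi, p);                         % 2 x 6 gain placing eig(Ai - Bi*K) at p
theta_d = 10*pi/180;  psi_d = 30*pi/180;
x_inf = (Ai - Bi*K) \ (W*[theta_d; psi_d]);   % steady state from (17)
[x_inf(1), theta_d; x_inf(2), psi_d]          % the first two components match the references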

By consulting the structure of A_i and B_i in (8), we see that
\[
A_i - B_iK = \begin{bmatrix} * & * \\[2pt] \begin{bmatrix} I & 0\end{bmatrix} & 0\end{bmatrix}
\ \text{on}\ \begin{bmatrix}\mathbb{C}^4\\ \mathbb{C}^2\end{bmatrix}
\qquad\text{and}\qquad
W = \begin{bmatrix} 0\\ I\end{bmatrix} : \mathbb{C}^2 \to \begin{bmatrix}\mathbb{C}^4\\ \mathbb{C}^2\end{bmatrix},
\]
where ∗ denotes an unspecified block. Using this structure of A_i − B_iK and W in (17), we arrive at
\[
\theta_d = x_1(\infty) = \lim_{t\to\infty}\theta(t) \qquad\text{and}\qquad \psi_d = x_2(\infty) = \lim_{t\to\infty}\psi(t).
\]
Therefore in steady state, the controller v = −Kx will drive the pitch angle to θ_d and the yaw angle to ψ_d. Finally, it is emphasized that when implementing this controller on the nonlinear system, we will include the feedforward term, that is,
\[
\begin{bmatrix} v_p\\ v_y\end{bmatrix}
 = \begin{bmatrix} v_{pe}(\theta_d)\\ v_{ye}(\theta_d)\end{bmatrix} - Kx
 = \begin{bmatrix} \dfrac{mg\big(h\sin(\theta_d)+r_c\cos(\theta_d)\big)\,l k_{yy}}{l k_{pp}\,l k_{yy}-k_{py}k_{yp}}\\[10pt]
     -\dfrac{mg\big(h\sin(\theta_d)+r_c\cos(\theta_d)\big)\,k_{yp}}{l k_{pp}\,l k_{yy}-k_{py}k_{yp}}\end{bmatrix} - Kx. \tag{18}
\]

1.3 Error feedback response versus state feedback response

We have observed that in this helicopter model, if we use the error feedback v = −Kx̃ then we get a fast rise time with an overshoot. On the other hand, if we use the state feedback v = −Kx then we get a slower rise time but no overshoot. The settling time for both methods is approximately the same. In some applications one may prefer a faster rise time with overshoot, whereas in other applications one may not want an overshoot at all.
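This qualitative difference can be previewed in simulation before the pre-lab Simulink runs. The sketch below (illustrative references; Ai, Bi, W and a gain K from the earlier listings) steps the desired pitch angle and compares the pitch response of the error feedback closed loop (10) with that of the state feedback closed loop (15) using lsim. Whether the overshoot actually appears depends on the gain you chose.

% Compare the two architectures for a step in theta_d; assumes Ai, Bi, W, K from earlier sketches.
theta_d = 20*pi/180;  psi_d = 0;  r = [theta_d; psi_d];
t = (0:0.01:20)';                                     % 20 s of simulation time

% Error feedback (10): xtilde' = (Ai-Bi*K)*xtilde + (Ai*xd - W*r), with x = xtilde + xd.
% Starting from rest at zero angles, x(0) = 0 so xtilde(0) = -xd.
xd = [r; zeros(4,1)];
sys_e = ss(Ai - Bi*K, Ai*xd - W*r, eye(6), 0);        % constant forcing enters as one input
xe = lsim(sys_e, ones(size(t)), t, -xd) + repmat(xd', numel(t), 1);

% State feedback (15): x' = (Ai-Bi*K)*x - W*r
sys_s = ss(Ai - Bi*K, -W, eye(6), 0);
xs = lsim(sys_s, repmat(r', numel(t), 1), t, zeros(6,1));

plot(t, xe(:,1)*180/pi, t, xs(:,1)*180/pi, t, theta_d*180/pi*ones(size(t)), 'k--')
legend('error feedback', 'state feedback', '\theta_d'), xlabel('time (s)'), ylabel('pitch (deg)')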

2 Pre-lab: Due at the beginning of the experiment

The following is due at the beginning of the experiment. You will not be allowed to perform the experiment if the pre-lab is not completed.

(i) Load the following files in MATLAB:

heli_model.mdl
heli_model2.mdl
setup_heli_lqr_parameters.m
HELI_2D_ABCD_eqns.m

(ii) Now run the MATLAB file setup_lab_lqr_parameters.m. Form the matrices A_i and B_i in MATLAB. Using either the LQR or the pole placement method, find the state feedback gain K. In the MATLAB command line, enter this gain as Ki.

(iii) Run the Simulink file heli_model.mdl. This Simulink file simulates the helicopter system using the error feedback, that is, v = v_e − Kx̃; see (9). Adjust your gain (preferably by changing the pole positions or the weights of the LQR method) until you find the system response acceptable. Bring this gain K and the simulation results to the lab with you.

(iv) Run the Simulink file heli_model2.mdl. This Simulink file simulates the helicopter system using the state feedback, that is, v = v_e − Kx; see (18). Adjust your gain until you find the system response acceptable. Bring this gain K and the simulation results to the lab with you.

You will turn in two sets of gains and simulation results, one from error feedback and one from state feedback, at the beginning of the experiment. It is emphasized that the absolute values of all of your gains must be less than forty.
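Before bringing your gain to the lab, a quick check of the magnitude requirement (assuming the gain is stored in Ki) is:

max(abs(Ki(:)))                                   % largest entry magnitude of the gain
assert(all(abs(Ki(:)) < 40), ...
       'Some entries of Ki have magnitude >= 40; retune Q, R or the pole locations.')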

3 Controller testing and experimental improvement

In this part you will test the two controllers obtained in the pre-lab on the actual helicopter. The idea is that you will test both of your feedback controllers (error feedback and state feedback) and then modify the gains (by changing the pole positions in the place command or the weights of the LQR in the lqr command) to improve the actual performance. For each of your best controllers (error and state feedback) you will save 4 pitch, 4 yaw, and 4x2 voltage signals as described in Parts (II) and (IV) below. The following steps are used to test and improve the two feedback controllers determined in the pre-lab.

Caution Notice: Depending on your controller, the closed loop system may be unstable. Anytime you click Start in WinCon, be ready to hit the Stop button if you notice any evidence of instability, that is, oscillations with increasing magnitude.

(I) Iteration for the error-feedback controller

1. Open WinCon and MATLAB from WinCon.
2. Run setup_heli_lqr_parameters.m.
3. Enter Ki corresponding to the error-feedback controller designed in your pre-lab.
4. In WinCon, open the file Heli_LQR.wcp. In MATLAB, open the file Heli_LQR.mdl.
5. Set the joystick/computer selector to 2.
6. Open the Pitch, Yaw, and Voltage scopes. Set the buffer to 100 seconds.

Starting with the Ki that you designed in the pre-lab for error feedback, modify the gains (using the place or lqr commands) by iterating steps 7-9 below until you find a good helicopter response. Do not iterate for more than 15 minutes.

7. In the computer-generated-input block, set the gain of the square wave pitch signal generator to 20 degrees and set the other gains to 0.
8. Click Start. Take 90 seconds of data. Click Stop. Analyze the response. Set the gain back to 0.
9. Repeat steps 7 and 8 but setting the square wave yaw gain to 30 degrees.

(II) Recording data for the error-feedback controller

1. Using the best Ki that you have experimentally determined, repeat steps 7 to 9, but this time save at least 90 seconds of the pitch, yaw, and motor voltages for each of the two tested square wave reference signals.
2. Repeat step 1, but this time change the input to the pitch and yaw signal generators to the sine wave. The gains of the square wave signals must be set to 0. By this time you should have saved a total of 4 pitch, 4 yaw, and 4x2 voltage signals.

(III) Iteration for the state-feedback controller

1. Close all Simulink models. Go to the WinCon file menu and click Close.
2. Enter Ki corresponding to the state-feedback controller designed in your pre-lab.
3. In WinCon, open the file Heli_LQR2.wcp. In MATLAB, open the file Heli_LQR2.mdl.
4. Set the joystick/computer selector to 2 in the Simulink model main window.
5. Open the Pitch, Yaw, and Voltage scopes. Set the buffer to 100 seconds.

Starting with the Ki that you designed in the pre-lab for state feedback, modify the gains (using the place or lqr commands) by iterating steps 6-8 below until you find a good helicopter response. Do not iterate for more than 15 minutes.

6. In the computer-generated-input block, set the gain of the square wave pitch signal generator to 20 degrees and set the other gains to 0.
7. Click Start. Take 90 seconds of data. Click Stop. Analyze the response. Set the gain back to 0.
8. Repeat steps 6 and 7 but setting the square wave yaw gain to 30 degrees.

(IV) Recording data for the state-feedback controller

1. Using the best Ki that you have experimentally determined, repeat steps 6 to 8 from part (III), but this time save at least 90 seconds of the pitch, yaw, and motor voltages for each of the two tested square wave reference signals.
2. Repeat step 1, but this time change the input to the pitch and yaw signal generators to the sine wave. The gains of the square wave signals must be set to 0. By this time you should have saved a total of 8 pitch, 8 yaw, and 8x2 voltage signals.

4 Closed loop flight testing

In this part of the lab you can test your best feedback controller by flying the helicopter with the joystick and trying to follow a path. Here you need to choose only one of the two controller architectures used previously.

1. Close any open Simulink model. Go to the WinCon file menu and click Close.
2. Reopen Heli_LQR.mdl or Heli_LQR2.mdl.
3. Change the Joystick/Computer selector to 1.
4. Open the scopes Pitch+Reference and Yaw+Reference. Set the buffer to 120 seconds. In the Pitch+Reference scope set the axis fixed range to [-30, 30]. In the Yaw+Reference scope set the axis fixed range to [-90, 90].
5. In the computer-generated-input block, set the gain of the pitch sine signal generator to 20 degrees and the other gains to 0.
6. Click Start in WinCon. The green lines in the scopes represent the actual pitch and yaw of the helicopter. The red lines are the reference signals. You need to track the reference signal by flying the helicopter with the joystick. Once you finish, set the gain back to 0.
7. Repeat steps 5 and 6 but change the yaw sine wave gain to 30 degrees.
8. Saving this data is not required.

5 Open loop flight

In this part of the experiment you will try to fly the helicopter with no control system in place. In other words, you will try to fly the helicopter open loop. As expected, it will be difficult to fly the helicopter with no control system.

1. Close any open Simulink model. Go to the WinCon file menu and click Close.
2. Open the file Heli_Open_Loop.wcp from WinCon and open Heli_Open_Loop.mdl from MATLAB.
3. Open the scopes Pitch+Reference and Yaw+Reference. Set the buffer to 120 seconds. In the Pitch+Reference scope set the axis fixed range to [-30, 30]. In the Yaw+Reference scope set the axis fixed range to [-90, 90].
4. In the computer-generated-input block, set the gain of the pitch sine signal generator to 20 degrees and the other gains to 0.
5. Click Start in WinCon. The green lines in the scopes represent the actual pitch and yaw of the helicopter. The red lines are the reference signals. You need to track the reference by flying the helicopter with the joystick. Once you finish, set the gain back to 0.
6. Repeat steps 4 and 5 but change the yaw sine gain to 30 degrees.
7. Saving data is not required.

6 The laboratory report

1. Discuss the two tested controller architectures. Describe each term, group the terms into feedback and feedforward terms, and describe the difference between these two groups as well as the difference between the two tested controller architectures.
2. Show two figures to compare the helicopter response to the pitch pulse and yaw pulse commands (not the sinusoid commands) using both controllers. To be precise, the first plot will contain only the pitch pulse command and two pitch responses, one for each controller architecture.

Do not include yaw data. The second plot will contain only the yaw pulse command and two yaw responses, one for each controller architecture. Do not include pitch data. Make sure the reference input in each plot is the same for all time. These two plots must occupy no more than one page. Choose the controller you think performs better, and justify your choice.
3. Hand in the plots from Part (II) if you chose the error feedback architecture, or the plots from Part (IV) if you chose the state feedback architecture. There are three plots for each reference signal: (i) the pitch, (ii) the yaw, and (iii) the two voltages. Put the three plots for a given reference signal on the same page. All of these plots must occupy four pages in total.
4. Analyze the voltages generated by the controller and determine if there was saturation. Discuss the effect of saturation on the helicopter response (whether you had saturation or not).
5. Write a section describing what is good and bad about the controller that you have designed. You may want to include settling time and overshoot in your discussion of why you think this is a good controller. Describe how you could improve this controller if you had more time. You can even hand in more plots to convince the reader that you have designed a good controller, but for each additional plot you must add an additional sentence with the corresponding explanation.
6. For the last part of the lab report, discuss what you learned about flying the helicopter with the joystick using the feedback controller and open loop.
7. Discuss the need for integrators in the pitch and yaw.
8. Discuss the differences and similarities between the state space controllers (place or lqr methods) and the PID controller.
9. Between the state space and PID controllers you have designed, which controller architecture provided the better helicopter response? Use data obtained from the previous lab to compare both pitch pulse responses (both the pitch and yaw angles due to the pitch pulse command). Show both pitch responses in one figure, and both yaw responses in another figure. These two figures must occupy no more than one page. Make sure they correspond to the same reference input. Notice that both labs used the same set of reference signals.
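A minimal MATLAB sketch for building one of these comparison figures is given below. The variable names (t, pitch_ref, pitch_error_fb, pitch_state_fb) are placeholders for whatever names you used when saving the WinCon data; the same pattern works for the yaw figure and for the PID comparison.

% Placeholder variable names for signals saved from WinCon; substitute your own.
% Each signal is assumed to be a column vector sampled on the common time vector t.
figure, hold on
plot(t, pitch_ref,      'k--')   % pitch pulse command (identical for both runs)
plot(t, pitch_error_fb, 'b')     % pitch response with the error feedback controller
plot(t, pitch_state_fb, 'r')     % pitch response with the state feedback controller
xlabel('time (s)'), ylabel('pitch (deg)')
legend('reference', 'error feedback', 'state feedback')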

7 Appendix: The convergence of x

In this section we will show how integral controllers can be used to drive a subspace of the state space to specified locations. Consider the state space system determined by
\[
\dot x = Ax + Bv. \tag{19}
\]
As expected, A is an operator on the state space X and B maps the input space V into X. Assume that R is a subspace of X. Then X admits an orthogonal decomposition of the form X = R ⊕ M. Let D be any invertible operator on R. Consider the state space system given by
\[
\begin{bmatrix}\dot x\\ \dot\xi\end{bmatrix}
= \begin{bmatrix} A & 0\\ D P_R & 0\end{bmatrix}\begin{bmatrix} x\\ \xi\end{bmatrix}
+ \begin{bmatrix} B\\ 0\end{bmatrix} v
- \begin{bmatrix} 0\\ D\end{bmatrix} r. \tag{20}
\]
Notice that we have added the state ξ in R to the system. So X ⊕ R is the state space for the system in (20). The signal r(t) in R is viewed as the reference signal. The orthogonal projection onto a subspace H is denoted by P_H. To simplify some notation, we set
\[
z = \begin{bmatrix} x\\ \xi\end{bmatrix},
\qquad
A_i = \begin{bmatrix} A & 0\\ D P_R & 0\end{bmatrix} \ \text{on}\ \begin{bmatrix} X\\ R\end{bmatrix},
\qquad
B_i = \begin{bmatrix} B\\ 0\end{bmatrix} : V \to \begin{bmatrix} X\\ R\end{bmatrix}
\qquad\text{and}\qquad
W = \begin{bmatrix} 0\\ D\end{bmatrix} : R \to \begin{bmatrix} X\\ R\end{bmatrix}.
\]
Using this notation, the state space system (20) can be rewritten as
\[
\dot z = A_i z + B_i v - W r. \tag{21}
\]
Finally, the state space Z for (21) admits the following orthogonal decomposition:
\[
Z = X \oplus R = R \oplus M \oplus R. \tag{22}
\]
Now assume that one can find an operator K mapping Z = X ⊕ R into V such that A_i − B_iK is stable.

Let r_d be any fixed vector in R ⊂ X = R ⊕ M, and set
\[
z_d = \begin{bmatrix} r_d\\ 0\\ 0\end{bmatrix} \in \begin{bmatrix} R\\ M\\ R\end{bmatrix}
\qquad\text{and}\qquad
\tilde z(t) = z(t) - z_d = \begin{bmatrix} x(t) - r_d\\ \xi(t)\end{bmatrix} \in \begin{bmatrix} X\\ R\end{bmatrix}.
\]
It is emphasized that z, z̃ and z_d are all vectors in Z. Moreover, z̃ = z − z_d is the error between z and z_d. Now consider the state feedback v = −Kz̃ with r(t) = r_d, that is,
\[
\dot z = A_i z - B_i K\tilde z - W r_d. \tag{23}
\]
We claim that z(t) converges to a constant vector z(∞) as t tends to infinity. Moreover, r_d is the first component of z(∞), that is, z(∞) admits a decomposition of the form
\[
z(\infty) = \begin{bmatrix} r_d\\ *\\ *\end{bmatrix} \in \begin{bmatrix} R\\ M\\ R\end{bmatrix}, \tag{24}
\]
where ∗ indicates an unspecified entry. In other words, we have r_d = P_R z(∞). To verify this, recall that z_d is a constant vector. Hence
\[
\dot{\tilde z} = \dot z = (A_i - B_iK)\tilde z + A_i z_d - W r_d. \tag{25}
\]
Because A_i − B_iK is stable, z̃(t) converges to a constant vector z̃(∞) as t tends to infinity. In fact,
\[
\tilde z(\infty) = (A_i - B_iK)^{-1}\big(W r_d - A_i z_d\big). \tag{26}
\]
Since A_i − B_iK is stable, A_i − B_iK is invertible. To obtain the formula for z̃(∞) in (26), observe that (25) yields
\[
\tilde z(t) = e^{(A_i - B_iK)t}\tilde z(0) + \int_0^t e^{(A_i - B_iK)(t-\sigma)}\big(A_i z_d - W r_d\big)\,d\sigma
\]
\[
= e^{(A_i - B_iK)t}\tilde z(0) - (A_i - B_iK)^{-1}\Big[e^{(A_i - B_iK)(t-\sigma)}\big(A_i z_d - W r_d\big)\Big]_{\sigma=0}^{\sigma=t}
\]
\[
= (A_i - B_iK)^{-1}\big(W r_d - A_i z_d\big) + e^{(A_i - B_iK)t}\tilde z(0) + (A_i - B_iK)^{-1}e^{(A_i - B_iK)t}\big(A_i z_d - W r_d\big).
\]
By letting t approach infinity, we obtain
\[
\tilde z(\infty) = \lim_{t\to\infty}\tilde z(t) = (A_i - B_iK)^{-1}\big(W r_d - A_i z_d\big).
\]

Therefore z̃(t) converges to a constant vector z̃(∞) and (26) holds. Since z(t) = z̃(t) + z_d, it follows that z(t) converges to a constant vector z(∞). Finally, it is noted that z(∞) = z̃(∞) + z_d.

We claim that P_R z̃(∞) = 0, or equivalently, r_d = P_R z(∞). In other words, z(∞) admits a decomposition of the form in (24). To see this, decompose K into K = [K_1  K_2] mapping X ⊕ R into V. Then
\[
A_i - B_iK = \begin{bmatrix} A - BK_1 & -BK_2\\ D P_R & 0\end{bmatrix}
\ \text{on}\ \begin{bmatrix} X\\ R\end{bmatrix}.
\]
Since R is a subspace of X = R ⊕ M, the operator A_i − B_iK also admits a matrix decomposition of the form
\[
A_i - B_iK = \begin{bmatrix} * & T\\ D & 0\end{bmatrix} : \begin{bmatrix} R\\ M \oplus R\end{bmatrix} \to \begin{bmatrix} R \oplus M\\ R\end{bmatrix}.
\]
Since A_i − B_iK is invertible, the operator T mapping M ⊕ R into R ⊕ M must also be invertible. Observe that the last component of Wr_d − A_i z_d equals zero, that is, 0 = P_R(Wr_d − A_i z_d). This readily implies that the equation
\[
\begin{bmatrix} * & T\\ D & 0\end{bmatrix}\begin{bmatrix} 0\\ g\end{bmatrix}
= \begin{bmatrix} Tg\\ 0\end{bmatrix}
= W r_d - A_i z_d
\]
has a unique solution g in M ⊕ R. In fact,
\[
\begin{bmatrix} P_{R \oplus M}\big(W r_d - A_i z_d\big)\\ 0\end{bmatrix} = W r_d - A_i z_d
\qquad\text{and}\qquad
g = T^{-1} P_{R \oplus M}\big(W r_d - A_i z_d\big).
\]
In other words, the unique solution of (A_i − B_iK)z̃ = Wr_d − A_i z_d is given by
\[
\tilde z(\infty) = \begin{bmatrix} 0\\ g\end{bmatrix}.
\]
Therefore P_R z̃(∞) = 0. This yields the form of z(∞) in (24) and completes the proof.

Finally, it is noted that if {A, B} is controllable, then it does not necessarily follow that {A_i, B_i} is controllable. For a counterexample, consider the controllable pair
\[
A = \begin{bmatrix} 1 & 1\\ 2 & 1\end{bmatrix}
\qquad\text{and}\qquad
B = \begin{bmatrix} 1\\ 1\end{bmatrix}.
\]
In this case, A is unstable. Moreover, in this setting
\[
A_i = \begin{bmatrix} 1 & 1 & 0\\ 2 & 1 & 0\\ 1 & 0 & 0\end{bmatrix}
\qquad\text{and}\qquad
B_i = \begin{bmatrix} 1\\ 1\\ 0\end{bmatrix}.
\]

Finally, the pair {A_i, B_i} is not controllable.
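This counterexample is easy to verify in MATLAB (a small sketch; it simply compares the ranks of the two controllability matrices):

% The pair {A, B} is controllable, but the integrator-augmented pair {Ai, Bi} is not.
A  = [1 1; 2 1];   B  = [1; 1];
Ai = [A, zeros(2,1); 1 0 0];   % augment with one integrator state
Bi = [B; 0];
rank(ctrb(A,  B))              % returns 2, so {A, B} is controllable
rank(ctrb(Ai, Bi))             % returns 2 < 3, so {Ai, Bi} is not controllable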