
Topic #16
16.30/31 Feedback Control Systems
Add reference inputs for the DOFB case
Reading: FPE 7.8, 7.9

Fall 2010 16.30/31 16-2

Reference Input - II

On page 15-6, the compensator was implemented with a reference command by changing to feedback on the error

  e(t) = r(t) - y(t)

rather than on y(t):

  r --> e --> Gc(s) --> u --> Gp(s) --> y

so that

  u(t) = Gc(s)e(t) = Gc(s)(r(t) - y(t))

Intuitively appealing because the same approach is used in classical control, but it turns out not to be the best.

Can improve the implementation by using a more general form:

  ẋc(t) = Ac xc(t) + Bc y(t) + G r(t)
  u(t) = -Cc xc(t) + N̄ r(t)

Now the controller explicitly has two inputs, y(t) and r(t).
N̄ performs the same role that we used it for previously.
G is introduced as an extra degree of freedom in the problem.
It turns out that setting G = B N̄ is a particularly good choice.
The following presents some observations on the impact of G.

Fall 2010 16.30/31 16-3

First: this generalization does not change the closed-loop poles of the system, regardless of the selection of G and N̄, since

  ẋ(t) = Ax(t) + Bu(t),   y(t) = Cx(t)
  ẋc(t) = Ac xc(t) + Bc y(t) + G r(t)
  u(t) = -Cc xc(t) + N̄ r(t)

  ⇒  [ ẋ(t)  ]   [  A    -B Cc ] [ x(t)  ]   [ B N̄ ]
     [ ẋc(t) ] = [ Bc C    Ac  ] [ xc(t) ] + [  G  ] r(t)

     y(t) = [ C  0 ] [ x(t) ; xc(t) ]

So the closed-loop poles are the eigenvalues of

  [  A    -B Cc ]
  [ Bc C    Ac  ]

(same as 15-7, except in a different place, giving the same closed-loop eigenvalues) regardless of the choice of G and N̄: G and N̄ impact the forward path, not the feedback path.

Second: if N̄ = 0 and G = -L = -Bc, then we recover the original implementation on 15-6, since the controller reduces to:

  ẋc(t) = Ac xc(t) + Bc(y(t) - r(t)) = Ac xc(t) + Bc(-e(t))
  u(t) = -Cc xc(t)

With Gc(s) = Cc(sI - Ac)^{-1} Bc, this compensator can be written as u(t) = Gc(s)e(t), as before (since the negative signs cancel).
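The pole-invariance claim is easy to check numerically. A minimal sketch in Python (the notes use MATLAB) on a hypothetical 2-state plant, G(s) = 1/(s^2 + s + 1), with placeholder pole-placement gains K and L; the closed-loop eigenvalues come out the same for any (G, N̄):

```python
import numpy as np

# Hypothetical plant 1/(s^2 + s + 1); K and L are placeholder gains
A = np.array([[0., 1.], [-1., -1.]])
B = np.array([[0.], [1.]])
C = np.array([[1., 0.]])
K = np.array([[31., 7.]])     # places eig(A - BK) at -4 +/- 4j
L = np.array([[20.], [89.]])  # places eig(A - LC) at -10, -11
Ac = A - B @ K - L @ C        # compensator: Bc = L, Cc = K

def cl_poles(G, Nbar):
    # closed-loop dynamics [[A, -B Cc], [Bc C, Ac]]; G and Nbar only
    # enter the input matrix [B*Nbar; G], never the dynamics matrix
    Acl = np.block([[A, -B @ K], [L @ C, Ac]])
    return np.sort_complex(np.linalg.eigvals(Acl))

target = np.sort_complex(np.array([-4+4j, -4-4j, -10+0j, -11+0j]))
assert np.allclose(cl_poles(B * 1.0, 1.0), target)  # "new" choice G = B*Nbar
assert np.allclose(cl_poles(-L, 0.0), target)       # "original" choice G = -L
```

The poles are always the union of the regulator and estimator poles, exactly as the separation structure predicts.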

Fall 2010 16.30/31 16-4

Third: given this extra freedom, what is the best way to use it? One good objective is to select G and N̄ so that the state estimation error is independent of r(t). With this choice, changes in r(t) do not tend to cause such large transients in x̃(t).

For this analysis, take x̃(t) = x(t) - xc(t), since xc(t) ≡ x̂(t):

  d/dt x̃(t) = ẋ(t) - ẋc(t)
    = Ax(t) + Bu(t) - (Ac xc(t) + Bc y(t) + G r(t))
    = Ax(t) + B(-Cc xc(t) + N̄ r(t)) - ({A - B Cc - Bc C} xc(t) + Bc C x(t) + G r(t))

The B Cc xc(t) terms cancel, so

  d/dt x̃(t) = (A - Bc C)x(t) + B N̄ r(t) - ({A - Bc C} xc(t) + G r(t))
    = (A - Bc C) x̃(t) + B N̄ r(t) - G r(t)
    = (A - Bc C) x̃(t) + (B N̄ - G) r(t)

Thus we can eliminate the effect of r(t) on x̃(t) by setting G ≡ B N̄.
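The same conclusion can be read off numerically: transform the closed-loop input matrix into (x, x̃) coordinates and inspect the rows that drive x̃. A Python sketch with a hypothetical 2-state plant and placeholder values:

```python
import numpy as np

# Hypothetical 2-state plant (placeholder values, not from the notes' code)
A = np.array([[0., 1.], [-1., -1.]])
B = np.array([[0.], [1.]])
C = np.array([[1., 0.]])
L = np.array([[20.], [89.]])    # estimator gain, Bc = L
Nbar = 2.5                      # arbitrary nonzero scalar
I2, Z2 = np.eye(2), np.zeros((2, 2))

# T maps [x; xc] into [x; xtilde] with xtilde = x - xc (note T = T^-1)
T = np.block([[I2, Z2], [I2, -I2]])

def r_drive_on_error(G):
    Bcl = np.vstack([B * Nbar, G])   # r-input column in [x; xc] coordinates
    return (T @ Bcl)[2:]             # rows that drive the error states

# G = B*Nbar removes the reference from the error dynamics: B*Nbar - G = 0
assert np.allclose(r_drive_on_error(B * Nbar), 0)
# the original choice G = -L leaves it in: B*Nbar + L != 0
assert not np.allclose(r_drive_on_error(-L), 0)
```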

Fall 2010 16.30/31 16-5

With this choice, the controller:

  ẋc(t) = (A - BK - LC) xc(t) + L y(t) + B N̄ r(t)
  u(t) = -K xc(t) + N̄ r(t)

can be rewritten as:

  ẋc(t) = (A - LC) xc(t) + L y(t) + B u(t)
  u(t) = -K xc(t) + N̄ r(t)

So the control is computed using the reference before it is applied, and that control is applied to both the system and the estimator.

Fourth: if this generalization does not change the closed-loop poles of the system, then what does it change? Recall that the zeros of the SISO y/r transfer function solve:

  general:   det [ sI - A    B Cc     B N̄ ]
                 [ -Bc C    sI - Ac     G  ]  = 0
                 [    C        0        0  ]

  original:  det [ sI - A    B Cc      0  ]
                 [ -Bc C    sI - Ac   -Bc ]  = 0
                 [    C        0       0  ]

  new:       det [ sI - A    B Cc     B N̄ ]
                 [ -Bc C    sI - Ac   B N̄ ]  = 0
                 [    C        0        0  ]
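The equivalence of the two controller realizations above is a one-line matrix identity (substitute u back into the second form); a quick check with a hypothetical plant and placeholder gains:

```python
import numpy as np

# placeholder plant and gains; any K, L of matching shapes give the same result
A = np.array([[0., 1.], [-1., -1.]])
B = np.array([[0.], [1.]])
C = np.array([[1., 0.]])
K = np.array([[31., 7.]])
L = np.array([[20.], [89.]])

# form (i):  xc' = (A - BK - LC) xc + L y + B*Nbar r
A_i = A - B @ K - L @ C
# form (ii): xc' = (A - LC) xc + L y + B u, with u = -K xc + Nbar r;
# substituting u gives (A - LC - BK) xc + L y + B*Nbar r
A_ii = (A - L @ C) - B @ K

assert np.allclose(A_i, A_ii)  # identical dynamics, so the realizations match
```

The y and r input paths are literally the same term in both forms, so checking the state matrix is all that is needed.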

Fall 2010 16.30/31 16-6

Hard to see how this helps, but consider the scalar new case (assuming no plant zeros):

  C((sI - Ac) B N̄ - B Cc B N̄) = 0
  ⇒ C B N̄ (sI - Ac - B Cc) = 0
  ⇒ C B N̄ (sI - [A - Bc C]) = 0

using Ac + B Cc = A - Bc C. So the zero of the y/r path is the root of sI - [A - Bc C] = 0, which is the pole of the estimator.

So by setting G = B N̄, as in the new case, the estimator dynamics are canceled out of the response of the system to a reference command. The cancelation does not occur with the original implementation.

The SISO, multi-dimensional state case is handled in the appendix.

So in summary, if the SISO system is G(s) = b(s)/a(s), then with DOFB control the closed-loop transfer function will be of the form:

  T(s) = Y(s)/R(s) = Kg γ(s) b(s) / (Φe(s) Φc(s))

The designer determines Φc(s) and Φe(s).
The plant zeros b(s) remain, unless canceled out.
There are many choices for γ(s); the one called "new" sets γ(s) ≡ Φe(s) to cancel the estimator dynamics from the response.
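The cancelation can be confirmed numerically by computing the y/r zeros for both implementations and comparing them with the estimator poles. A Python sketch on a hypothetical 2-state plant with placeholder gains, using scipy's ss2tf in place of the hand calculation:

```python
import numpy as np
from scipy.signal import ss2tf

# Hypothetical plant 1/(s^2 + s + 1) with placeholder gains
A = np.array([[0., 1.], [-1., -1.]])
B = np.array([[0.], [1.]])
C = np.array([[1., 0.]])
K = np.array([[31., 7.]])     # regulator poles -4 +/- 4j  (Phi_c)
L = np.array([[20.], [89.]])  # estimator poles -10, -11   (Phi_e)
Nbar = 2.0
Ac = A - B @ K - L @ C

Acl = np.block([[A, -B @ K], [L @ C, Ac]])
Ccl = np.hstack([C, np.zeros((1, 2))])

def yr_zeros(Bcl):
    num, _ = ss2tf(Acl, Bcl, Ccl, np.zeros((1, 1)))
    num = num.flatten()
    num[np.abs(num) < 1e-6 * np.abs(num).max()] = 0.0  # remove numerical fuzz
    return np.sort(np.roots(np.trim_zeros(num, 'f')))

z_new = yr_zeros(np.vstack([B, B]) * Nbar)            # G = B*Nbar
z_orig = yr_zeros(np.vstack([np.zeros((2, 1)), -L]))  # G = -L, Nbar = 0

# new case: the y/r zeros sit exactly on the estimator poles, so Phi_e cancels
assert np.allclose(z_new, [-11.0, -10.0], atol=1e-4)
# original case: a single zero, nowhere near the estimator poles
assert z_orig.size == 1 and not np.isclose(z_orig[0].real, -10.0, atol=1.0)
```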

Fall 2010 16.30/31 16-7

Fifth: select N̄ to ensure that the steady-state error is zero. As before, this can be done by selecting N̄ so that the DC gain of the closed-loop y/r transfer function is 1:

  (y/r)|DC = -[ C  0 ] [  A    -B Cc ]^{-1} [ B ] N̄ ≡ 1
                       [ Bc C    Ac  ]      [ B ]

The new implementation of the compensator is

  ẋc(t) = Ac xc(t) + Bc y(t) + B N̄ r(t)
  u(t) = -Cc xc(t) + N̄ r(t)

which has two separate inputs, y(t) and r(t).
Selection of N̄ ensures good steady-state performance; the new implementation gives better transient performance.
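The N̄ selection reduces to one linear solve. A sketch with a hypothetical 2-state plant and placeholder gains:

```python
import numpy as np

# Hypothetical plant 1/(s^2 + s + 1) with placeholder gains
A = np.array([[0., 1.], [-1., -1.]])
B = np.array([[0.], [1.]])
C = np.array([[1., 0.]])
K = np.array([[31., 7.]])     # regulator poles -4 +/- 4j, Phi_c(0) = 32
L = np.array([[20.], [89.]])  # estimator poles -10, -11
Ac = A - B @ K - L @ C

Acl = np.block([[A, -B @ K], [L @ C, Ac]])
Ccl = np.hstack([C, np.zeros((1, 2))])
Bcol = np.vstack([B, B])   # with G = B*Nbar, r enters both loops through B

# DC gain of y/r is Ccl (-Acl)^-1 Bcol * Nbar; choose Nbar to make it one
dc = (Ccl @ np.linalg.solve(-Acl, Bcol)).item()
Nbar = 1.0 / dc

assert np.isclose((Ccl @ np.linalg.solve(-Acl, Bcol * Nbar)).item(), 1.0)
# here the estimator dynamics cancel and the plant numerator is 1,
# so Nbar reduces to Phi_c(0) = 32
assert np.isclose(Nbar, 32.0)
```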

Fall 2010 16.30/31 16-8

Appendix: Zero Calculation

The calculation on 16-5 requires that we find (assume r and y are scalars)

  det [ sI - Acl   Bcl ]
      [   Ccl       0  ]
  = det(sI - Acl) det(0 - Ccl (sI - Acl)^{-1} Bcl)
  = -Φe(s) Φc(s) Ccl (sI - Acl)^{-1} Bcl

(the overall sign does not affect the zero locations). But finding (sI - Acl)^{-1} can be tricky; it is simplified by the following steps. First note that if

  T = [ I   0 ]
      [ I  -I ]

then T = T^{-1}, so

  (sI - Acl)^{-1} = T T^{-1} (sI - Acl)^{-1} T T^{-1} = T (sI - T Acl T)^{-1} T

where

  T Acl T ≡ Ā = [ A - BK     BK   ]
                [    0     A - LC ]

Now use the fact that

  [ F   H ]^{-1}   [ F^{-1}   -F^{-1} H G^{-1} ]
  [ 0   G ]      = [    0            G^{-1}    ]

to get

  (sI - Acl)^{-1}
  = T [ (sI - (A-BK))^{-1}   (sI - (A-BK))^{-1} BK (sI - (A-LC))^{-1} ] T
      [          0                        (sI - (A-LC))^{-1}          ]

Fall 2010 16.30/31 16-9

Now consider the new case, Bcl = [B N̄ ; B N̄]:

  [ C  0 ] (sI - Acl)^{-1} [ B N̄ ]
                           [ B N̄ ]

  = [ C  0 ] T [ (sI-(A-BK))^{-1}   (sI-(A-BK))^{-1} BK (sI-(A-LC))^{-1} ] T [ B N̄ ]
               [         0                      (sI-(A-LC))^{-1}         ]   [ B N̄ ]

  = [ C  0 ] [ (sI-(A-BK))^{-1}   (sI-(A-BK))^{-1} BK (sI-(A-LC))^{-1} ] [ B N̄ ]
             [         0                      (sI-(A-LC))^{-1}         ] [  0   ]

  = C (sI - (A - BK))^{-1} B N̄
  = C adj[sI - (A - BK)] B N̄ Φc(s)^{-1}

using [C 0]T = [C 0] and T[B N̄ ; B N̄] = [B N̄ ; 0]. Thus in the new case

  det [ sI - Acl   Bcl ]
      [   Ccl       0  ] = -Φe(s) Φc(s) Ccl (sI - Acl)^{-1} Bcl
                         = -Φe(s) C adj[sI - (A - BK)] B N̄

Whereas in the original case, Bcl = [0 ; -L] and T[0 ; -L] = [0 ; L], so

  [ C  0 ] (sI - Acl)^{-1} [  0 ]
                           [ -L ]
  = C (sI - (A - BK))^{-1} BK (sI - (A - LC))^{-1} L
  = C adj[sI - (A - BK)] BK adj[sI - (A - LC)] L Φe(s)^{-1} Φc(s)^{-1}

thus in the original case

  det [ sI - Acl   Bcl ]
      [   Ccl       0  ] = -C adj[sI - (A - BK)] BK adj[sI - (A - LC)] L

So the new case has Φe(s) in the numerator, which cancels the estimator dynamics out of the closed-loop response, but the original case does not.

Fall 2010 16.30/31 16-10

FF Example 1

  G(s) = (8)(14)(20) / ((s + 8)(s + 14)(s + 20))

Method #1: original implementation.
Method #2: original implementation, with the reference input scaled to ensure that the DC gain of y/r is 1.
Method #3: new implementation, with both G = B N̄ and N̄ selected.

[Figure: step responses y(t) for methods #1-#3, t = 0 to 1 sec]

The new method (#3) shows less transient impact of applying r(t).
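The three methods are easy to reproduce in simulation. This sketch uses Python rather than the MATLAB listing at the end of the notes, and a hypothetical stand-in plant with placeholder gains (the notes' examples derive their gains from LQR in the code), but the qualitative comparison is the same:

```python
import numpy as np
from scipy.signal import StateSpace, step

# Hypothetical stand-in plant 1/(s^2 + s + 1) with placeholder gains
A = np.array([[0., 1.], [-1., -1.]])
B = np.array([[0.], [1.]])
C = np.array([[1., 0.]])
K = np.array([[31., 7.]])     # regulator poles -4 +/- 4j
L = np.array([[20.], [89.]])  # estimator poles -10, -11
Ac = A - B @ K - L @ C

Acl = np.block([[A, -B @ K], [L @ C, Ac]])
Ccl = np.hstack([C, np.zeros((1, 2))])
dc = lambda Bcl: (Ccl @ np.linalg.solve(-Acl, Bcl)).item()

B1 = np.vstack([np.zeros((2, 1)), -L])     # method 1: original, unscaled
B2 = B1 / dc(B1)                           # method 2: original, unity DC gain
B3 = np.vstack([B, B]); B3 = B3 / dc(B3)   # method 3: new, G = B*Nbar

t = np.linspace(0.0, 2.5, 600)
y = {m: step(StateSpace(Acl, Bm, Ccl, np.zeros((1, 1))), T=t)[1]
     for m, Bm in ((1, B1), (2, B2), (3, B3))}

assert abs(y[1][-1] - 1.0) > 0.05            # method 1: steady-state error
assert np.isclose(y[2][-1], 1.0, atol=1e-3)  # methods 2, 3 settle at one...
assert np.isclose(y[3][-1], 1.0, atol=1e-3)
assert y[2].max() > y[3].max()               # ...but method 3 overshoots less
```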

Fall 2010 16.30/31 16-11

FF Example 2

  G(s) = 0.94 / (s^2 - 0.0297)

Method #1: original implementation.
Method #2: original implementation, with the reference input scaled to ensure that the DC gain of y/r is 1.
Method #3: new implementation, with both G = B N̄ and N̄ selected.

[Figure: step responses y(t) for methods #1-#3, t = 0 to 5 sec]

Fall 2010 16.30/31 16-12

FF Example 3

  G(s) = (8)(14)(20) / ((s - 8)(s - 14)(s - 20))

Method #1: original implementation.
Method #2: original implementation, with the reference input scaled to ensure that the DC gain of y/r is 1.
Method #3: new implementation, with both G = B N̄ and N̄ selected.

[Figure: step responses y(t) for methods #1-#3, t = 0 to 1 sec]

Fall 2010 16.30/31 16-13

FF Example 4

Revisit the example on 15-9, with G(s) = 1/(s^2 + s + 1), regulator target poles at -4 ± 4j, and estimator target poles at -10. So the dominant poles have ωn ≈ 5.5 and ζ = 0.707. From these closed-loop dynamics, we would expect to see:

  10-90% rise time:        tr = (1 + 1.1ζ + 1.4ζ^2)/ωn = 0.44 s
  Settling time (5%):      ts = 3/(ζωn) = 0.75 s
  Time to peak amplitude:  tp = π/(ωn sqrt(1 - ζ^2)) = 0.79 s
  Peak overshoot:          Mp = e^{-ζωn tp} = 0.04

Now compare the step responses of the new and original implementations.

[Figure: step responses of the original and new implementations, t = 0 to 2.5 s]

Hopefully it is clear that the new implementation meets the expected criteria quite well, and certainly much better than the original.
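The four predictions follow directly from the standard second-order formulas quoted above; a quick numeric check, taking ωn = 4·sqrt(2) ≈ 5.66 from the poles at -4 ± 4j (slightly above the rounded 5.5):

```python
import numpy as np

# dominant poles at -4 +/- 4j  =>  wn = 4*sqrt(2) ~ 5.66, zeta = 0.707
zeta, wn = 0.707, 4 * np.sqrt(2)

tr = (1 + 1.1 * zeta + 1.4 * zeta**2) / wn     # 10-90% rise time
ts = 3 / (zeta * wn)                           # 5% settling time
tp = np.pi / (wn * np.sqrt(1 - zeta**2))       # time to peak amplitude
Mp = np.exp(-zeta * wn * tp)                   # peak overshoot

assert abs(tr - 0.44) < 0.01
assert abs(ts - 0.75) < 0.01
assert abs(tp - 0.79) < 0.01
assert abs(Mp - 0.04) < 0.01
```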

Fall 2010 16.30/31 16-14

Code: Dynamic Output Feedback FF Examples

% Code for Topic 16 in 2010
% Examples of dynamic output feedback
% Jonathan How
%
close all;
% clear all; for model=1:5; dofb_examp2; end
fig=0;
% system
switch model
    case 1
        G=tf(8*14*20,conv([1 8],conv([1 14],[1 20])));name='nexamp1';
        tt=[-20 0 -10 10]*2.5; t=[0:.0025:1];
    case 2
        G=tf(8*14*20,conv([1 -8],conv([1 -14],[1 -20])));name='nexamp2';
        tt=[-10 10 -10 10]*2.5; t=[0:.0025:1];
    case 3
        G=tf(.94,[1 0 -0.0297]);name='nexamp3';
        tt=[-10 10 -10 10]/5; t=[0:.01:5];
    case 4
        G=tf([1 1],conv([1 1],[1 3]));name='nexamp4';
        tt=[-10 10 -10 10]; t=[0:.01:10];
    case 5
        G=tf(conv([1 2],[1 4]),conv(conv([1 1],[1 3]),[1 2*.2*2 2^2 0 0]));
        name='examp5';
    case 6
        G=tf(8*14*20,conv([1 8],conv([1 14],[1 20])));name='nexamp1h';
        tt=[-20 0 -10 10]*2.5; t=[0:.0001:1];
    otherwise
        return
end
%%%
[a,b,c,d]=ssdata(G);na=length(a);
%
if model == 6
    R=.00001;
else
    R=.01;
end
% choose the regulator poles using LQR (see 12-2)
[k,P,reg_poles]=lqr(a,b,c'*c,R);
%hold on
%plot(reg_poles+j*eps,'md','MarkerSize',12,'MarkerFaceColor','m')
%hold off
% design the estimator by doubling the real part of the regulator poles
PP=2*real(reg_poles)+imag(reg_poles)*j; % see 14-13
ke=place(a',c',PP);l=ke';
% now form compensator (see 14-4)
ac=a-b*k-l*c;bc=l;cc=k;dc=0;
% see 15-6
acl=[a -b*cc;bc*c ac];ccl=[c,zeros(1,na)];dcl=0;
% standard case
bcl1=[zeros(na,1); -bc];
% scale up the ref path so that at least the Gcl(s) has unity DC gain
bcl2=[zeros(na,1); -bc]; % original implementation
scale=ccl*inv(-acl)*bcl2;
bcl2=bcl2/scale;
%
bcl3=[b;b]; % new implementation
scale=ccl*inv(-acl)*bcl3;
bcl3=bcl3/scale;
%
Gcl1=ss(acl,bcl1,ccl,dcl); % original implementation, unscaled
Gcl2=ss(acl,bcl2,ccl,dcl); % original implementation, scaled
Gcl3=ss(acl,bcl3,ccl,dcl); % new implementation, scaled
Gc=ss(ac,bc,cc,dc);
fig=fig+1;figure(fig);clf;
f=logspace(-2,3,400);j=sqrt(-1);

g=freqresp(G,f*j);g=squeeze(g);
gc=freqresp(Gc,f*j);gc=squeeze(gc);
gcl1=freqresp(Gcl1,f*j);gcl1=squeeze(gcl1);
gcl2=freqresp(Gcl2,f*j);gcl2=squeeze(gcl2);
gcl3=freqresp(Gcl3,f*j);gcl3=squeeze(gcl3);

figure(fig);fig=fig+1;clf
orient tall
subplot(211)
loglog(f,abs(g),f,abs(gc),'--','LineWidth',2);axis([.1 1e3 1e-2 500])
xlabel('Freq (rad/sec)');ylabel('Mag')
legend('G','G_c','Location','SouthEast');grid
subplot(212)
semilogx(f,180/pi*unwrap(angle(g)),f,180/pi*unwrap(angle(gc)),'--','LineWidth',2);
axis([.1 1e3 -200 50])
xlabel('Freq (rad/sec)');ylabel('Phase (deg)');grid
legend('G','G_c','Location','SouthWest')

L=g.*gc;

figure(fig);fig=fig+1;clf
orient tall
subplot(211)
loglog(f,abs(L),[.1 1e3],[1 1],'LineWidth',2);axis([.1 1e3 1e-2 10])
xlabel('Freq (rad/sec)');ylabel('Mag')
legend('Loop L');
grid
subplot(212)
semilogx(f,180/pi*phase(L.'),[.1 1e3],-180*[1 1],'LineWidth',2);
axis([.1 1e3 -290 0])
xlabel('Freq (rad/sec)');ylabel('Phase (deg)');grid
%
% loop dynamics L = G Gc
% see 15-6
al=[a b*cc;zeros(na) ac];bl=[zeros(na,1);bc];cl=[c zeros(1,na)];dl=0;

figure(fig);fig=fig+1;clf
margin(al,bl,cl,dl);
figure(fig);fig=fig+1;clf
rlocus(al,bl,cl,dl)
hold on;
plot(eig(acl)+eps*j,'bd','MarkerFaceColor','b')
plot(eig(a)+eps*j,'mv','MarkerFaceColor','m')
plot(eig(ac)+eps*j,'rs','MarkerSize',9,'MarkerFaceColor','r')
plot(tzero(ac,bc,cc,dc)+eps*j,'ro','MarkerFaceColor','r')
hold off;grid on
%
% closed-loop freq response
%
figure(15);clf
loglog(f,abs(g),f,abs(gcl1),f,abs(gcl2),'--','LineWidth',2);
axis([.1 1e3 .01 1e2])
xlabel('Freq (rad/sec)');ylabel('Mag')
legend('Plant G','Gcl unscaled','Gcl scaled');grid

figure(fig);fig=fig+1;clf
loglog(f,abs(g),f,abs(gcl1),f,abs(gcl2),f,abs(gcl3),'--','LineWidth',2);
axis([.1 1e3 .01 1e2])
xlabel('Freq (rad/sec)');ylabel('Mag')
legend('Plant G','Gcl1','Gcl2','Gcl3');grid

ZZ=-1+.1*exp(j*[0:.01:1]*2*pi);
figure(fig);fig=fig+1;clf
semilogy(unwrap(angle(L))*180/pi,abs(L))
hold on;
semilogy(unwrap(angle(ZZ))*180/pi-360,abs(ZZ),'g--')
hold off
axis([-270 -100 .1 10])
hold on;plot(-180,1,'rx');hold off
if max(real(eig(a))) > 0
    title('Nichols: Unstable Open-loop System')
else
    title('Nichols: Stable Open-loop System')
end
ylabel('Mag');xlabel('Phase (deg)')

figure(fig);fig=fig+1;clf

plot(unwrap(angle(L))*180/pi,abs(L),'LineWidth',1.5)
hold on
plot(unwrap(angle(L))*180/pi,.95*abs(L),'r--','LineWidth',1.5)
plot(unwrap(angle(L))*180/pi,1.05*abs(L),'m:','LineWidth',1.5)
plot(unwrap(angle(ZZ))*180/pi,abs(ZZ),'g--')
plot(-180,1,'rx');hold off
hold on;
semilogy(unwrap(angle(ZZ))*180/pi-360,abs(ZZ),'g--')
hold off
legend('1','0.95','1.05')
axis([-195 -165 .5 1.5])
if max(real(eig(a))) > 0
    title('Nichols: Unstable Open-loop System')
else
    title('Nichols: Stable Open-loop System')
end
ylabel('Mag');xlabel('Phase (deg)')

figure(fig);fig=fig+1;clf
loglog(f,abs(1./(1+L)),f,abs(L),'--','LineWidth',1.5);grid
title('Sensitivity Plot')
legend('|S|','|L|')
xlabel('Freq (rad/sec)');ylabel('Mag')
axis([.1 1e3 1e-2 100])

figure(fig);fig=fig+1;clf
[y1,t]=step(Gcl1,t);
[y2,t]=step(Gcl2,t);
[y3,t]=step(Gcl3,t);
plot(t,y1,t,y2,t,y3,t,ones(size(t)),'--','LineWidth',1.5)
setlines(1.5)
legend('meth1','meth2','meth3')
title('Step Response')
xlabel('Time (sec)');ylabel('Y response')

for ii=[1:gcf 15]
    eval(['figure(',num2str(ii),'); export_fig ',name,'_',num2str(ii),' -pdf -a1'])
    if ii==4
        figure(ii);axis(tt)
        eval(['export_fig ',name,'_',num2str(ii),'a -pdf'])
    end
end

eval(['save ',name,' R G Gc Gcl1 Gcl2 Gcl3 k l P PP'])

return

MIT OpenCourseWare
http://ocw.mit.edu

16.30 / 16.31 Feedback Control Systems
Fall 2010

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.