Infinite Horizon LQ

Given the continuous-time state equation

    ẋ = Ax + Bu

find the control function u(t) that minimizes

    J = (1/2) ∫₀^∞ [ xᵀ(t) Q x(t) + uᵀ(t) R u(t) ] dt,    Q ≥ 0, R > 0, both symmetric.

The solution is obtained as the limiting case of the Riccati equation.

Summary of the Continuous-Time LQ Solution

Control law:

    u = −Kx,    K = R⁻¹ Bᵀ P

Algebraic Riccati Equation (ARE):

    P A + Aᵀ P + Q − P B R⁻¹ Bᵀ P = 0

Optimal cost:

    J* = (1/2) x(0)ᵀ P x(0)
Example

Find the optimal controller for a unit mass (M = 1, force input u, position output y) that minimizes

    J = (1/2) ∫₀^∞ [ y²(t) + r u²(t) ] dt,    r > 0.

State equations:

    d/dt [ x₁ ]   [ 0  1 ] [ x₁ ]   [ 0 ]
         [ x₂ ] = [ 0  0 ] [ x₂ ] + [ 1 ] u,    y = x₁

so that

    Q = [ 1  0 ],    R = r.
        [ 0  0 ]

Solving for P from the ARE, P A + Aᵀ P + Q − P B R⁻¹ Bᵀ P = 0, with

    P = [ p₁  p₂ ]
        [ p₂  p₃ ]

gives three simultaneous equations:

    1 − p₂²/r = 0
    p₁ − p₂ p₃ / r = 0
    2 p₂ − p₃²/r = 0

with solution

    p₂ = r^(1/2),    p₃ = √2 r^(3/4),    p₁ = √2 r^(1/4).
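As a numerical cross-check (not part of the original slides), the closed-form entries of P can be substituted back into the three simultaneous equations; this short Python sketch does so for a few values of r:

```python
import math

def are_residual(r):
    # Closed-form ARE solution for the double integrator (from the slides):
    # p2 = r^(1/2), p3 = sqrt(2) r^(3/4), p1 = sqrt(2) r^(1/4)
    p2 = math.sqrt(r)
    p3 = math.sqrt(2) * r**0.75
    p1 = math.sqrt(2) * r**0.25
    # The three simultaneous equations obtained from the ARE:
    eq1 = 1.0 - p2**2 / r        # (1,1) entry
    eq2 = p1 - p2 * p3 / r       # (1,2) entry
    eq3 = 2.0 * p2 - p3**2 / r   # (2,2) entry
    return max(abs(eq1), abs(eq2), abs(eq3))

for r in (0.1, 1.0, 10.0):
    print(r, are_residual(r))   # residuals are at floating-point noise level
```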
Control Gain and Closed-Loop Poles

Control law:

    u = −Kx,    K = R⁻¹ Bᵀ P = (1/r) [ 0  1 ] [ p₁  p₂ ] = (1/r) [ p₂  p₃ ]
                                              [ p₂  p₃ ]

    K = [ r^(−1/2)    √2 r^(−1/4) ]

Closed-loop characteristic equation:

    | sI − A + BK | = s² + √2 r^(−1/4) s + r^(−1/2) = 0

Natural frequency and damping ratio:

    ωₙ = r^(−1/4),    ζ = √2/2 = 0.707

Comments on the LQ Solution of the Example

- As r → 0, the natural frequency → ∞ and the settling time → 0 (fast system).
- As r → ∞, the natural frequency → 0 and the settling time → ∞ (slow system).
- The damping ratio is always 0.707.
- What is the effect on u(t)?
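The claim that ζ stays at 0.707 for every r > 0 can also be checked numerically. The sketch below (an illustration, not from the slides) forms the closed-loop characteristic polynomial from the gain formulas and extracts ωₙ and ζ from its roots:

```python
import cmath
import math

def closed_loop(r):
    # Gains from the LQ solution: K = [r^(-1/2), sqrt(2) r^(-1/4)]
    k1 = r**-0.5
    k2 = math.sqrt(2) * r**-0.25
    # Characteristic polynomial: s^2 + k2*s + k1 = 0 (quadratic formula)
    disc = cmath.sqrt(k2**2 - 4.0 * k1)
    s1 = (-k2 + disc) / 2.0
    wn = abs(s1)            # natural frequency = magnitude of the pole
    zeta = -s1.real / wn    # damping ratio from the pole location
    return wn, zeta

for r in (0.01, 1.0, 100.0):
    wn, zeta = closed_loop(r)
    print(r, wn, zeta)   # wn scales as r^(-1/4); zeta stays at 0.7071
```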
Optimal Estimation

- Observer design based on pole placement is non-unique and does not consider noise effects.
- Optimal observer dynamics are difficult to determine by pole placement alone.
- Estimators can be optimized with respect to input-noise and measurement-noise properties.

Discrete Random Variables

A discrete random variable x takes discrete values xᵢ, i = 1, …, N, each with a certain occurrence probability pᵢ.

Expected value of x:

    x̄ = E(x) = Σᵢ₌₁ᴺ xᵢ pᵢ

Covariance of x:

    P = E[ (x − x̄)(x − x̄)ᵀ ] = Σᵢ₌₁ᴺ (xᵢ − x̄)(xᵢ − x̄)ᵀ pᵢ
Dice Example

Let x = total number of dots on two rolled dice. The probabilities are:

    x:    2     3     4     5     6     7     8     9    10    11    12
    p:  1/36  2/36  3/36  4/36  5/36  6/36  5/36  4/36  3/36  2/36  1/36

    x̄ = 2·(1/36) + 3·(2/36) + ⋯ + 11·(2/36) + 12·(1/36) = 7

    P = (2 − 7)²·(1/36) + ⋯ + (12 − 7)²·(1/36) = 5.83

Continuous Random Variables

A continuous random variable x takes continuous values from −∞ to +∞ according to a certain probability density function (pdf) p(x).

Basic properties of a pdf:

    p(x) ≥ 0,    ∫₋∞⁺∞ p(x) dx = 1
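The dice numbers can be reproduced exactly with rational arithmetic; this Python check (not from the slides) confirms x̄ = 7 and P = 35/6 ≈ 5.83:

```python
from fractions import Fraction

# Probabilities of the sum of two fair dice: p(s) = (6 - |s - 7|)/36
probs = {s: Fraction(6 - abs(s - 7), 36) for s in range(2, 13)}

mean = sum(s * p for s, p in probs.items())
var = sum((s - mean) ** 2 * p for s, p in probs.items())

print(mean)        # 7
print(var)         # 35/6
print(float(var))  # 5.833...
```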
Interpretation of the pdf

p(a) dx is the probability that x lies in the interval [a − dx/2, a + dx/2].

Expected value of x:

    x̄ = E(x) = ∫₋∞⁺∞ x p(x) dx

Covariance of x:

    P = E[ (x − x̄)² ] = ∫₋∞⁺∞ (x − x̄)² p(x) dx

Uniform pdf Example

    p(x) = 1/c    for b − c/2 ≤ x ≤ b + c/2
         = 0      otherwise

    E(x) = ∫ x p(x) dx = (1/c) ∫_{b−c/2}^{b+c/2} x dx = b

    E[ (x − b)² ] = (1/c) ∫_{b−c/2}^{b+c/2} (x − b)² dx = c²/12
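The uniform-pdf moments can be confirmed by direct numerical integration. This sketch (the values b = 3 and c = 2 are chosen only for illustration) applies the midpoint rule over the support and recovers E(x) = b and variance c²/12:

```python
def uniform_moments(b, c, n=100000):
    # Midpoint-rule integration of the uniform pdf on [b - c/2, b + c/2]
    lo = b - c / 2.0
    dx = c / n
    mean = 0.0
    var = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * dx
        p = 1.0 / c                  # uniform density on the support
        mean += x * p * dx
        var += (x - b) ** 2 * p * dx
    return mean, var

m, v = uniform_moments(b=3.0, c=2.0)
print(m, v)   # mean ≈ 3, variance ≈ 2^2/12 = 1/3
```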
Gaussian pdf Example

    p(x) = (1 / (σ √(2π))) exp( −(x − x̄)² / (2σ²) )

It can be shown that

    E(x) = x̄,    E[ (x − x̄)² ] = σ²

Continuous Random Vectors

For x = [x₁ x₂ … xₙ]ᵀ, the pdf p(x) is a function of x₁, …, xₙ:

    ∫₋∞⁺∞ ⋯ ∫₋∞⁺∞ p(x) dx₁ ⋯ dxₙ = 1

Interpretation: p(a) dx₁ ⋯ dxₙ is the probability that x lies in the differential volume dx₁ ⋯ dxₙ centered at x = a.
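The "it can be shown" claims for the Gaussian pdf can be spot-checked numerically (a sketch, not a proof; x̄ = 1.5 and σ = 0.7 are arbitrary illustrative values). Integrating over ±8σ captures essentially all of the probability mass:

```python
import math

def gaussian_moments(xbar, sigma, n=20000):
    # Midpoint-rule integration of the Gaussian pdf over [xbar - 8s, xbar + 8s]
    lo, hi = xbar - 8 * sigma, xbar + 8 * sigma
    dx = (hi - lo) / n
    total = mean = var = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * dx
        p = math.exp(-(x - xbar) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))
        total += p * dx                    # should integrate to 1
        mean += x * p * dx                 # should equal xbar
        var += (x - xbar) ** 2 * p * dx    # should equal sigma^2
    return total, mean, var

total, mean, var = gaussian_moments(1.5, 0.7)
print(total, mean, var)
```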
Expected Value and Covariance of a Random Vector

Expected value of x:

    x̄ = E(x) = ∫₋∞⁺∞ ⋯ ∫₋∞⁺∞ x p(x) dx₁ ⋯ dxₙ

Covariance of x:

    P = E[ (x − x̄)(x − x̄)ᵀ ]

        [ (x₁ − x̄₁)²             ⋯   (x₁ − x̄₁)(xₙ − x̄ₙ) ]
    P = E[        ⋮              ⋱           ⋮            ]
        [ (x₁ − x̄₁)(xₙ − x̄ₙ)    ⋯   (xₙ − x̄ₙ)²          ]

The covariance P is symmetric and positive semidefinite.

Gaussian pdf:

    p(x) = (1 / ( (2π)^(n/2) |P|^(1/2) )) exp( −(1/2) (x − x̄)ᵀ P⁻¹ (x − x̄) )

It can be shown that

    E(x) = x̄,    E[ (x − x̄)(x − x̄)ᵀ ] = P
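The symmetry and positive-semidefiniteness of a covariance matrix can be illustrated with a small sample covariance (the data points below are made up for the example; eigenvalues of a symmetric 2×2 matrix come from its trace and determinant):

```python
import math

# Hypothetical 2-D data set (roughly y ≈ 2x with small deviations)
data = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9), (4.0, 8.2), (5.0, 9.8)]
n = len(data)
mx = sum(x for x, _ in data) / n
my = sum(y for _, y in data) / n

# Entries of the sample covariance matrix [[pxx, pxy], [pxy, pyy]]
pxx = sum((x - mx) ** 2 for x, _ in data) / n
pyy = sum((y - my) ** 2 for _, y in data) / n
pxy = sum((x - mx) * (y - my) for x, y in data) / n   # symmetric off-diagonal

# Eigenvalues of a symmetric 2x2 matrix via trace/determinant
tr = pxx + pyy
det = pxx * pyy - pxy ** 2
lam1 = (tr + math.sqrt(tr ** 2 - 4 * det)) / 2
lam2 = (tr - math.sqrt(tr ** 2 - 4 * det)) / 2
print(lam1, lam2)   # both nonnegative -> positive semidefinite
```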
Maximum Likelihood Estimation (MLE)

Consider the static stochastic process

    y = A x + w

where x and w are uncorrelated Gaussian random variables.

Problem: find the most likely estimate of x given the observation y.

Expected values and covariances:

    E[x] = x̄,    E[w] = w̄
    E[ (x − x̄)(x − x̄)ᵀ ] = Pxx,    E[ (w − w̄)(w − w̄)ᵀ ] = Pww
    E[ (x − x̄)(w − w̄)ᵀ ] = 0

Least-Squares Equivalent of MLE

With the deviations x̃ = x − x̄, w̃ = w − w̄, ỹ = y − ȳ, the joint pdf is

    p(x, w) = c · exp( −J ),    c a normalizing constant,

with the LQ index

    J = (1/2) x̃ᵀ Pxx⁻¹ x̃ + (1/2) w̃ᵀ Pww⁻¹ w̃

and the process constraint

    ỹ = A x̃ + w̃.

MLE minimizes J subject to the process constraint.
MLE Solution

Minimizing J:

    ∂J/∂x̃ = x̃ᵀ Pxx⁻¹ + w̃ᵀ Pww⁻¹ (∂w̃/∂x̃) = 0,    w̃ = ỹ − A x̃

Solving for the optimal x, denoted x̂, gives

    x̂ = x̄ + P̂xx Aᵀ Pww⁻¹ ( y − A x̄ − w̄ )
    P̂xx⁻¹ = Pxx⁻¹ + Aᵀ Pww⁻¹ A

Comments on the MLE Solution

- A large Pww relative to Pxx indicates lack of confidence in the measurement y; the estimate x̂ then stays close to the prior mean x̄.
- The covariance of x given y, denoted (x|y), is

      E[ ( (x|y) − x̂ )( (x|y) − x̂ )ᵀ ] = P̂xx,    P̂xx⁻¹ = Pxx⁻¹ + Aᵀ Pww⁻¹ A

- Since Aᵀ Pww⁻¹ A ≥ 0, this indicates a reduction in the initial uncertainty in x.
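A scalar instance of these formulas makes the blending between prior and measurement concrete (all numbers below are hypothetical, chosen only for illustration):

```python
# Scalar MLE fusion: prior x ~ (xbar, Pxx), measurement y = A*x + w, w ~ (wbar, Pww)
xbar, Pxx = 2.0, 4.0    # prior mean and covariance of x
A = 1.0                 # measurement matrix (scalar here)
wbar, Pww = 0.0, 1.0    # noise mean and covariance
y = 5.0                 # observed measurement

Pxx_hat = 1.0 / (1.0 / Pxx + A * (1.0 / Pww) * A)          # posterior covariance
x_hat = xbar + Pxx_hat * A * (1.0 / Pww) * (y - A * xbar - wbar)   # posterior mean
print(x_hat, Pxx_hat)
```

With Pww made large (little confidence in y), x̂ stays near x̄; with Pww made small, x̂ moves toward y. In either case P̂xx < Pxx, consistent with the comments above.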
Parameter Identification

Process:

    yₖ = φₖᵀ θ + wₖ

where
- yₖ: output at time k (scalar)
- θ: constant parameter vector (unknown)
- φₖ: regression vector (known)
- wₖ: Gaussian noise with zero mean

Find the most likely estimate of θ given y₀, y₁, …, yₖ, denoted θ̂ₖ.

RLS Version of MLE

The MLE can be made recursive by using
- the current parameter estimate as the initial estimate in the MLE,
- the current covariance estimate as the covariance of the initial estimate in the MLE,
and continuing the process recursively.
Recursive Least Squares

Exchanging the following variables in the MLE

    x̄ → θ̂ₖ₋₁,    x̂ → θ̂ₖ,    Pxx → Pₖ₋₁,    P̂xx → Pₖ,    Pww → 1 (normalized)

gives

    θ̂ₖ = θ̂ₖ₋₁ + Pₖ φₖ ( yₖ − φₖᵀ θ̂ₖ₋₁ )
    Pₖ⁻¹ = Pₖ₋₁⁻¹ + φₖ φₖᵀ

Efficient RLS Algorithm

    θ̂ₖ = θ̂ₖ₋₁ + ( Pₖ₋₁ φₖ / (1 + φₖᵀ Pₖ₋₁ φₖ) ) ( yₖ − φₖᵀ θ̂ₖ₋₁ )

    Pₖ = Pₖ₋₁ − ( Pₖ₋₁ φₖ φₖᵀ Pₖ₋₁ ) / (1 + φₖᵀ Pₖ₋₁ φₖ)

    P₀ = E[ (θ − θ̂₀)(θ − θ̂₀)ᵀ ]

RLS changes θ̂ₖ in proportion to the previous estimation error ( yₖ − φₖᵀ θ̂ₖ₋₁ ) and to Pₖ₋₁ φₖ.
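In the scalar case the equivalence between the efficient form and the information form Pₖ⁻¹ = Pₖ₋₁⁻¹ + φₖ² is easy to verify step by step. The sketch below (synthetic noise-free data, true parameter θ = 2 chosen for illustration) checks that identity at every update:

```python
# Scalar RLS: the efficient covariance update must satisfy the information
# form P_k^{-1} = P_{k-1}^{-1} + phi_k^2 at every step.
theta_true = 2.0
th, P = 0.0, 1000.0          # initial estimate and (large) initial covariance
info = 1.0 / P               # accumulated information, tracked separately
for phi in (1.0, 0.5, 2.0, 1.5):
    y = phi * theta_true     # noise-free measurement y_k = phi_k * theta
    g = P * phi / (1.0 + phi * P * phi)   # RLS gain
    th = th + g * (y - phi * th)          # parameter update
    P = P - g * phi * P                   # efficient covariance update
    info += phi * phi
    assert abs(1.0 / P - info) < 1e-6 * info   # matches the information form
print(th)   # converges toward theta_true
```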
Parameter Convergence

At every iteration Pₖ ≤ Pₖ₋₁, i.e., there is less uncertainty in the parameter estimate.

If the regression vectors are such that

    lim_{N→∞} λmin( Σₖ₌₀ᴺ φₖ φₖᵀ ) = ∞

then

    lim_{k→∞} Pₖ = 0    and    lim_{k→∞} θ̂ₖ = θ.

System Identification

Consider the input-output model of an LTI discrete-time system:

    y(k) = Σᵢ₌₁ⁿ aᵢ y(k−i) + Σᵢ₌₁ⁿ bᵢ u(k−i) + w(k)

Defining yₖ = y(k), wₖ = w(k) (noise), and

    φₖ = [ y(k−1) ⋯ y(k−n)  u(k−1) ⋯ u(k−n) ]ᵀ
    θ = [ a₁ ⋯ aₙ  b₁ ⋯ bₙ ]ᵀ

gives yₖ = φₖᵀ θ + wₖ, so RLS can be used to estimate the unknown parameters.
First Order Example

Consider the first-order system

    y(k) = a y(k−1) + b u(k−1) + w(k)

Defining yₖ = y(k), wₖ = w(k) (noise), and

    φₖ = [ y(k−1) ],    θ = [ a ]
         [ u(k−1) ]         [ b ]

gives yₖ = φₖᵀ θ + wₖ, so RLS can be used to estimate the unknown parameters a and b.

Matlab Solution: Input-Output Data

    % Parameter Identification Using RLS
    % Continuous-time plant G(s) = 1/(s+1)
    numc = [0 1]; denc = [1 1]; Ts = 0.1;
    [num, den] = c2dm(numc, denc, Ts, 'zoh');
    % Generate input/output signal
    K = [0:500]';
    imax = length(K);
    U = sin(2*pi*K/10);
    Y = dlsim(num, den, U);
    % Add measurement noise
    ymax = max(abs(Y));
    Noise = normrnd(0, 0.5*ymax, size(Y));
    Yn = Y + Noise;
Matlab Version of RLS

    % Initialize least-squares variables
    phi = zeros(2,1);
    P = 1000*eye(2,2);
    thh = zeros(2,1);
    THH = [];
    for ii = 1:imax
        y = Yn(ii);
        [thhn, Pn] = rls(thh, P, y, phi);
        thh = thhn;
        P = Pn;
        THH = [THH; ii, thh', y - phi'*thh];
        phi = [Yn(ii); U(ii)];
    end

RLS Update Function

    function [thn, Pn] = rls(th, P, y, phi)
    % Updates the estimated parameter vector and the covariance
    % matrix using the Recursive Least Squares (RLS) algorithm.
    e = y - phi'*th;
    psi = P*phi;
    g = 1/(1 + phi'*psi);
    thn = th + g*psi*e;
    Pn = P - g*psi*psi';
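As a cross-check of the algorithm (not a port of the original listing), the same identification can be sketched in Python. The sketch assumes the ZOH discretization of G(s) = 1/(s+1) at Ts = 0.1, i.e. a = e^(−0.1), b = 1 − e^(−0.1), and runs noise-free so the result is reproducible; the 2×2 covariance update is written out by hand:

```python
import math

# RLS identification of y(k) = a*y(k-1) + b*u(k-1), noise-free for reproducibility.
# a, b below are the assumed ZOH discretization of G(s) = 1/(s+1) at Ts = 0.1.
a_true, b_true = math.exp(-0.1), 1.0 - math.exp(-0.1)

th = [0.0, 0.0]                       # parameter estimate [a_hat, b_hat]
P = [[1000.0, 0.0], [0.0, 1000.0]]    # initial covariance (large: no prior knowledge)
y_prev, u_prev = 0.0, 0.0
for k in range(1, 400):
    u = math.sin(2 * math.pi * k / 10)           # sinusoidal excitation, as in the slides
    y = a_true * y_prev + b_true * u_prev        # simulated plant output
    phi = [y_prev, u_prev]                       # regression vector
    # psi = P*phi and gain g = 1/(1 + phi'*psi), as in the rls() function
    psi = [P[0][0]*phi[0] + P[0][1]*phi[1], P[1][0]*phi[0] + P[1][1]*phi[1]]
    g = 1.0 / (1.0 + phi[0]*psi[0] + phi[1]*psi[1])
    e = y - (phi[0]*th[0] + phi[1]*th[1])        # prediction error
    th = [th[0] + g*psi[0]*e, th[1] + g*psi[1]*e]
    P = [[P[0][0] - g*psi[0]*psi[0], P[0][1] - g*psi[0]*psi[1]],
         [P[1][0] - g*psi[1]*psi[0], P[1][1] - g*psi[1]*psi[1]]]
    y_prev, u_prev = y, u
print(th)   # close to [a_true, b_true]
```

With measurement noise added (as in the Matlab version), the estimates converge more slowly and fluctuate around the true values instead of settling exactly.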
(Figure: Input-output data. Top: input U and output Y (amplitude) vs. sample K. Bottom: output with noise, Y and Y + 10% noise, vs. K.)
(Figure: Identification results. Top: parameter estimates Theta(1) and Theta(2) vs. sample K. Bottom: output with noise, Y and Y + 20% noise, vs. K.)
(Figure: Identification results for the 20% noise case: parameter estimates vs. sample K.)