Lecture Note #7 (Chap.11)


1 System Modeling and Identification, Lecture Note #7 (Chap. 11), CHBE 702, Korea University, Prof. Dae Ryook Yang

2 Chap. 11 Real-time Identification

Real-time identification
- Supervision and tracking of time-varying parameters for adaptive control, filtering, and prediction
- Signal processing
- Detection, diagnosis, artificial neural networks, etc.
- Identification methods based on a complete set of measurements are not suitable; only a few data need to be stored

Drawbacks
- Requires a priori knowledge of the model structure
- Iterative solutions based on larger data sets may be difficult to organize

3 Recursive estimation of a constant

Consider the following noisy observation of a constant parameter:
$y_k = \varphi\theta + v_k,\qquad E\{v_k\} = 0,\quad E\{v_iv_j\} = \sigma^2\delta_{ij}\quad (\varphi = 1)$

The least-squares estimate is found as the sample average
$\hat\theta_k = \frac{1}{k}\sum_{i=1}^{k} y_i$

Recursive form:
$\hat\theta_k = \hat\theta_{k-1} + \frac{1}{k}\big(y_k - \hat\theta_{k-1}\big)$

Variance estimate of the least-squares estimate:
$P_k = E\{(\hat\theta_k-\theta)(\hat\theta_k-\theta)^T\} = \sigma^2\Big(\sum_{i=1}^{k}\varphi_i\varphi_i^T\Big)^{-1} = \frac{\sigma^2}{k},\qquad \frac{1}{P_k} = \frac{1}{P_{k-1}} + \frac{1}{\sigma^2}$

Note that $P_k \to 0$ as $k \to \infty$.
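To make the recursion concrete, here is a minimal numpy sketch of the recursive sample-mean update; the true value, noise level, and sample count are illustrative choices, not from the text.

```python
import numpy as np

# Recursive estimation of a constant: theta_k = theta_{k-1} + (1/k)(y_k - theta_{k-1})
rng = np.random.default_rng(0)
theta_true, sigma = 2.0, 0.5               # illustrative values
y = theta_true + sigma * rng.standard_normal(200)

theta_hat = 0.0
for k, yk in enumerate(y, start=1):
    theta_hat += (yk - theta_hat) / k      # recursive sample mean
P = sigma**2 / len(y)                      # variance of the estimate; P -> 0 as k grows
print(theta_hat, P)
```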

4 Derivation of recursive least-squares identification

Consider as usual the regressors $\varphi_i$ and the observations $y_i$:
$\Phi_k = [\varphi_1\ \cdots\ \varphi_k]^T,\qquad Y_k = [y_1\ \cdots\ y_k]^T$

The least-squares criterion based on $k$ samples is
$V_k(\theta) = \tfrac{1}{2}(Y_k - \Phi_k\theta)^T(Y_k - \Phi_k\theta)$

The ordinary least-squares estimate:
$\hat\theta_k = (\Phi_k^T\Phi_k)^{-1}\Phi_k^TY_k = \Big(\sum_{i=1}^{k}\varphi_i\varphi_i^T\Big)^{-1}\sum_{i=1}^{k}\varphi_iy_i$

Introduce the matrix
$P_k = (\Phi_k^T\Phi_k)^{-1} = \Big(\sum_{i=1}^{k}\varphi_i\varphi_i^T\Big)^{-1},\qquad P_k^{-1} = P_{k-1}^{-1} + \varphi_k\varphi_k^T\quad (P_0^{-1} = 0)$

Then
$\hat\theta_k = P_k\Big(\sum_{i=1}^{k-1}\varphi_iy_i + \varphi_ky_k\Big) = P_k\big(P_{k-1}^{-1}\hat\theta_{k-1} + \varphi_ky_k\big) = \hat\theta_{k-1} + P_k\varphi_k\big(y_k - \varphi_k^T\hat\theta_{k-1}\big)$

Alternative form (avoiding inversion of matrices):
$P_k = (\Phi_{k-1}^T\Phi_{k-1} + \varphi_k\varphi_k^T)^{-1} = (P_{k-1}^{-1} + \varphi_k\varphi_k^T)^{-1} = P_{k-1} - P_{k-1}\varphi_k\big(I + \varphi_k^TP_{k-1}\varphi_k\big)^{-1}\varphi_k^TP_{k-1}$

cf) Matrix inversion lemma: $(A + BC)^{-1} = A^{-1} - A^{-1}B\big(I + CA^{-1}B\big)^{-1}CA^{-1}$. In practice, initialize with $P_0 = \alpha I$, $\alpha$ large.

5 Recursive Least-Squares (RLS) Identification

The recursive least-squares (RLS) identification algorithm:
$\hat\theta_k = \hat\theta_{k-1} + P_k\varphi_k\varepsilon_k$ — the parameter estimate
$\varepsilon_k = y_k - \varphi_k^T\hat\theta_{k-1}$ — the prediction error
$P_k = P_{k-1} - \dfrac{P_{k-1}\varphi_k\varphi_k^TP_{k-1}}{1 + \varphi_k^TP_{k-1}\varphi_k}$, $P_0$ given — the parameter covariance estimate, up to the factor $\sigma^2$

Some properties of RLS estimation (parameter accuracy and convergence). With the parameter error $\tilde\theta_k = \hat\theta_k - \theta$, define
$Q_k(\hat\theta_k) = \tfrac{1}{2}(\hat\theta_k - \theta)^TP_k^{-1}(\hat\theta_k - \theta) = \tfrac{1}{2}\tilde\theta_k^TP_k^{-1}\tilde\theta_k$

Using $\tilde\theta_k = \tilde\theta_{k-1} + P_k\varphi_k\varepsilon_k$ and $P_k^{-1} - P_{k-1}^{-1} = \varphi_k\varphi_k^T$,
$2\big(Q_k(\hat\theta_k) - Q_{k-1}(\hat\theta_{k-1})\big) = \tilde\theta_k^TP_k^{-1}\tilde\theta_k - \tilde\theta_{k-1}^TP_{k-1}^{-1}\tilde\theta_{k-1} = \tilde\theta_{k-1}^T\big(P_k^{-1} - P_{k-1}^{-1}\big)\tilde\theta_{k-1} + 2\tilde\theta_{k-1}^T\varphi_k\varepsilon_k + \varepsilon_k^2\varphi_k^TP_k\varphi_k = \big(\tilde\theta_{k-1}^T\varphi_k + \varepsilon_k\big)^2 - \varepsilon_k^2\big(1 - \varphi_k^TP_k\varphi_k\big)$

where, from the P-update,
$\varphi_k^TP_k\varphi_k = \dfrac{\varphi_k^TP_{k-1}\varphi_k}{1 + \varphi_k^TP_{k-1}\varphi_k} < 1$
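A compact numpy sketch of one RLS step in the inversion-free form above; the second-order FIR test case at the bottom (gains 1.5 and -0.7, noise level 0.1) is an illustrative example, not from the text.

```python
import numpy as np

def rls_step(theta, P, phi, y):
    """One RLS update; phi is the regressor, y the new observation."""
    eps = y - phi @ theta                       # prediction error
    denom = 1.0 + phi @ P @ phi
    theta = theta + (P @ phi / denom) * eps     # parameter update with gain P_k phi_k
    P = P - np.outer(P @ phi, phi @ P) / denom  # covariance update (matrix inversion lemma)
    return theta, P

# Illustrative use: y_k = 1.5 u_{k-1} - 0.7 u_{k-2} + noise
rng = np.random.default_rng(1)
u = rng.standard_normal(500)
theta, P = np.zeros(2), 1e3 * np.eye(2)         # P0 = alpha*I with alpha large
for k in range(2, len(u)):
    phi = np.array([u[k-1], u[k-2]])
    yk = phi @ np.array([1.5, -0.7]) + 0.1 * rng.standard_normal()
    theta, P = rls_step(theta, P, phi, yk)
print(theta)                                    # approaches (1.5, -0.7)
```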

6 Under the linear model assumption $y_k = \varphi_k^T\theta + v_k$, the prediction error satisfies $\varepsilon_k = -\tilde\theta_{k-1}^T\varphi_k + v_k$, so that $\tilde\theta_{k-1}^T\varphi_k + \varepsilon_k = v_k$ and

Theorem: $Q_k(\hat\theta_k) - Q_{k-1}(\hat\theta_{k-1}) = \tfrac{1}{2}\Big(v_k^2 - \varepsilon_k^2\big(1 - \varphi_k^TP_k\varphi_k\big)\Big)$

- If $v_k = 0$ for all $k$, $Q_k$ decreases in each recursion step.
- If $Q_k$ tends to zero, it implies that $\tilde\theta_k$ tends to zero, as the sequence of weighting matrices $P_k^{-1}$ is an increasing sequence of positive definite matrices with $P_k^{-1} \ge P_0^{-1}$ for all $k > 0$.

The errors of the estimated parameters and the prediction error for least-squares estimation have a bound determined by the noise magnitude, according to
$V_k(\hat\theta_k) + Q_k(\hat\theta_k) = \tfrac{1}{2}(Y_k - \Phi_k\hat\theta_k)^T(Y_k - \Phi_k\hat\theta_k) + \tfrac{1}{2}\tilde\theta_k^TP_k^{-1}\tilde\theta_k = \tfrac{1}{2}\sum_{i=1}^{k}v_i^2$

It implies
$\tilde\theta_k^T\Phi_k^T\Phi_k\tilde\theta_k \le \sum_{i=1}^{k}v_i^2$

Parameter convergence $\hat\theta_k \to \theta$ can be obtained for a stationary stochastic process $\{v_k\}$ if $\Phi_k^T\Phi_k > c\,k\,I$ for some constant $c > 0$. Thus, poor convergence is obtained in cases of large disturbances and a rank-deficient $\Phi^T\Phi$ matrix.

7 Properties of P matrix

- $P_k$ is a positive definite, symmetric matrix ($P_k = P_k^T > 0$).
- $P_k \to 0$ as $k \to \infty$.
- The matrix $P_k$ is asymptotically proportional to the parameter estimate covariance, provided that a correct model structure has been used. It is therefore often called the covariance matrix.

Comparison between RLS and offline LS identification
- If the initial values $P_0$ and $\hat\theta_0$ are chosen to be compatible with the results of the ordinary least-squares method, the result obtained from RLS is the same as that of offline least-squares identification.
- Thus, calculate the initial values for RLS from the ordinary LS method using some block of initial data.

8 Modification for time-varying parameters

RLS gives equal weighting to old data and new data. If the parameters are time-varying, pay less attention to old data.

Forgetting factor ($\lambda$):
$J_k(\theta) = \tfrac{1}{2}\sum_{i=1}^{k}\lambda^{k-i}\big(y_i - \varphi_i^T\theta\big)^2,\qquad 0 < \lambda \le 1$

Modified RLS (see the sketch below):
$\hat\theta_k = \hat\theta_{k-1} + P_k\varphi_k\varepsilon_k$
$\varepsilon_k = y_k - \varphi_k^T\hat\theta_{k-1}$
$P_k = \frac{1}{\lambda}\Big(P_{k-1} - \dfrac{P_{k-1}\varphi_k\varphi_k^TP_{k-1}}{\lambda + \varphi_k^TP_{k-1}\varphi_k}\Big)$, $P_0$ given

Disadvantages
- The noise sensitivity becomes more prominent as $\lambda$ decreases.
- The P matrix may increase as $k$ grows if the input is such that the magnitude of $P_{k-1}\varphi_k$ is small (P-matrix explosion or covariance matrix explosion).
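A sketch of the forgetting-factor update; the default lam=0.98 is only an example value. Note the division by lam, which is also the mechanism behind the windup problem just mentioned.

```python
import numpy as np

def rls_forget_step(theta, P, phi, y, lam=0.98):
    """RLS with exponential forgetting factor lam (0 < lam <= 1)."""
    eps = y - phi @ theta
    denom = lam + phi @ P @ phi
    theta = theta + (P @ phi / denom) * eps
    # Division by lam inflates P each step; with poor excitation (small P phi)
    # this drives covariance (P-matrix) windup.
    P = (P - np.outer(P @ phi, phi @ P) / denom) / lam
    return theta, P
```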

9 Choice of forgetting factor

Trade-off between the required ability to track a time-varying parameter (i.e., a small value of $\lambda$) and the noise sensitivity allowed.
- A value of $\lambda$ close to 1: less sensitive to disturbances, but slow tracking of rapid variations in the parameters.
- Default choice: $\lambda \ge 0.97$.
- Rough estimate of the number of data points in memory (time constant): $1/(1-\lambda)$.

Example 11.3: Choice of forgetting factor
$y_k = \varphi_k\theta + v_k,\qquad E\{v_k\} = 0,\quad E\{v_iv_j\} = \sigma^2\delta_{ij}\quad (\varphi_k = 1)$
with the modified RLS
$\hat\theta_k = \hat\theta_{k-1} + P_k\varphi_k\varepsilon_k,\qquad \varepsilon_k = y_k - \varphi_k^T\hat\theta_{k-1},\qquad P_k = \frac{1}{\lambda}\Big(P_{k-1} - \frac{P_{k-1}\varphi_k\varphi_k^TP_{k-1}}{\lambda + \varphi_k^TP_{k-1}\varphi_k}\Big)$
applied with $\theta = 2$ and $\lambda = 0.99,\ 0.98,\ 0.95$: the smaller $\lambda$, the faster the tracking.

10 Delta model

The disadvantage of the z-transform formulation is that the z-domain parameters do not converge to the continuous-time (Laplace-domain) parameters from which they were derived as the sampling period decreases.
- Very small sampling periods yield very small numbers in the transfer function numerator.
- The poles of the transfer function approach the unstable domain as the sampling period decreases.

These disadvantages can be avoided by introducing a more suitable discrete model.

δ-operator: $\delta = (z - 1)/h$
$x_{k+1} = \Phi x_k + \Gamma u_k \quad\Rightarrow\quad \delta x_k = \Phi_\delta x_k + \Gamma_\delta u_k = \frac{1}{h}(\Phi - I)x_k + \frac{1}{h}\Gamma u_k,\qquad y_k = Cx_k$

This formulation makes the state-space realization and the corresponding system identification less error-prone, due to the favorable numerical scaling properties of the $\Phi_\delta$ and $\Gamma_\delta$ matrices as compared to the ordinary z-transform based algebra.
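A small sketch of the reparameterization from a zero-order-hold model to delta form; as the sampling period h shrinks, Phi_d approaches the continuous-time system matrix.

```python
import numpy as np

def zoh_to_delta(Phi, Gamma, h):
    """Convert x_{k+1} = Phi x_k + Gamma u_k into delta form
    delta x_k = Phi_d x_k + Gamma_d u_k, with delta = (q - 1)/h."""
    n = Phi.shape[0]
    Phi_d = (Phi - np.eye(n)) / h   # (Phi - I)/h -> A as h -> 0
    Gamma_d = Gamma / h             # Gamma/h  -> B as h -> 0
    return Phi_d, Gamma_d
```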

11 Kalman filter interpretation

Assume that the time-varying system parameter $\theta_k$ may be described by the state-space equations
$\theta_{k+1} = \theta_k + v_k,\qquad E\{v_i\} = 0,\quad E\{v_iv_j^T\} = R_1\delta_{ij}$
$y_k = \varphi_k^T\theta_k + e_k,\qquad E\{e_i\} = 0,\quad E\{e_ie_j\} = R_2\delta_{ij}$

Kalman filter for estimation of $\theta$:
$\hat\theta_k = \hat\theta_{k-1} + K_k\varepsilon_k$
$K_k = P_{k-1}\varphi_k/\big(R_2 + \varphi_k^TP_{k-1}\varphi_k\big)$
$\varepsilon_k = y_k - \varphi_k^T\hat\theta_{k-1}$
$P_k = P_{k-1} - \dfrac{P_{k-1}\varphi_k\varphi_k^TP_{k-1}}{R_2 + \varphi_k^TP_{k-1}\varphi_k} + R_1$

Differences from RLS with forgetting factor:
- The dynamics of $P_k$ changes from exponential growth (RLS with $\lambda < 1$) to a linear growth rate for $\varphi_k = 0$, due to $R_1$.
- $P_k$ of the Kalman filter does not approach zero as $k \to \infty$ for a nonzero sequence $\{\varphi_k\}$, in contrast to RLS.
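A sketch of the random-walk Kalman filter as a parameter tracker; R1 and R2 are the design covariances of the slide, and the added R1 term is what keeps P from collapsing to zero.

```python
import numpy as np

def kf_param_step(theta, P, phi, y, R1, R2):
    """Kalman filter for theta_{k+1} = theta_k + v_k, y_k = phi^T theta_k + e_k."""
    eps = y - phi @ theta
    denom = R2 + phi @ P @ phi
    theta = theta + (P @ phi / denom) * eps
    P = P - np.outer(P @ phi, phi @ P) / denom + R1  # R1 prevents P -> 0
    return theta, P
```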

12 Other forms of RLS algorithm

Basic (gain) version:
$\hat\theta_k = \hat\theta_{k-1} + K_k\big[y_k - \varphi_k^T\hat\theta_{k-1}\big]$
$K_k = P_{k-1}\varphi_k/\big(\lambda + \varphi_k^TP_{k-1}\varphi_k\big)$ (prediction gain update)
$P_k = \frac{1}{\lambda}\Big(P_{k-1} - \dfrac{P_{k-1}\varphi_k\varphi_k^TP_{k-1}}{\lambda + \varphi_k^TP_{k-1}\varphi_k}\Big)$ (parameter error covariance update)

Normalized gain version (with $P_k = \gamma_k\bar R_k^{-1}$):
$\hat\theta_k = \hat\theta_{k-1} + \gamma_k\bar R_k^{-1}\varphi_k\big[y_k - \varphi_k^T\hat\theta_{k-1}\big]$
$\bar R_k = \bar R_{k-1} + \gamma_k\big(\varphi_k\varphi_k^T - \bar R_{k-1}\big),\qquad \gamma_k = \dfrac{\gamma_{k-1}}{\lambda + \gamma_{k-1}}$

Multivariable case:
$\hat\theta_k = \arg\min_\theta \tfrac{1}{2}\sum_{i=1}^{k}\Big(\prod_{j=i+1}^{k}\lambda_j\Big)\big[y_i - \varphi_i^T\theta\big]^T\Lambda^{-1}\big[y_i - \varphi_i^T\theta\big]$
$\hat\theta_k = \hat\theta_{k-1} + K_k\big[y_k - \varphi_k^T\hat\theta_{k-1}\big]$
$K_k = P_{k-1}\varphi_k\big(\lambda_k\Lambda_k + \varphi_k^TP_{k-1}\varphi_k\big)^{-1}$ (prediction gain update)
$P_k = \frac{1}{\lambda_k}\Big(P_{k-1} - P_{k-1}\varphi_k\big(\lambda_k\Lambda_k + \varphi_k^TP_{k-1}\varphi_k\big)^{-1}\varphi_k^TP_{k-1}\Big)$ (parameter error covariance update)
$\Lambda_k = \Lambda_{k-1} + \gamma_k\big(\varepsilon_k\varepsilon_k^T - \Lambda_{k-1}\big)$ (output error covariance update)

If $\lambda_j = \lambda$ for all $j$, then $\prod_{j=i+1}^{k}\lambda_j = \lambda^{k-i}$.

13 Recursive Instrumental Variable (RIV) Method

The ordinary IV solution:
$\hat\theta_k = \big(Z_k^T\Phi_k\big)^{-1}Z_k^TY_k = \Big(\sum_{i=1}^{k}z_i\varphi_i^T\Big)^{-1}\sum_{i=1}^{k}z_iy_i$

RIV:
$\hat\theta_k = \hat\theta_{k-1} + K_k\varepsilon_k$
$K_k = P_{k-1}z_k/\big(1 + \varphi_k^TP_{k-1}z_k\big)$
$\varepsilon_k = y_k - \varphi_k^T\hat\theta_{k-1}$
$P_k = P_{k-1} - \dfrac{P_{k-1}z_k\varphi_k^TP_{k-1}}{1 + \varphi_k^TP_{k-1}z_k}$

Standard choice of instrumental variable:
$z_k = \big(x_{k-1}\ \cdots\ x_{k-n_A}\ \ u_{k-1}\ \cdots\ u_{k-n_B}\big)^T$
The variable $x$ may be, for instance, the estimated output.

RIV has some stability problems associated with the choice of the instrumental variable and the updating of the P matrix.
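A sketch of one RIV step; note that, unlike in RLS, the P update mixes z and phi and so does not stay symmetric, which connects to the stability problems mentioned above.

```python
import numpy as np

def riv_step(theta, P, phi, z, y):
    """One recursive instrumental-variable update; z is the instrument vector."""
    eps = y - phi @ theta
    denom = 1.0 + phi @ P @ z
    theta = theta + (P @ z / denom) * eps
    P = P - np.outer(P @ z, phi @ P) / denom  # non-symmetric update
    return theta, P
```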

14 Recursive Prediction Error Methods (RPEM)

Consider a weighted quadratic prediction error criterion
$J_k(\theta) = \gamma_k\sum_{i=1}^{k}\lambda^{k-i}\varepsilon_i^2(\theta),\qquad \varepsilon_i(\theta) = y_i - \varphi_i^T\theta$
with gradient
$J_k'(\theta) = -\gamma_k\sum_{i=1}^{k}\lambda^{k-i}\psi_i(\theta)\varepsilon_i(\theta),\qquad \psi_i(\theta) = -\big(\partial\varepsilon_i/\partial\theta\big)^T$
updated recursively as
$J_k'(\theta) = \lambda\,\frac{\gamma_k}{\gamma_{k-1}}J_{k-1}'(\theta) - \gamma_k\psi_k(\theta)\varepsilon_k(\theta)$

General RPEM search algorithm (a Gauss-Newton step about the current estimate, with $R_k$ approximating the Hessian):
$\hat\theta_k = \hat\theta_{k-1} + \gamma_kR_k^{-1}\psi_k\varepsilon_k$
$R_k = R_{k-1} + \gamma_k\big(\psi_k\psi_k^T - R_{k-1}\big)$

15 Stochastic gradient methods

- A family of RPEM; also called stochastic approximation or least mean squares (LMS)
- Uses the steepest-descent direction to update the parameters:
$\psi_k = -\partial\varepsilon_k/\partial\theta = -\partial(\varphi_k^T\theta - y_k)/\partial\theta\cdot(-1) = \varphi_k$ for the linear model $y_k = \varphi_k^T\theta$

The algorithm (time-varying, regressor-dependent gain version, sketched below):
$\hat\theta_k = \hat\theta_{k-1} + \gamma_k\varphi_k\varepsilon_k$
$\varepsilon_k = y_k - \varphi_k^T\hat\theta_{k-1}$
$\gamma_k\varphi_k = Q\varphi_k/r_k\qquad (Q = Q^T > 0)$
$r_k = r_{k-1} + \varphi_k^TQ\varphi_k$

- Rapid computation, as there is no P matrix to evaluate
- Good detection of time-varying parameters
- Slow convergence and noise sensitivity

Modification for time-varying parameters:
$r_k = \lambda r_{k-1} + \varphi_k^TQ\varphi_k,\qquad 0 < \lambda \le 1$
keeping the factor $r_k$ at a lower magnitude.
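A sketch of the normalized stochastic-gradient (LMS-type) step with the forgetting modification; Q defaults to the identity here, which is one admissible choice of Q = Q^T > 0.

```python
import numpy as np

def lms_step(theta, r, phi, y, Q=None, lam=1.0):
    """Stochastic-gradient update; only the scalar r is propagated, no P matrix."""
    if Q is None:
        Q = np.eye(len(theta))       # any Q = Q^T > 0 is admissible
    eps = y - phi @ theta
    r = lam * r + phi @ Q @ phi      # lam < 1 keeps r small, preserving tracking ability
    theta = theta + (Q @ phi / r) * eps
    return theta, r
```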

16 RPEM for multivariable case

$\hat\theta_k = \hat\theta_{k-1} + \gamma_kR_k^{-1}\psi_k\Lambda_k^{-1}\varepsilon_k$
$R_k = R_{k-1} + \gamma_k\big(\psi_k\Lambda_k^{-1}\psi_k^T - R_{k-1}\big)$
$\Lambda_k = \Lambda_{k-1} + \gamma_k\big(\varepsilon_k\varepsilon_k^T - \Lambda_{k-1}\big)$

Projection of the parameters into the parameter domain $D_M$: with the tentative update $\bar\theta_k = \hat\theta_{k-1} + K_k\big[y_k - \varphi_k^T\hat\theta_{k-1}\big]$,
$\hat\theta_k = \bar\theta_k$ if $\bar\theta_k \in D_M$, and $\hat\theta_k = \hat\theta_{k-1}$ if $\bar\theta_k \notin D_M$.

17 Recursive Pseudolinear Regression (RPLR)

Also called recursive ML estimation or the extended LS method.

The regression model: $y_k = \varphi_k^T\theta + v_k$ with
$\theta = \big(a_1\ \cdots\ a_{n_A}\ \ b_1\ \cdots\ b_{n_B}\ \ c_1\ \cdots\ c_{n_C}\big)^T$
The regression vector:
$\varphi_k = \big({-y_{k-1}}\ \cdots\ {-y_{k-n_A}}\ \ u_{k-1}\ \cdots\ u_{k-n_B}\ \ \hat\varepsilon_{k-1}\ \cdots\ \hat\varepsilon_{k-n_C}\big)^T$
where $\hat\varepsilon$ is an estimate of $v$.

The recursive algorithm:
$\hat\theta_k = \hat\theta_{k-1} + K_k\varepsilon_k$
$K_k = P_{k-1}\varphi_k/\big(1 + \varphi_k^TP_{k-1}\varphi_k\big)$
$\varepsilon_k = y_k - \varphi_k^T\hat\theta_{k-1}$
$P_k = P_{k-1} - \dfrac{P_{k-1}\varphi_k\varphi_k^TP_{k-1}}{1 + \varphi_k^TP_{k-1}\varphi_k}$

The algorithm may be modified to iterate for the best possible $\hat\varepsilon$.
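A sketch of extended least squares for an ARMAX model, in which past residuals stand in for the unmeasured noise terms in the regressor; the default orders na, nb, nc are illustrative.

```python
import numpy as np

def els_armax(u, y, na=2, nb=2, nc=1):
    """Extended LS (RPLR) for A(q)y = B(q)u + C(q)e."""
    n = na + nb + nc
    theta, P = np.zeros(n), 1e3 * np.eye(n)
    eps_hist = np.zeros(nc)                        # estimates of past noise terms
    for k in range(max(na, nb), len(y)):
        phi = np.concatenate([-y[k-na:k][::-1],    # -y_{k-1} ... -y_{k-na}
                              u[k-nb:k][::-1],     #  u_{k-1} ...  u_{k-nb}
                              eps_hist])           #  eps_{k-1} ... eps_{k-nc}
        eps = y[k] - phi @ theta
        denom = 1.0 + phi @ P @ phi
        theta = theta + (P @ phi / denom) * eps
        P = P - np.outer(P @ phi, phi @ P) / denom
        eps_hist = np.concatenate([[eps], eps_hist])[:nc]
    return theta
```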

18 Application to Models

RPEM for the state-space innovation model. Predictor:
$\hat x_{k+1}(\theta) = F(\theta)\hat x_k(\theta) + G(\theta)u_k + K(\theta)v_k$
$\hat y_k = H(\theta)\hat x_k(\theta)$
Innovation: $v_k = y_k - \hat y_k(\theta)$

Define the sensitivities
$\psi_k(\theta) = \dfrac{d\hat y_k(\theta)}{d\theta},\qquad W_k(\theta) = \dfrac{d\hat x_k(\theta)}{d\theta},\qquad D_k(\theta) = \dfrac{\partial}{\partial\theta}\big[H(\theta)\hat x_k(\theta)\big],\qquad M_k(\theta) = \dfrac{\partial}{\partial\theta}\big[F(\theta)\hat x_k + G(\theta)u_k + K(\theta)\varepsilon_k\big]$

Algorithm (with $F_k = F(\hat\theta_k)$, $G_k = G(\hat\theta_k)$, $H_k = H(\hat\theta_k)$, $K_k = K(\hat\theta_k)$):
$\varepsilon_k = y_k - \hat y_k$
$\Lambda_k = \Lambda_{k-1} + \gamma_k\big(\varepsilon_k\varepsilon_k^T - \Lambda_{k-1}\big)$
$R_k = R_{k-1} + \gamma_k\big(\psi_k\Lambda_k^{-1}\psi_k^T - R_{k-1}\big)$
$\hat\theta_k = \hat\theta_{k-1} + \gamma_kR_k^{-1}\psi_k\Lambda_k^{-1}\varepsilon_k$
$\hat x_{k+1} = F_k\hat x_k + G_ku_k + K_k\varepsilon_k,\qquad \hat y_{k+1} = H_k\hat x_{k+1}$
$W_{k+1} = (F_k - K_kH_k)W_k + M_k - K_kD_k$
$\psi_{k+1}^T = H_kW_{k+1} + D_k$

19 RPEM for general input-output models

System:
$A(q)y_k = \dfrac{B(q)}{F(q)}u_k + \dfrac{C(q)}{D(q)}e_k$

with the polynomials
$A(q) = 1 + a_1q^{-1} + \cdots + a_{n_A}q^{-n_A}$
$B(q) = b_1q^{-1} + \cdots + b_{n_B}q^{-n_B}$
$F(q) = 1 + f_1q^{-1} + \cdots + f_{n_F}q^{-n_F}$
$C(q) = 1 + c_1q^{-1} + \cdots + c_{n_C}q^{-n_C}$
$D(q) = 1 + d_1q^{-1} + \cdots + d_{n_D}q^{-n_D}$

Predictor:
$\hat y_k(\theta) = \Big[1 - \dfrac{D(q)A(q)}{C(q)}\Big]y_k + \dfrac{D(q)B(q)}{C(q)F(q)}u_k$

Error definitions:
$w_k(\theta) = \dfrac{B(q)}{F(q)}u_k$
$v_k(\theta) = A(q)y_k - w_k(\theta)$
$\varepsilon_k(\theta) = y_k - \hat y_k(\theta) = \dfrac{D(q)}{C(q)}v_k(\theta)$

Parameter vector:
$\theta = \big(a_1\ \cdots\ a_{n_A}\ \ b_1\ \cdots\ b_{n_B}\ \ f_1\ \cdots\ f_{n_F}\ \ c_1\ \cdots\ c_{n_C}\ \ d_1\ \cdots\ d_{n_D}\big)^T$

Regressor:
$\varphi_k(\theta) = \big({-y_{k-1}}\ \cdots\ {-y_{k-n_A}}\ \ u_{k-1}\ \cdots\ u_{k-n_B}\ \ {-w_{k-1}}\ \cdots\ {-w_{k-n_F}}\ \ \varepsilon_{k-1}\ \cdots\ \varepsilon_{k-n_C}\ \ {-v_{k-1}}\ \cdots\ {-v_{k-n_D}}\big)^T$

20 Error calculations

$w_k(\theta) = b_1u_{k-1} + \cdots + b_{n_B}u_{k-n_B} - f_1w_{k-1}(\theta) - \cdots - f_{n_F}w_{k-n_F}(\theta)$
$v_k(\theta) = y_k + a_1y_{k-1} + \cdots + a_{n_A}y_{k-n_A} - w_k(\theta)$
$\varepsilon_k(\theta) = v_k(\theta) + d_1v_{k-1}(\theta) + \cdots + d_{n_D}v_{k-n_D}(\theta) - c_1\varepsilon_{k-1}(\theta) - \cdots - c_{n_C}\varepsilon_{k-n_C}(\theta)$

Expression for the prediction error: substituting the recursions above,
$\varepsilon_k(\theta) = y_k + a_1y_{k-1} + \cdots + a_{n_A}y_{k-n_A} - b_1u_{k-1} - \cdots - b_{n_B}u_{k-n_B} + f_1w_{k-1}(\theta) + \cdots + f_{n_F}w_{k-n_F}(\theta) - c_1\varepsilon_{k-1}(\theta) - \cdots - c_{n_C}\varepsilon_{k-n_C}(\theta) + d_1v_{k-1}(\theta) + \cdots + d_{n_D}v_{k-n_D}(\theta) = y_k - \theta^T\varphi_k(\theta) = y_k - \hat y_k(\theta)$

Gradient expressions: $\psi_k(\theta) = \dfrac{d\hat y_k(\theta)}{d\theta}$ with components
$\dfrac{\partial\hat y_k(\theta)}{\partial a_i} = -\dfrac{D(q)}{C(q)}\,y_{k-i}$
$\dfrac{\partial\hat y_k(\theta)}{\partial b_i} = \dfrac{D(q)}{C(q)F(q)}\,u_{k-i}$
$\dfrac{\partial\hat y_k(\theta)}{\partial f_i} = -\dfrac{D(q)}{C(q)F(q)}\,w_{k-i}(\theta)$
$\dfrac{\partial\hat y_k(\theta)}{\partial c_i} = \dfrac{1}{C(q)}\,\varepsilon_{k-i}(\theta)$
$\dfrac{\partial\hat y_k(\theta)}{\partial d_i} = -\dfrac{1}{C(q)}\,v_{k-i}(\theta)$

21 Algorithm

1. Form the regressor
$\varphi_k = \big({-y_{k-1}}\ \cdots\ {-y_{k-n_A}}\ \ u_{k-1}\ \cdots\ u_{k-n_B}\ \ {-w_{k-1}}\ \cdots\ {-w_{k-n_F}}\ \ \varepsilon_{k-1}\ \cdots\ \varepsilon_{k-n_C}\ \ {-v_{k-1}}\ \cdots\ {-v_{k-n_D}}\big)^T$
2. Update the auxiliary signals and the prediction error:
$w_k = b_1u_{k-1} + \cdots + b_{n_B}u_{k-n_B} - f_1w_{k-1} - \cdots - f_{n_F}w_{k-n_F}$
$v_k = y_k + a_1y_{k-1} + \cdots + a_{n_A}y_{k-n_A} - w_k$
$\varepsilon_k = y_k - \hat\theta_{k-1}^T\varphi_k = v_k + d_1v_{k-1} + \cdots + d_{n_D}v_{k-n_D} - c_1\varepsilon_{k-1} - \cdots - c_{n_C}\varepsilon_{k-n_C}$
3. Filter the regressor components, where $g_i$ denote the coefficients of $C(q)F(q)$:
$\tilde y_k = y_k + d_1y_{k-1} + \cdots + d_{n_D}y_{k-n_D} - c_1\tilde y_{k-1} - \cdots - c_{n_C}\tilde y_{k-n_C}$
$\tilde u_k = u_k + d_1u_{k-1} + \cdots + d_{n_D}u_{k-n_D} - g_1\tilde u_{k-1} - \cdots - g_{n_C+n_F}\tilde u_{k-n_C-n_F}$
$\tilde w_k = w_k + d_1w_{k-1} + \cdots + d_{n_D}w_{k-n_D} - g_1\tilde w_{k-1} - \cdots - g_{n_C+n_F}\tilde w_{k-n_C-n_F}$
$\tilde\varepsilon_k = \varepsilon_k - c_1\tilde\varepsilon_{k-1} - \cdots - c_{n_C}\tilde\varepsilon_{k-n_C}$
$\tilde v_k = v_k - c_1\tilde v_{k-1} - \cdots - c_{n_C}\tilde v_{k-n_C}$
4. Form the gradient
$\psi_k = \big({-\tilde y_{k-1}}\ \cdots\ {-\tilde y_{k-n_A}}\ \ \tilde u_{k-1}\ \cdots\ \tilde u_{k-n_B}\ \ {-\tilde w_{k-1}}\ \cdots\ {-\tilde w_{k-n_F}}\ \ \tilde\varepsilon_{k-1}\ \cdots\ \tilde\varepsilon_{k-n_C}\ \ {-\tilde v_{k-1}}\ \cdots\ {-\tilde v_{k-n_D}}\big)^T$
5. Update
$R_k = R_{k-1} + \gamma_k\big(\psi_k\psi_k^T - R_{k-1}\big)$
$\hat\theta_k = \hat\theta_{k-1} + \gamma_kR_k^{-1}\psi_k\varepsilon_k$

22 Extended Kalman Filter

Kalman filter for a nonlinear state-space model. System:
$x_{k+1} = F(x_k, \theta) + G(\theta)u_k + w_k\qquad \big(E\{w_iw_j^T\} = R_1\delta_{ij}\big)$
$y_k = H(x_k, \theta) + e_k\qquad \big(E\{e_ie_j^T\} = R_2\delta_{ij},\ E\{w_ie_j^T\} = R_{12}\delta_{ij}\big)$

With the extended state vector $X = \begin{pmatrix}x\\ \theta\end{pmatrix}$:
$X_{k+1} = \bar F(X_k) + \bar G(\theta_k)u_k + \bar w_k$
$y_k = \bar H(X_k) + e_k$
where
$\bar F(X) = \begin{pmatrix}F(x,\theta)\\ \theta\end{pmatrix},\qquad \bar G(\theta) = \begin{pmatrix}G(\theta)\\ 0\end{pmatrix},\qquad \bar w = \begin{pmatrix}w\\ 0\end{pmatrix},\qquad \bar H(X) = H(x,\theta)$

Linearization:
$F_k = \dfrac{d\bar F(X,u)}{dX}\Big|_{X=\hat X_k},\qquad H_k = \dfrac{d\bar H(X,u)}{dX}\Big|_{X=\hat X_k}$

23 Algorithm

Given $\hat X_k$, $\hat\theta_k$, $P_k$ (start from $k = 0$):
$K_k = \big[F_kP_kH_k^T + R_{12}\big]\big[H_kP_kH_k^T + R_2\big]^{-1}$
$\hat X_{k+1} = \bar F(\hat X_k) + \bar G(\hat\theta_k)u_k + K_k\big[y_k - \bar H(\hat X_k)\big]$
$P_{k+1} = F_kP_kF_k^T + R_1 - K_k\big[H_kP_kH_k^T + R_2\big]K_k^T$

To avoid the calculation of large matrices, partition the matrices.

Use of the latest available measurements to update the parameter estimates (measurement update followed by time update):
$K_k = P_kH_k^T\big[H_kP_kH_k^T + R_2\big]^{-1}$
$\hat X_{k|k} = \hat X_k + K_k\big[y_k - \bar H(\hat X_k)\big]$
$\hat X_{k+1} = \bar F(\hat X_{k|k}) + \bar G(\hat\theta_k)u_k$
$F_k = \dfrac{d\bar F(X,u)}{dX}\Big|_{X=\hat X_{k|k}},\qquad H_k = \dfrac{d\bar H(X,u)}{dX}\Big|_{X=\hat X_k}$
$P_{k+1} = F_kP_kF_k^T + R_1 - K_k\big[H_kP_kH_k^T + R_2\big]K_k^T$
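A generic sketch of the measurement-update/time-update variant on the augmented state X = [x; theta]; the model functions f, h and their Jacobians F_jac, H_jac are user-supplied stand-ins for the quantities on the slide, and R1 should contain a small nonzero block for the parameter states so they remain adaptable.

```python
import numpy as np

def ekf_step(X, P, u, y, f, h, F_jac, H_jac, R1, R2):
    """One EKF step on the augmented state (measurement update, then time update)."""
    H = H_jac(X, u)
    S = H @ P @ H.T + R2
    K = P @ H.T @ np.linalg.inv(S)
    X = X + K @ (y - h(X, u))   # measurement update with the latest data
    P = P - K @ S @ K.T
    F = F_jac(X, u)
    X = f(X, u)                 # time update through the nonlinear model
    P = F @ P @ F.T + R1        # nonzero R1 block for theta keeps it adjustable
    return X, P
```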

24 Subspace Methods for Estimating State-Space Models

Estimation of the system matrices A, B, C, and D offline:
$x_{k+1} = Ax_k + Bu_k + w_k,\qquad x\in\mathbb{R}^n,\ u\in\mathbb{R}^p$
$y_k = Cx_k + Du_k + v_k,\qquad y\in\mathbb{R}^m$
assuming a minimal realization.

If estimates of A and C are known, estimates of B and D can be obtained by the linear least-squares method from
$y_k = C(qI - A)^{-1}Bu_k + Du_k + v_k$
or, accounting for the initial state,
$y_k = C(qI - A)^{-1}x_0\delta_k + C(qI - A)^{-1}Bu_k + Du_k + v_k$
The estimates of B and D will converge to the true values if A and C are exactly known, or at least consistent.

If the (extended) observability matrix
$\mathcal{O}_r = \begin{pmatrix}C\\ CA\\ \vdots\\ CA^{r-1}\end{pmatrix}\qquad (r > n)$
is known, then A and C can be estimated.

25 Under a linear state transformation $\bar x = Tx$, the observability matrix becomes $\bar{\mathcal{O}}_r = \mathcal{O}_rT^{-1}$, so the estimate is determined only up to a similarity transformation.

For known system order ($n^* = n$), with $G = \mathcal{O}_r \in \mathbb{R}^{pr\times n}$:
$C = \mathcal{O}_r(1\!:\!p,\ 1\!:\!n)$
$\mathcal{O}_r(p{+}1\!:\!pr,\ 1\!:\!n) = \mathcal{O}_r(1\!:\!p(r{-}1),\ 1\!:\!n)\,A$ (shift invariance; solve for A)

For unknown system order ($n^* > n$), perform the SVD $G = USV^T$. Partition the matrices according to the singular values and neglect the portion corresponding to the smaller singular values:
$G = USV^T \approx U_1S_1V_1^T = \hat{\mathcal{O}}_r\hat X$
The estimate of the observability matrix can be taken as $\hat{\mathcal{O}}_r = U_1S_1^{1/2}$ or $\hat{\mathcal{O}}_r = U_1$, etc.; in general $\hat{\mathcal{O}}_r = U_1R$ with $R$ invertible.

Noisy estimate of the extended observability matrix:
$G = USV^T = U_1S_1V_1^T + (\text{other terms}) = \mathcal{O}_r + E_N$
If $\mathcal{O}_r$ explains the system well, $E_N$ stems from the noise; if $E_N$ is small, the estimate is consistent. Estimates of A and C are then obtained from the estimated observability matrix (see the sketch below).
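A sketch of recovering C and A from an estimated extended observability matrix via the shift-invariance relation above, assuming p outputs and a known order n.

```python
import numpy as np

def ac_from_obsv(Or, p, n):
    """Recover C and A from Or (pr x n): C is the top block, and
    Or[p:, :] = Or[:-p, :] @ A is solved in the least-squares sense."""
    C = Or[:p, :n]
    A, *_ = np.linalg.lstsq(Or[:-p, :n], Or[p:, :n], rcond=None)
    return A, C
```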

26 Using weighting matrices in the SVD

For flexibility, a pretreatment before the SVD can be applied:
$\tilde G = W_1GW_2 = USV^T \approx U_1S_1V_1^T$
Then the estimate of the extended observability matrix becomes
$\hat{\mathcal{O}}_r = W_1^{-1}U_1R$
When noise is present, $W_1$ has an important influence on the space spanned by $U_1$, and hence on the quality of the estimates of A and C.

Estimating the extended observability matrix — the basic expression:
$y_k = Cx_k + Du_k + v_k$
$y_{k+1} = CAx_k + CBu_k + Du_{k+1} + Cw_k + v_{k+1}$
$\vdots$
$y_{k+i} = CA^ix_k + CA^{i-1}Bu_k + CA^{i-2}Bu_{k+1} + \cdots + CBu_{k+i-1} + Du_{k+i} + CA^{i-1}w_k + CA^{i-2}w_{k+1} + \cdots + Cw_{k+i-1} + v_{k+i}$

27 Define vectors

$Y_k = \begin{pmatrix}y_k\\ y_{k+1}\\ \vdots\\ y_{k+r-1}\end{pmatrix},\qquad U_k = \begin{pmatrix}u_k\\ u_{k+1}\\ \vdots\\ u_{k+r-1}\end{pmatrix},\qquad S_r = \begin{pmatrix}D & & & \\ CB & D & & \\ \vdots & & \ddots & \\ CA^{r-2}B & \cdots & CB & D\end{pmatrix}$

Then
$Y_k = \mathcal{O}_rx_k + S_rU_k + V_k$

Introduce
$Y = [Y_1\ Y_2\ \cdots\ Y_N],\quad X = [x_1\ x_2\ \cdots\ x_N],\quad U = [U_1\ U_2\ \cdots\ U_N],\quad V = [V_1\ V_2\ \cdots\ V_N]$
Then
$Y = \mathcal{O}_rX + S_rU + V$

To remove the U-term, use the projection orthogonal to U:
$\Pi_{U^\perp} = I - U^T(UU^T)^{-1}U$
$Y\Pi_{U^\perp} = \mathcal{O}_rX\Pi_{U^\perp} + V\Pi_{U^\perp}\qquad\text{since}\quad U\Pi_{U^\perp} = U - UU^T(UU^T)^{-1}U = 0$

Choose a matrix $\Phi$ so that the effect of the noise vanishes:
$G = \frac{1}{N}Y\Pi_{U^\perp}\Phi^T = \frac{1}{N}\mathcal{O}_rX\Pi_{U^\perp}\Phi^T + \frac{1}{N}V\Pi_{U^\perp}\Phi^T \to \mathcal{O}_r\bar X$
$\lim_{N\to\infty}\frac{1}{N}V\Pi_{U^\perp}\Phi^T = 0,\qquad \lim_{N\to\infty}\frac{1}{N}X\Pi_{U^\perp}\Phi^T = \bar X\ \ (\text{has full rank } n)$

28 Finding good instruments

Let $\Phi = [\varphi_1^s\ \varphi_2^s\ \cdots\ \varphi_N^s]$. Then
$\frac{1}{N}V\Pi_{U^\perp}\Phi^T = \frac{1}{N}\sum_{k=1}^{N}V_k(\varphi_k^s)^T - \Big(\frac{1}{N}\sum_{k=1}^{N}V_kU_k^T\Big)\bar R_u^{-1}\Big(\frac{1}{N}\sum_{k=1}^{N}U_k(\varphi_k^s)^T\Big)$

From the law of large numbers,
$\lim_{N\to\infty}\frac{1}{N}V\Pi_{U^\perp}\Phi^T = E\{V_k(\varphi_k^s)^T\} - E\{V_kU_k^T\}\,\bar R_u^{-1}\,E\{U_k(\varphi_k^s)^T\} = 0$
where $\bar R_u = E\{U_kU_k^T\}$ (if $V$ and $U$ are independent).

Thus, choose $\varphi_k^s$ so that it is uncorrelated with $V$. The typical choice is past inputs and outputs:
$\varphi_k^s = \big(y_{k-1}\ \cdots\ y_{k-s_1}\ \ u_{k-1}\ \cdots\ u_{k-s_2}\big)^T$

29 Finding the states and estimating the noise statistics

r-step ahead predictor:
$Y_k = \Theta\varphi_k^s + \Gamma U_k + E_k\qquad\Rightarrow\qquad Y = \Theta F + \Gamma U + E,\quad F = [\varphi_1^s\ \cdots\ \varphi_N^s]$

Least-squares estimate of the parameters:
$[\hat\Theta\ \ \hat\Gamma] = [YF^T\ \ YU^T]\begin{pmatrix}FF^T & FU^T\\ UF^T & UU^T\end{pmatrix}^{-1},\qquad \hat\Theta = Y\Pi_{U^\perp}F^T\big(F\Pi_{U^\perp}F^T\big)^{-1}$

Predicted output:
$\hat Y = [\hat Y_1\ \cdots\ \hat Y_N] = \hat\Theta F = Y\Pi_{U^\perp}F^T\big(F\Pi_{U^\perp}F^T\big)^{-1}F$

SVD and deletion of the small singular values:
$\hat Y \approx U_1S_1V_1^T = \hat{\mathcal{O}}_r\hat X\qquad \big(\hat{\mathcal{O}}_r = U_1R,\ \hat X = R^{-1}S_1V_1^T\big)$
Alternatively,
$\hat X = L\hat Y = [\hat x_1\ \cdots\ \hat x_N]\quad\text{where } L = R^{-1}U_1^T\quad \big(R^{-1}U_1^TU_1S_1V_1^T = R^{-1}S_1V_1^T\big)$

The noise characteristics can then be calculated from
$\hat w_k = \hat x_{k+1} - \hat A\hat x_k - \hat Bu_k,\qquad \hat v_k = y_k - \hat C\hat x_k - \hat Du_k$

30 Subspace identification algorithm

1. From the input-output data, form $G = \frac{1}{N}Y\Pi_{U^\perp}\Phi^T$.
2. Select weighting matrices $W_1$ and $W_2$ and perform the SVD $\tilde G = W_1GW_2 = USV^T \approx U_1S_1V_1^T$. Common choices:
   MOESP: $W_1 = I$, $W_2 = \big(\frac{1}{N}\Phi\Pi_{U^\perp}\Phi^T\big)^{-1}\Phi\Pi_{U^\perp}$
   N4SID: $W_1 = I$, $W_2 = \big(\frac{1}{N}\Phi\Pi_{U^\perp}\Phi^T\big)^{-1}\Phi$
   IVM: $W_1 = \big(\frac{1}{N}Y\Pi_{U^\perp}Y^T\big)^{-1/2}$, $W_2 = \big(\frac{1}{N}\Phi\Phi^T\big)^{-1/2}$
   CVA: $W_1 = \big(\frac{1}{N}Y\Pi_{U^\perp}Y^T\big)^{-1/2}$, $W_2 = \big(\frac{1}{N}\Phi\Pi_{U^\perp}\Phi^T\big)^{-1/2}$
3. Select a full-rank matrix $R$ and define $\hat{\mathcal{O}}_r = W_1^{-1}U_1R$ (typical choices are $R = I$, $R = S_1$, or $R = S_1^{1/2}$). Then solve for C and A from
   $C = \hat{\mathcal{O}}_r(1\!:\!p,\ 1\!:\!n),\qquad \hat{\mathcal{O}}_r(p{+}1\!:\!pr,\ 1\!:\!n) = \hat{\mathcal{O}}_r(1\!:\!p(r{-}1),\ 1\!:\!n)\,A$
4. Estimate B, D, and $x_0$ from the linear regression problem
   $\min_{B,D,x_0}\ \frac{1}{N}\sum_{k=1}^{N}\big\|y_k - C(qI-A)^{-1}Bu_k - Du_k - C(qI-A)^{-1}x_0\delta_k\big\|^2$
5. If a noise model is sought, calculate $\hat X$ and estimate the noise contributions.
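A minimal end-to-end sketch in the MOESP spirit for a SISO system (p = 1); the past-horizon choice and the explicit N x N projector are for clarity only, as practical implementations use QR factorizations instead.

```python
import numpy as np

def subspace_ac(u, y, n, r=10):
    """Estimate A, C by projecting future outputs orthogonal to future inputs,
    correlating with past data as instruments, and applying the SVD."""
    N = len(y) - 2 * r
    Yf = np.vstack([y[r+i : r+i+N] for i in range(r)])   # future outputs
    Uf = np.vstack([u[r+i : r+i+N] for i in range(r)])   # future inputs
    Phi = np.vstack([np.vstack([y[i : i+N] for i in range(r)]),
                     np.vstack([u[i : i+N] for i in range(r)])])  # past y and u
    Pi = np.eye(N) - Uf.T @ np.linalg.solve(Uf @ Uf.T, Uf)        # proj. orth. to Uf
    G = Yf @ Pi @ Phi.T / N
    U_, S, _ = np.linalg.svd(G, full_matrices=False)
    Or = U_[:, :n] * np.sqrt(S[:n])                      # O_r = U1 S1^(1/2)
    C = Or[:1, :]                                        # top block (p = 1)
    A, *_ = np.linalg.lstsq(Or[:-1, :], Or[1:, :], rcond=None)
    return A, C
```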

31 Nonlinear System Identification

General nonlinear systems:
$\dot x(t) = f(x(t)) + g(x(t), u(t)) + v(t)$
$y(t) = h(x(t), u(t))$

Discrete-time nonlinear models:
$x_{k+1} = f(x_k, u_k) + v_k$
$y_k = h(x_k, u_k) + w_k$

Hammerstein models (static nonlinearity at the input):
$y_k = \dfrac{B(z)}{A(z)}F(u_k)$

Wiener models (static nonlinearity at the output):
$y_k = F\Big(\dfrac{B(z)}{A(z)}u_k\Big)$

32 Wiener Models

The nonlinear aspects are approximated by Laguerre and Hermite series expansions.

Laguerre operators (continuous and discrete time):
$L_i(s) = \dfrac{1}{1+s\tau}\Big(\dfrac{1-s\tau}{1+s\tau}\Big)^i,\qquad L_i(z) = \dfrac{z\sqrt{1-a^2}}{z-a}\Big(\dfrac{1-az}{z-a}\Big)^i$

Hermite polynomials:
$H_i(x) = (-1)^i e^{x^2}\dfrac{d^i}{dx^i}e^{-x^2}$

The dynamics are approximated by Laguerre filters and the static nonlinearity is approximated by the Hermite polynomials:
$x_{i,k} = L_i(z)u_k$
$\hat y_k = \sum_{i_1=0}^{n}\sum_{i_2=0}^{n}\cdots\sum_{i_n=0}^{n}c_{i_1i_2\cdots i_n}H_{i_1}(x_{1,k})H_{i_2}(x_{2,k})\cdots H_{i_n}(x_{n,k})$
with the coefficients estimated from data:
$\hat c_{i_1i_2\cdots i_n} = \frac{1}{N}\sum_{k=1}^{N}y_kH_{i_1}(x_{1,k})H_{i_2}(x_{2,k})\cdots H_{i_n}(x_{n,k})$

33 Volterra-Wiener Models

Volterra series expansion:
$y_k = h_0 + \sum_{i=0}^{L}h_iu_{k-i} + \sum_{i_1=0}^{L}\sum_{i_2=0}^{L}h_{i_1i_2}u_{k-i_1}u_{k-i_2} + \sum_{i_1=0}^{L}\sum_{i_2=0}^{L}\sum_{i_3=0}^{L}h_{i_1i_2i_3}u_{k-i_1}u_{k-i_2}u_{k-i_3} + \cdots$

Volterra kernel: the n-dimensional weighting function $h_{i_1\cdots i_n}$ (multidimensional impulse response coefficients); L plays the same role as the model horizon in MPC.

Limitations of Volterra-Wiener models:
- Difficult to extend to systems with feedback
- Difficult to relate the estimated Volterra kernels to a priori information

34 Power Series Expansions

Example: time-domain identification of a nonlinear system.

System: $x_{k+1} = ax_k + bx_ku_k + b_0u_k$
Unknown parameters: $\theta = (a\ \ b\ \ b_0)^T$

Collection of data:
$\underbrace{\begin{pmatrix}x_1\\ \vdots\\ x_N\end{pmatrix}}_{Y} = \underbrace{\begin{pmatrix}x_0 & x_0u_0 & u_0\\ \vdots & \vdots & \vdots\\ x_{N-1} & x_{N-1}u_{N-1} & u_{N-1}\end{pmatrix}}_{\Phi}\begin{pmatrix}a\\ b\\ b_0\end{pmatrix}$

Least-squares solution: $\hat\theta = (\Phi^T\Phi)^{-1}\Phi^TY$

Properties of this approach:
- Standard statistical validation tests are applicable without extensive modifications
- A frequency-domain approach is also possible
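A sketch of the least-squares fit for the bilinear example above, built directly from recorded state and input sequences.

```python
import numpy as np

def fit_bilinear(x, u):
    """LS fit of x_{k+1} = a x_k + b x_k u_k + b0 u_k; x has one more sample than u."""
    Phi = np.column_stack([x[:-1], x[:-1] * u, u])   # rows (x_k, x_k u_k, u_k)
    Y = x[1:]
    theta, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
    return theta                                      # (a, b, b0)
```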

35 Continuous-time version

System: $\dot x = ax + bxu + b_0u$; unknown parameters $\theta = (a\ \ b\ \ b_0)^T$

Solving the ODE (integrating over each sampling interval):
$x_{k+1} - x_k = a\int_{t_k}^{t_{k+1}}x\,dt + b\int_{t_k}^{t_{k+1}}xu\,dt + b_0\int_{t_k}^{t_{k+1}}u\,dt$

Collection of data:
$\begin{pmatrix}x_1 - x_0\\ \vdots\\ x_N - x_{N-1}\end{pmatrix} = \begin{pmatrix}\int_{t_0}^{t_1}x\,dt & \int_{t_0}^{t_1}xu\,dt & \int_{t_0}^{t_1}u\,dt\\ \vdots & \vdots & \vdots\\ \int_{t_{N-1}}^{t_N}x\,dt & \int_{t_{N-1}}^{t_N}xu\,dt & \int_{t_{N-1}}^{t_N}u\,dt\end{pmatrix}\begin{pmatrix}a\\ b\\ b_0\end{pmatrix}$

Least-squares solution: $\hat\theta = (\Phi^T\Phi)^{-1}\Phi^TY$

Power-series expansion of a general nonlinear system:
$\dot x(t) = \sum_{i=1}^{m}a_ix^i(t) + \sum_{i=1}^{m}\sum_{j=1}^{m}b_{ij}x^i(t)u^j(t) + v(t)$
Similarly, the parameters can be obtained from the least-squares method.
