
Räkneövningar Empirisk modellering
(Exercises, Empirical Modelling)

Bengt Carlsson
Systems and Control, Dept of Information Technology, Uppsala University

5th February 2009

Abstract: Exercise problems together with some complementary theory.

Contents

1 L1 - Linear regression
  1.1 Problems
  1.2 Solutions
2 L2 - Stochastic processes and discrete time systems
  2.1 Problems
  2.2 Solutions
  2.3 Properties of covariance functions
  2.4 Examples of covariance functions
3 L3 and L4 - Parameter estimation
  3.1 Problems
  3.2 Solutions
4 L5 - Some additional problems
  4.1 Complementary theory - Analysis of the least squares estimate
    4.1.1 Results from "Linear Regression"
    4.1.2 Results for the case when A3 does not hold
    4.1.3 Bias, variance and mean squared error
  4.2 Problems
  4.3 Solutions

1 L1 - Linear regression

1.1 Problems

Problem 1. A linear trend model.

a) Consider a linear regression model

    $\hat y(t) = a + bt$

Calculate the least squares estimate of $a$ and $b$ for the following two cases.

(i) The data are $y(1), y(2), \ldots, y(N)$. For this case, use $S_0 = \sum_{t=1}^{N} y(t)$ and $S_1 = \sum_{t=1}^{N} t\,y(t)$.

(ii) The data are $y(-N), y(-N+1), \ldots, y(N)$. For this case, use $S_0 = \sum_{t=-N}^{N} y(t)$ and $S_1 = \sum_{t=-N}^{N} t\,y(t)$.

Hints:

    $\sum_{t=1}^{N} t = \frac{N(N+1)}{2}, \qquad \sum_{t=1}^{N} t^2 = \frac{N(N+1)(2N+1)}{6}$

b) Suppose that the parameter $a$ is estimated with

    $\hat a = \frac{S_0}{N}$ (case (i) above), $\qquad \hat a = \frac{S_0}{2N+1}$ (case (ii) above).

The parameter $b$ is then estimated by the least squares method using the model structure

    $\hat y(t) - \hat a = bt$

Calculate $\hat b$ for the two cases and compare with the estimates obtained in (a).

c) Assume that the data $y(1), \ldots, y(N)$ are generated by

    $y(t) = a_0 + b_0 t + e(t)$

where $e(t)$ is white noise with variance $\lambda^2$. Calculate the variance of the quantity $\hat y(t) = \hat a + \hat b t$. What is the variance for $t = 1$ and $t = N$? For which $t$ is the variance minimal? Hint: Let $\theta = (a\ b)^T$ and $\varphi(t) = (1\ t)^T$. Then

    $\mathrm{var}\,\hat y(t) = \varphi(t)^T P \varphi(t)$

where $P = \mathrm{var}\,\hat\theta$.

Problem 2. Some accuracy results for linear trend models.

a) Assume that the data $y(1), \ldots, y(N)$ are generated by

    $y(t) = a_0 + b_0 t + e(t)$

where $e(t)$ is white noise with variance $\lambda^2$. The parameters in a linear trend model $\hat y(t) = a + bt$ are estimated with the least squares method. Calculate the variance of $\hat b$.

b) Assume that we difference the data and introduce a new signal

    $z(t) = y(t) - y(t-1), \qquad t = 2, 3, \ldots, N$

The data $z(t)$ then obey

    $z(t) = b_0 + w(t)$          (1)

where the new noise source is $w(t) = e(t) - e(t-1)$. The parameter $b_0$ may then be estimated from the model

    $\hat z(t) = b$

Calculate the variance of $\hat b$ and compare with the accuracy obtained in (a). Hint: Note that the noise $w(t)$ in (1) is not white. Hence the expression (??) needs to be used when calculating the variance.

Problem 3. The problem of collinearity. Consider the following model:

    $\hat y(t) = b_1 u_1(t) + b_2 u_2(t)$

Here $u_1(t)$ and $u_2(t)$ are two measured input signals. Suppose that the data are generated by

    $y(t) = b_1^0 u_1(t) + b_2^0 u_2(t)$

where $u_1(t) = K$ and $u_2(t) = L$, that is, two constant signals with amplitudes $K$ and $L$ are used as input signals. Show that $\det \Phi^T\Phi = 0$, and hence that the least squares method cannot be used. Remark: The columns of $\Phi$ must be linearly independent for $(\Phi^T\Phi)^{-1}$ to exist.

1.2 Solutions

1) In general we have

    $\hat\theta = \left[\sum_{t}\varphi(t)\varphi^T(t)\right]^{-1}\sum_{t}\varphi(t)y(t) = [\Phi^T\Phi]^{-1}\Phi^T Y$          (2)

For the considered model $\theta = (a\ b)^T$ and $\varphi(t) = (1\ t)^T$.

Case (i). Data $y(1), \ldots, y(N)$.

    $\hat\theta = \begin{pmatrix} N & \sum t \\ \sum t & \sum t^2 \end{pmatrix}^{-1}\begin{pmatrix} S_0 \\ S_1 \end{pmatrix} = \begin{pmatrix} N & \frac{N(N+1)}{2} \\ \frac{N(N+1)}{2} & \frac{N(N+1)(2N+1)}{6} \end{pmatrix}^{-1}\begin{pmatrix} S_0 \\ S_1 \end{pmatrix}$

which gives

    $\hat a = \frac{1}{N(N-1)}\bigl[2(2N+1)S_0 - 6S_1\bigr], \qquad \hat b = \frac{6}{N(N-1)(N+1)}\bigl[2S_1 - (N+1)S_0\bigr]$

Case (ii). Data $y(-N), \ldots, y(N)$. This gives $2N+1$ data points. All sums now run over $t = -N, \ldots, N$, so $\sum t = 0$ and $\sum t^2 = \frac{N(N+1)(2N+1)}{3}$, and

    $\hat\theta = \begin{pmatrix} 2N+1 & 0 \\ 0 & \frac{N(N+1)(2N+1)}{3} \end{pmatrix}^{-1}\begin{pmatrix} S_0 \\ S_1 \end{pmatrix} = \begin{pmatrix} \frac{S_0}{2N+1} \\ \frac{3S_1}{N(N+1)(2N+1)} \end{pmatrix}$

b) The model $y(t) - \hat a = bt$ gives the least squares estimate

    $\hat b = \frac{\sum t\,(y(t) - \hat a)}{\sum t^2} = \frac{S_1 - \hat a \sum t}{\sum t^2}$

Case (i):

    $\hat b = \frac{S_1 - \frac{N+1}{2}S_0}{\frac{N(N+1)(2N+1)}{6}} = \frac{3}{N(N+1)(2N+1)}\bigl[2S_1 - (N+1)S_0\bigr]$

which is not equal to the solution in (a) and is therefore wrong.

Case (ii):

    $\hat b = \frac{S_1}{\sum t^2} = \frac{3}{N(N+1)(2N+1)}\,S_1$          (3)

which is equal to the solution in (a) and is therefore correct.
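The closed-form expressions in case (i) are easy to check numerically. The sketch below compares them with a generic least squares solve; the data-generating values ($a_0 = 1$, $b_0 = 0.5$, $N = 50$, $\lambda = 1$) are assumed purely for illustration.

```python
import numpy as np

# Sanity check of the case (i) formulas; a0, b0, N, lam are assumed values.
rng = np.random.default_rng(0)
N, a0, b0, lam = 50, 1.0, 0.5, 1.0
t = np.arange(1, N + 1)
y = a0 + b0 * t + lam * rng.standard_normal(N)

S0, S1 = y.sum(), (t * y).sum()
a_hat = (2 * (2 * N + 1) * S0 - 6 * S1) / (N * (N - 1))
b_hat = 6 * (2 * S1 - (N + 1) * S0) / (N * (N - 1) * (N + 1))

# Compare with a generic least squares solve of Phi * theta = y.
Phi = np.column_stack([np.ones(N), t])
theta_ls = np.linalg.lstsq(Phi, y, rcond=None)[0]
print(a_hat, b_hat)   # closed-form estimates
print(theta_ls)       # should agree to machine precision
```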

c) We have

    $P = \lambda^2\left[\sum_{t=1}^{N}\varphi(t)\varphi^T(t)\right]^{-1} = \lambda^2\begin{pmatrix} N & \frac{N(N+1)}{2} \\ \frac{N(N+1)}{2} & \frac{N(N+1)(2N+1)}{6} \end{pmatrix}^{-1}$          (4)

Straightforward calculations using the $P$ above and $\varphi(t) = (1\ t)^T$ give

    $\mathrm{var}\,\hat y(t) = \varphi(t)^T P \varphi(t) = \frac{12\lambda^2}{N(N+1)(N-1)}\left[t^2 - t(N+1) + \frac{(N+1)(2N+1)}{6}\right]$

and

    $\mathrm{var}\,\hat y(1) = \mathrm{var}\,\hat y(N) = \frac{2\lambda^2(2N-1)}{N(N+1)} \approx \frac{4\lambda^2}{N}$ when $N$ is large.

The $t$ giving minimal variance is obtained by setting

    $\frac{d\,\mathrm{var}\,\hat y(t)}{dt} = \frac{12\lambda^2}{N(N+1)(N-1)}\bigl[2t - (N+1)\bigr] = 0$

which gives $t = \frac{N+1}{2}$. Hence, the variance is minimal in the middle of the observation interval. Furthermore,

    $\mathrm{var}\,\hat y\!\left(\frac{N+1}{2}\right) = \frac{12\lambda^2}{N(N+1)(N-1)}\left[\frac{(N+1)^2}{4} - \frac{(N+1)^2}{2} + \frac{(N+1)(2N+1)}{6}\right] = \frac{\lambda^2}{N}$

For large data sets, the standard deviation thus decreases from $2\lambda/\sqrt{N}$ at $t = 1$ and $t = N$ to $\lambda/\sqrt{N}$ in the middle of the interval.

2a) For a linear regression we have that

    $\mathrm{cov}\,\hat\theta = \lambda^2[\Phi^T\Phi]^{-1} = \lambda^2\left[\sum_{t=1}^{N}\varphi(t)\varphi^T(t)\right]^{-1}$

For a linear trend $\varphi(t) = (1\ t)^T$. This gives

    $\mathrm{cov}\,\hat\theta = \lambda^2\begin{pmatrix} N & \frac{N(N+1)}{2} \\ \frac{N(N+1)}{2} & \frac{N(N+1)(2N+1)}{6} \end{pmatrix}^{-1}$

The variance of $\hat b$ is then found (the matrix above needs to be inverted) to be

    $\mathrm{var}\,\hat b = \frac{12\lambda^2}{N(N+1)(N-1)}$

b) When the data are differenced, the system is $z(t) = b_0 + w(t)$, where $w(t) = e(t) - e(t-1)$. We have

    $R_w(0) = E\,w^2(t) = E[e(t) - e(t-1)]^2 = 2\lambda^2$
    $R_w(1) = E\,w(t+1)w(t) = E[e(t+1) - e(t)][e(t) - e(t-1)] = -\lambda^2$
    $R_w(\tau) = 0 \quad \text{for } |\tau| \ge 2$

The noise is not white, and in order to calculate the variance we need to calculate $R = E\,\mathbf{w}\mathbf{w}^T$, where $\mathbf{w} = (w(2)\ w(3)\ \cdots\ w(N))^T$. See Section 4.3 in "Linear Regression". Note that when the data are differenced one data point is lost. We have

    $R = \lambda^2\begin{pmatrix} 2 & -1 & 0 & \cdots \\ -1 & 2 & -1 & \cdots \\ 0 & -1 & 2 & \cdots \\ \vdots & & & \ddots \end{pmatrix}$

For the model $\hat z(t) = b$ we have $\varphi(t) = 1$ and $\Phi = (1\ \cdots\ 1)^T$ (with $N-1$ rows). We can now calculate the variance of $\hat b$ (the least squares estimate) from the covariance matrix (which becomes a plain variance since $\hat b$ is a scalar):

    $\mathrm{cov}\,\hat b = \mathrm{var}\,\hat b = (\Phi^T\Phi)^{-1}\Phi^T R \Phi (\Phi^T\Phi)^{-1} = \frac{2\lambda^2}{(N-1)^2}$          (5)

It is easily seen that this expression is larger than the one in (a).
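The loss of accuracy caused by differencing can also be seen in a small Monte Carlo experiment. The sketch below (with assumed values $N = 40$, $\lambda^2 = 1$ and an illustrative trend) compares the sample variances of the two estimators with the theoretical expressions above.

```python
import numpy as np

# Monte Carlo comparison: trend fit versus differenced-data estimate of b.
rng = np.random.default_rng(1)
N, lam2, M = 40, 1.0, 20000
t = np.arange(1, N + 1)
Phi = np.column_stack([np.ones(N), t])

b_trend, b_diff = np.empty(M), np.empty(M)
for k in range(M):
    e = np.sqrt(lam2) * rng.standard_normal(N)
    y = 2.0 + 0.3 * t + e              # a0 = 2, b0 = 0.3 (illustrative)
    b_trend[k] = np.linalg.lstsq(Phi, y, rcond=None)[0][1]
    b_diff[k] = np.mean(np.diff(y))    # LS estimate of b in z(t) = b + w(t)

print(b_trend.var(), 12 * lam2 / (N * (N + 1) * (N - 1)))  # theory (a)
print(b_diff.var(), 2 * lam2 / (N - 1) ** 2)               # theory (b)
```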

3) We have $\varphi(t) = (u_1(t)\ u_2(t))^T$, with $u_1(t) = K$ and $u_2(t) = L$. We assume the number of data points to be $N$. This gives the $(N \times 2)$ matrix

    $\Phi = \begin{pmatrix} K & L \\ \vdots & \vdots \\ K & L \end{pmatrix}$

and

    $\Phi^T\Phi = N\begin{pmatrix} K^2 & KL \\ KL & L^2 \end{pmatrix}$

We then see that $\det(\Phi^T\Phi) = 0$ and hence the LS method cannot be used. It is not possible to determine the parameters uniquely from this data set (which should also be intuitively clear).

2 L2 - Stochastic processes and discrete time systems

2.1 Problems

1) Static gain and stability. Consider the discrete time system $y(t) = H(q)u(t)$, where

    $H(q) = \frac{b + q^{-1}}{1 - a q^{-1}}$

Calculate the poles, zeros and static gain of the system. Is the system stable?

2) Spectrum. The following stochastic process is given:

    $y(t) - a\,y(t-1) = e(t) - c\,e(t-1)$

where $e(t)$ is white noise with zero mean and variance $\lambda^2$.

a) Determine the spectrum of $y$, $\Phi_y(\omega)$.

b) Determine the cross spectrum $\Phi_{ye}(\omega)$.

3) Covariance function and spectrum. Consider the following system: $y(t) = H(q)u(t)$, where

    $H(q) = q^{-1} + q^{-2}$

The input signal has the following covariance function: $R_u(0) = 1$, $R_u(1) = R_u(-1) = 0.5$, and $R_u(\tau) = 0$ for $|\tau| \ge 2$. Calculate the spectrum of the output signal.

4) Covariance functions. Calculate the covariance function for the following stochastic processes, where $e(t)$ is white noise with variance $\lambda^2$:

a) $y(t) = e(t) + c\,e(t-1)$

b) $y(t) + a\,y(t-1) = e(t)$

2.2 Solutions

1)

    $H(q) = \frac{b + q^{-1}}{1 - a q^{-1}} = \frac{bq + 1}{q - a}, \qquad H(z) = \frac{bz + 1}{z - a}$

We immediately see that the system has one zero in $z = -1/b$ and one pole in $z = a$. Informally we could just as well find the roots with the $q$-operator (but not with the $q^{-1}$-operator). The static gain is obtained by setting $q = 1$ (or $z = 1$):

    $H(1) = \frac{b + 1}{1 - a}$

The system is stable since the pole is inside the unit circle, $|a| < 1$.

2) We can write the system as

    $H(q) = \frac{1 - c q^{-1}}{1 - a q^{-1}} = \frac{q - c}{q - a}$

a) First note that since $e(t)$ is white noise with variance $\lambda^2$, $\Phi_e(\omega) = \lambda^2/2\pi$. By using

    $\Phi_y(\omega) = |H(e^{i\omega})|^2\,\Phi_e(\omega)$

we get

    $\Phi_y(\omega) = \frac{\lambda^2}{2\pi}\cdot\frac{(1 - c e^{-i\omega})(1 - c e^{i\omega})}{(1 - a e^{-i\omega})(1 - a e^{i\omega})} = \frac{\lambda^2}{2\pi}\cdot\frac{1 + c^2 - 2c\cos\omega}{1 + a^2 - 2a\cos\omega}$

b) The cross spectrum for a discrete time system $H(q)$ is given by

    $\Phi_{ye}(\omega) = H(e^{i\omega})\,\Phi_e(\omega)$

which directly gives

    $\Phi_{ye}(\omega) = \frac{\lambda^2}{2\pi}\cdot\frac{1 - c e^{-i\omega}}{1 - a e^{-i\omega}}$
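As a small numerical companion to solution 1, the sketch below computes zeros, poles and static gain of $H(z) = (bz+1)/(z-a)$ with numpy; the coefficient values $a = 0.2$ and $b = 0.5$ are assumed for illustration only (the exercise fixes specific numbers).

```python
import numpy as np

# Zeros, poles and static gain of H(z) = (b z + 1)/(z - a);
# a = 0.2 and b = 0.5 are assumed illustrative coefficients.
b, a = 0.5, 0.2
num = np.array([b, 1.0])   # b z + 1
den = np.array([1.0, -a])  # z - a

zeros, poles = np.roots(num), np.roots(den)
static_gain = np.polyval(num, 1.0) / np.polyval(den, 1.0)  # H(1)
print(zeros, poles, static_gain, np.all(np.abs(poles) < 1))
```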

3) The definition of the spectral density (spectrum) is

    $\Phi_u(\omega) = \frac{1}{2\pi}\sum_{\tau=-\infty}^{\infty} R_u(\tau)\,e^{-i\omega\tau}$

The given covariance function gives

    $\Phi_u(\omega) = \frac{1}{2\pi}\,(1 + \cos\omega)$

and the output spectral density is

    $\Phi_y(\omega) = H(e^{i\omega})H(e^{-i\omega})\,\Phi_u(\omega) = |e^{-i\omega} + e^{-2i\omega}|^2\cdot\frac{1}{2\pi}(1 + \cos\omega) = \frac{1}{2\pi}(2 + 2\cos\omega)(1 + \cos\omega) = \frac{1}{\pi}(1 + \cos\omega)^2$

4a) The MA(1) process is

    $y(t) = e(t) + c\,e(t-1)$

and we directly get

    $R_y(0) = E\,y^2(t) = E[e(t) + c\,e(t-1)][e(t) + c\,e(t-1)] = \lambda^2(1 + c^2)$
    $R_y(1) = E\,y(t+1)y(t) = E[e(t+1) + c\,e(t)][e(t) + c\,e(t-1)] = c\lambda^2$
    $R_y(\tau) = E\,y(t+\tau)y(t) = 0 \quad \text{for } |\tau| \ge 2$

b) The covariance function for the AR(1) process

    $y(t) + a\,y(t-1) = e(t)$

can be found (this also holds for a general AR(n) process) from the so-called Yule-Walker equations (not a course requirement). Basically, the idea is to multiply the AR process by $y(t-\tau)$ and take expectations. We have to distinguish between $\tau = 0$ and $\tau > 0$. For $\tau = 0$ we get

    $E\,y(t)\bigl[y(t) + a\,y(t-1)\bigr] = E\,e(t)y(t) = \lambda^2$

which gives

    $R_y(0) + a R_y(1) = \lambda^2$

For $\tau > 0$ we get

    $E\,y(t-\tau)\bigl[y(t) + a\,y(t-1)\bigr] = E\,e(t)y(t-\tau) = 0$

which gives

    $R_y(\tau) + a R_y(\tau-1) = 0$

We hence end up with a set of linear equations. For $\tau = 0, 1$ we get

    $R_y(0) + a R_y(1) = \lambda^2, \qquad R_y(1) + a R_y(0) = 0$          (6)

with solution

    $R_y(0) = \frac{\lambda^2}{1 - a^2}, \qquad R_y(1) = \frac{-a\lambda^2}{1 - a^2}$

It is easy to see that

    $R_y(\tau) = (-a)^{|\tau|}\,\frac{\lambda^2}{1 - a^2}$
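The MA(1) and AR(1) covariance expressions can be checked by simulation; the sketch below assumes the illustrative values $c = 0.6$, $a = 0.7$ and $\lambda^2 = 1$.

```python
import numpy as np

# Sample covariances of simulated MA(1) and AR(1) processes versus theory.
rng = np.random.default_rng(2)
n, c, a = 200000, 0.6, 0.7
e = rng.standard_normal(n)

y_ma = e[1:] + c * e[:-1]        # y(t) = e(t) + c e(t-1)
y_ar = np.zeros(n)               # y(t) + a y(t-1) = e(t)
for t in range(1, n):
    y_ar[t] = -a * y_ar[t - 1] + e[t]

def cov(x, tau):
    return np.mean(x[tau:] * x[:len(x) - tau]) if tau else np.mean(x * x)

print(cov(y_ma, 0), 1 + c**2)          # R_y(0) = lambda^2 (1 + c^2)
print(cov(y_ma, 1), c)                 # R_y(1) = c lambda^2
print(cov(y_ar, 0), 1 / (1 - a**2))    # R_y(0) = lambda^2/(1 - a^2)
print(cov(y_ar, 1), -a / (1 - a**2))   # R_y(1) = -a lambda^2/(1 - a^2)
```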

2.3 Properties of covariance functions

Let $x(t)$ be a stationary stochastic process with mean $E\,x(t) = 0$. (A process is stationary if its properties (distributions) do not depend on absolute time.) The covariance function of the process is defined by

    $R(\tau) = E[x(t+\tau)x(t)]$

The following relations hold:

- $R(0) - R(\tau) \ge 0$. Proof: $E[x(t+\tau) - x(t)]^2 = E\,x^2(t+\tau) + E\,x^2(t) - 2E[x(t+\tau)x(t)] = R(0) + R(0) - 2R(\tau) = 2(R(0) - R(\tau))$. The expression $E[x(t+\tau) - x(t)]^2$ is always nonnegative, hence $R(0) - R(\tau)$ is also nonnegative, from which the claim follows.

- $R(0) \ge |R(\tau)|$. Follows directly from the above proof (repeat it with $E[x(t+\tau) + x(t)]^2$).

- $R(-\tau) = R(\tau)$, i.e. the covariance function is symmetric around the origin. Proof: $R(-\tau) = E[x(t-\tau)x(t)] = E[x(t)x(t-\tau)]$; setting $s = t - \tau$ gives $E[x(s+\tau)x(s)] = R(\tau)$.

- If $R(\tau_0) = R(0)$ for some $\tau_0 \ne 0$, then $x(t)$ is periodic. Proof: omitted.

The cross-covariance function describes the covariation between two stochastic processes $x(t)$ and $y(t)$ and is defined by

    $R_{xy}(\tau) = E[x(t+\tau)y(t)]$

We give the following relations without proof:

- $R_{xy}(\tau) = R_{yx}(-\tau)$
- $R_{xy}(\tau) \ne R_{xy}(-\tau)$ in general.
- $R_{xy}(\tau) = 0$ for all $\tau$ $\iff$ $x$ and $y$ are uncorrelated.
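The cross-covariance relations can be illustrated numerically. The sketch below uses an assumed pair of signals with $y(t) \approx x(t-1)$, so that $R_{xy}(\tau)$ is clearly asymmetric in $\tau$, while the relation $R_{xy}(-\tau) = R_{yx}(\tau)$ still holds.

```python
import numpy as np

# Empirical check of the cross-covariance relations on an assumed pair.
rng = np.random.default_rng(3)
n = 100000
x = rng.standard_normal(n)
y = np.concatenate(([0.0], x[:-1])) + 0.1 * rng.standard_normal(n)

def cross(u, v, tau):
    # sample estimate of R_uv(tau) = E u(t+tau) v(t)
    if tau >= 0:
        return np.mean(u[tau:] * v[:n - tau])
    return np.mean(u[:n + tau] * v[-tau:])

print(cross(x, y, 1), cross(x, y, -1))   # R_xy(1) != R_xy(-1) in general
print(cross(x, y, -1), cross(y, x, 1))   # R_xy(-1) == R_yx(1) (approx.)
```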

2.4 Examples of covariance functions

It is not an exam requirement to be able to work out complicated covariance functions. For the exam you should be able to compute the covariance function of an MA process (of arbitrary order). More complicated covariance expressions will be given as hints. Below are a couple of examples of more complicated covariance expressions. As usual, $e(t)$ denotes white noise with mean 0 and variance $\lambda^2$. We also use $R_y(\tau) = E\,y(t+\tau)y(t)$.

The ARMA(1,1) process

    $y(t) + a\,y(t-1) = e(t) + c\,e(t-1)$

has

    $R_y(0) = \lambda^2\,\frac{1 + c^2 - 2ac}{1 - a^2}, \qquad R_y(1) = \lambda^2\,\frac{(c - a)(1 - ac)}{1 - a^2}, \qquad R_y(\tau) = (-a)^{\tau-1}R_y(1), \quad \tau \ge 1$

The ARMAX(1,1,1) process

    $y(t) + a\,y(t-1) = b\,u(t-1) + e(t) + c\,e(t-1)$

where the input is white with mean 0 and variance $\lambda_u^2$, has

    $R_y(0) = \frac{b^2\lambda_u^2 + \lambda^2(1 + c^2 - 2ac)}{1 - a^2}, \qquad R_y(1) = \frac{-a b^2\lambda_u^2 + \lambda^2(c - a)(1 - ac)}{1 - a^2}$

    $E\,y(t)u(t) = 0, \qquad E\,y(t)u(t-1) = b\,\lambda_u^2$

Note that $b = 0$ gives the ARMA(1,1) process and $c = 0$ gives an ARX(1,1) process.
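A simulation check of the ARMA(1,1) expressions, assuming the illustrative values $a = -0.5$, $c = 0.3$ and $\lambda^2 = 1$:

```python
import numpy as np

# ARMA(1,1): y(t) + a y(t-1) = e(t) + c e(t-1); a, c assumed.
rng = np.random.default_rng(4)
n, a, c = 300000, -0.5, 0.3
e = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = -a * y[t - 1] + e[t] + c * e[t - 1]

print(np.mean(y * y), (1 + c**2 - 2 * a * c) / (1 - a**2))          # R_y(0)
print(np.mean(y[1:] * y[:-1]), (c - a) * (1 - a * c) / (1 - a**2))  # R_y(1)
```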

3 L3 and L4 - Parameter estimation

3.1 Problems

1) Criteria with constraints. Consider the following scalar non-linear minimization problem:

    $\min_{\theta} V(\theta), \qquad V(\theta) = \sum_{t=1}^{N}\bigl(y(t) - \hat y(t|\theta)\bigr)^2$

The following constraint is also given: $0 \le \theta \le 1$. Assume that the solutions to $V'(\theta) = 0$ have been found. Describe how the minimization problem should be solved in principle.

2) Calculating the least squares estimate for ARX models. Consider the following ARX model ($e(t)$ is white noise with zero mean):

    $y(t) + a\,y(t-1) = b\,u(t-1) + e(t)$          (7)

Assume that the available data are $y(1), u(1), y(2), u(2), \ldots, y(10), u(10)$, and that the following sums have been calculated:

    $\sum_{t=1}^{10} y^2(t-1) = 50, \qquad \sum_{t=1}^{10} y(t)y(t-1) = 45,$
    $\sum_{t=1}^{10} y(t-1)u(t-1) = \ldots, \qquad \sum_{t=1}^{10} y(t)u(t-1) = \ldots, \qquad \sum_{t=1}^{10} u^2(t-1) = \ldots$

Which value of $\theta = (a\ b)^T$ minimizes the quadratic criterion

    $V(\theta) = \sum_{t=1}^{10}\bigl(y(t) - \hat y(t|\theta)\bigr)^2$

where $\hat y(t|\theta)$ is the predictor obtained from the ARX model (7)?

3) Data with non-zero mean. Assume that the data from a system (normally the data are also noise-corrupted, but that is not considered in this example) are given by

    $A(q)y(t) = B(q)u(t) + K$          (8)

where $A(q) = 1 + a_1 q^{-1} + \cdots + a_n q^{-n}$, $B(q) = b_1 q^{-1} + \cdots + b_n q^{-n}$, and $K$ is an unknown constant.

a) Show that, by using the following transformation of the data

    $\bar u(t) = (1 - q^{-1})u(t) = u(t) - u(t-1)$
    $\bar y(t) = (1 - q^{-1})y(t) = y(t) - y(t-1)$

as new input and output signals, the standard LS (least squares) method can be used to find the parameters in $A(q)$ and $B(q)$.

b) Show that the constant $K$ can easily be included in the LS estimate for the model (8).

c) What is the standard procedure for dealing with data with non-zero mean?

4) The problem with feedback. Consider the following system ($e(t)$ is white noise with zero mean):

    $y(t) + a\,y(t-1) = b\,u(t-1) + e(t)$          (9)

Assume that the system is controlled with a proportional controller

    $u(t) = -K y(t)$

Show that $\sum_{t}\varphi(t)\varphi^T(t)$ becomes singular!

5) Optimal input signal. The following system is given:

    $y(t) = b_0\,u(t) + b_1\,u(t-1) + e(t)$

where $e(t)$ is white noise with zero mean and variance $\lambda^2$. The parameters are estimated with the least squares method. Consider the case when the number of data points goes to infinity (in practice, this means that we have many data points available).

a) Show that $\mathrm{var}(\hat b_0)$ and $\mathrm{var}(\hat b_1)$ only depend on the following values of the covariance function:

    $R_u(0) = E\,u^2(t) = \lim_{N\to\infty}\frac{1}{N}\sum_{t=1}^{N}u^2(t), \qquad R_u(1) = E\,u(t)u(t-1) = \lim_{N\to\infty}\frac{1}{N}\sum_{t=1}^{N}u(t)u(t-1)$

b) Assume that the energy of the input signal is constrained to

    $R_u(0) = \lim_{N\to\infty}\frac{1}{N}\sum_{t=1}^{N}u^2(t) \le 1$

Determine $R_u(0)$ and $R_u(1)$ so that the variances of the parameter estimates are minimized.

6) The following system is given:

    $y(t) = b_1\,u(t-1) + b_2\,u(t-2) + e(t)$

where $e(t)$ is white noise with zero mean and variance $\lambda^2$. Assume that the number of data points goes to infinity.

a) Assume that $u(t)$ is white noise with variance $\sigma^2$ and zero mean. Show that the least squares estimates converge to the true system parameters.

b) Assume that $u(t)$ is a unit step: $u(t) = 0$ for $t \le 0$ and $u(t) = 1$ for $t \ge 1$. Show that the matrix

    $R = \lim_{N\to\infty}\frac{1}{N}\sum_{t=1}^{N}\varphi(t)\varphi^T(t)$

becomes singular.

7) The following system is given:

    $y(t) = b_1\,u(t-1) + b_2\,u(t-2) + e(t)$

where $e(t)$ is white noise with zero mean and variance $\lambda^2$. The following predictor

    $\hat y(t) = b\,u(t-1)$

is used to estimate the parameter $b$ with the least squares (LS) method. Calculate the LS estimate of $b$ (expressed in $b_1$ and $b_2$) as the number of data points goes to infinity for the cases:

a) The input signal $u(t)$ is white noise.

b) The input signal is a sinusoid $u(t) = \sin(\omega_0 t)$, which has the covariance function

    $R_u(\tau) = \tfrac{1}{2}\cos(\omega_0\tau)$

In general, we will also assume that $e(t)$ and $u(t)$ are uncorrelated if not explicitly stated otherwise.

3.2 Solutions

1) Let $\theta_i$ be the parameter values which give $V'(\theta_i) = 0$ and satisfy the constraint. Check the values of $V(\theta_i)$, $V(0)$ and $V(1)$ and select the value of $\theta$ which minimizes $V$.

2) The predictor for the ARX model is $\hat y(t|\theta) = \varphi^T(t)\theta$, where $\varphi(t) = (-y(t-1)\ \ u(t-1))^T$ and $\theta = (a\ b)^T$. The least squares estimate is

    $\hat\theta = \left[\sum_{t=1}^{10}\varphi(t)\varphi^T(t)\right]^{-1}\sum_{t=1}^{10}\varphi(t)y(t) = \begin{pmatrix} \sum y^2(t-1) & -\sum y(t-1)u(t-1) \\ -\sum y(t-1)u(t-1) & \sum u^2(t-1) \end{pmatrix}^{-1}\begin{pmatrix} -\sum y(t)y(t-1) \\ \sum y(t)u(t-1) \end{pmatrix}$

into which the given sums are inserted.

3a) Multiplying the left and right hand sides of the system by $1 - q^{-1}$ gives

    $(1 - q^{-1})A(q)y(t) = (1 - q^{-1})B(q)u(t) + (1 - q^{-1})K$

but since $K$ is a constant, $(1 - q^{-1})K = K - K = 0$, and hence

    $A(q)\bigl[(1 - q^{-1})y(t)\bigr] = B(q)\bigl[(1 - q^{-1})u(t)\bigr]$

Thus, if we use the differenced input and output signals, we can use the standard LS estimate to estimate $A$ and $B$. Remark: Any filter $L(q)$ with $L(1) = 0$ would remove the constant $K$. Note that $L(1) = 0$ means zero steady-state gain.

b) Use the regression vector

    $\varphi(t) = \bigl(-y(t-1)\ -y(t-2)\ \cdots\ -y(t-n)\ \ u(t-1)\ u(t-2)\ \cdots\ u(t-n)\ \ 1\bigr)^T$

and

    $\theta = (a_1\ \cdots\ a_n\ \ b_1\ \cdots\ b_n\ \ K)^T$

Note that we can view the system as having two input signals: $u(t)$ and the constant 1.

c) Remove the mean from the data, that is, use the new signals

    $\bar y(t) = y(t) - \frac{1}{N}\sum_{s=1}^{N}y(s), \qquad \bar u(t) = u(t) - \frac{1}{N}\sum_{s=1}^{N}u(s)$
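Solution 3b can be tried out numerically: the sketch below appends a constant regressor to a first-order ARX model and recovers $a$, $b$ and $K$ jointly (all numerical values are assumed for illustration).

```python
import numpy as np

# Estimate a, b and the offset K jointly; a, b, K and sizes are assumed.
rng = np.random.default_rng(5)
n, a, b, K = 5000, -0.8, 1.5, 2.0
u = rng.standard_normal(n)
e = 0.1 * rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = -a * y[t - 1] + b * u[t - 1] + K + e[t]

# phi(t) = (-y(t-1), u(t-1), 1), theta = (a, b, K)
Phi = np.column_stack([-y[:-1], u[:-1], np.ones(n - 1)])
theta = np.linalg.lstsq(Phi, y[1:], rcond=None)[0]
print(theta)   # should be close to (a, b, K) = (-0.8, 1.5, 2.0)
```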

4a) We directly get

    $\sum_{t}\varphi(t)\varphi^T(t) = \begin{pmatrix} \sum y^2(t-1) & -\sum y(t-1)u(t-1) \\ -\sum y(t-1)u(t-1) & \sum u^2(t-1) \end{pmatrix} = \sum y^2(t-1)\begin{pmatrix} 1 & K \\ K & K^2 \end{pmatrix}$

where in the second equality we have used $u(t) = -Ky(t)$. It is directly seen that the matrix is singular ($\det = 0$), hence this experimental condition cannot be used to estimate the parameters uniquely. This can also be seen from the predictor $\hat y(t) = -a\,y(t-1) + b\,u(t-1)$. With the controller $u(t) = -Ky(t)$ the predictor becomes

    $\hat y(t) = -a\,y(t-1) - bK\,y(t-1) = -(a + bK)\,y(t-1)$

and we see that the predictor does not depend uniquely on $a$ and $b$, only on the combination $a + bK$.

5) We have that (see "Linear Regression")

    $\mathrm{cov}(\hat\theta) = \lambda^2\left[\sum_{t=1}^{N}\varphi(t)\varphi^T(t)\right]^{-1} = \frac{\lambda^2}{N}\left[\frac{1}{N}\sum_{t=1}^{N}\varphi(t)\varphi^T(t)\right]^{-1}$

With $\varphi(t) = (u(t)\ u(t-1))^T$ we get, as $N \to \infty$,

    $\frac{1}{N}\sum_{t=1}^{N}\varphi(t)\varphi^T(t) \to \begin{pmatrix} R_u(0) & R_u(1) \\ R_u(1) & R_u(0) \end{pmatrix}$

Hence

    $\mathrm{cov}(\hat\theta) \to \frac{\lambda^2}{N}\begin{pmatrix} R_u(0) & R_u(1) \\ R_u(1) & R_u(0) \end{pmatrix}^{-1}, \qquad \mathrm{var}(\hat b_0) = \mathrm{var}(\hat b_1) = \frac{\lambda^2}{N}\cdot\frac{R_u(0)}{R_u^2(0) - R_u^2(1)}$

b) It is seen directly (note that $R_u(0) \ge |R_u(1)|$) that the variances are minimized for $R_u(0) = 1$ and $R_u(1) = 0$. One example of a signal that fulfils this condition is white noise with unit variance.
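The singularity in solution 4a is easy to demonstrate numerically; in the sketch below (all values assumed for illustration), the two columns of the regressor matrix are exactly proportional under the feedback $u(t) = -Ky(t)$.

```python
import numpy as np

# With u(t) = -K y(t), the matrix sum phi(t) phi(t)^T is rank deficient.
rng = np.random.default_rng(6)
n, a, b, K = 2000, -0.5, 1.0, 0.4
y, u = np.zeros(n), np.zeros(n)
for t in range(1, n):
    y[t] = -a * y[t - 1] + b * u[t - 1] + rng.standard_normal()
    u[t] = -K * y[t]                      # proportional feedback

Phi = np.column_stack([-y[:-1], u[:-1]])  # phi(t) = (-y(t-1), u(t-1))
M = Phi.T @ Phi
print(np.linalg.matrix_rank(M), np.linalg.det(M))  # rank 1, det ~ 0
```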

6) The predictor is given by $\hat y(t|\theta) = \varphi^T(t)\theta$, where $\varphi(t) = (u(t-1)\ u(t-2))^T$ and $\theta = (b_1\ b_2)^T$. The least squares estimate is (we normalize with $1/N$ since we then get a well-defined limit as $N \to \infty$)

    $\hat\theta = \left[\frac{1}{N}\sum_{t=1}^{N}\varphi(t)\varphi^T(t)\right]^{-1}\frac{1}{N}\sum_{t=1}^{N}\varphi(t)y(t)$

As $N \to \infty$,

    $\hat\theta_\infty = R^{-1}\lim_{N\to\infty}\frac{1}{N}\sum_{t=1}^{N}\varphi(t)y(t)$

where $R = E\,\varphi(t)\varphi^T(t)$, cf. the previous problem:

    $R = \begin{pmatrix} R_u(0) & R_u(1) \\ R_u(1) & R_u(0) \end{pmatrix}$

Since $u(t)$ is white noise, $R_u(1) = 0$, and

    $R_{yu}(1) = E\,y(t)u(t-1) = E\bigl[b_1 u(t-1) + b_2 u(t-2) + e(t)\bigr]u(t-1) = b_1 R_u(0)$
    $R_{yu}(2) = E\,y(t)u(t-2) = E\bigl[b_1 u(t-1) + b_2 u(t-2) + e(t)\bigr]u(t-2) = b_2 R_u(0)$

so the estimate converges to

    $\hat\theta_\infty = \begin{pmatrix} R_u(0) & 0 \\ 0 & R_u(0) \end{pmatrix}^{-1}\begin{pmatrix} b_1 R_u(0) \\ b_2 R_u(0) \end{pmatrix} = \begin{pmatrix} b_1 \\ b_2 \end{pmatrix}$

which is expected since we have a FIR model (which can be interpreted as a linear regression model) and the model structure is correct.

b) We first calculate

    $\frac{1}{N}\sum_{t=1}^{N}\varphi(t)\varphi^T(t) = \frac{1}{N}\begin{pmatrix} \sum u^2(t-1) & \sum u(t-1)u(t-2) \\ \sum u(t-1)u(t-2) & \sum u^2(t-2) \end{pmatrix} \to R = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}$

which is singular. This means that a step gives too poor an excitation of the system: asymptotically the matrix (which should be inverted in the least squares method) becomes singular.

7) In this case $\varphi(t) = u(t-1)$ and $\theta = b$. Asymptotically in $N$ we have

    $\hat b_\infty = \bigl[E\,\varphi(t)\varphi^T(t)\bigr]^{-1}E\,\varphi(t)y(t)$

For this simple predictor we have

    $E\,\varphi(t)\varphi^T(t) = E\,u^2(t-1) = R_u(0), \qquad E\,\varphi(t)y(t) = E\,y(t)u(t-1) = R_{yu}(1)$

and by using the system generating the data,

    $R_{yu}(1) = E\bigl[b_1 u(t-1) + b_2 u(t-2) + e(t)\bigr]u(t-1) = b_1 R_u(0) + b_2 R_u(1)$

which gives

    $\hat b_\infty = b_1 + b_2\,\frac{R_u(1)}{R_u(0)}$

a) If $u(t)$ is white noise, $R_u(1) = 0$ and we get $\hat b_\infty = b_1$.

b) For $u(t)$ being a sinusoid, $\hat b_\infty = b_1 + b_2\cos\omega_0$.
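Solution 7b can be verified by simulation; the sketch below (assumed values $b_1 = 1$, $b_2 = 0.5$, $\omega_0 = 0.9$) fits the one-parameter predictor to data generated with a sinusoidal input and compares the result with $b_1 + b_2\cos\omega_0$.

```python
import numpy as np

# One-parameter LS fit to data from a two-parameter FIR system.
rng = np.random.default_rng(7)
n, b1, b2, w0 = 200000, 1.0, 0.5, 0.9
t = np.arange(n)
u = np.sin(w0 * t)
e = rng.standard_normal(n)
y = np.zeros(n)
y[2:] = b1 * u[1:-1] + b2 * u[:-2] + e[2:]

num = np.mean(y[2:] * u[1:-1])   # ~ R_yu(1) = b1 R_u(0) + b2 R_u(1)
den = np.mean(u[1:-1] ** 2)      # ~ R_u(0)
print(num / den, b1 + b2 * np.cos(w0))
```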

4 L5 - Some additional problems

4.1 Complementary theory - Analysis of the least squares estimate

4.1.1 Results from "Linear Regression"

The accuracy results are based on the following assumptions.

Assumption A1. Assume that the data are generated by ("the true system"):

    $y(t) = \varphi^T(t)\theta_0 + e(t), \qquad t = 1, \ldots, N$          (10)

where $e(t)$ is a non-measurable disturbance term to be specified below. In matrix form, (10) reads

    $Y = \Phi\theta_0 + \mathbf{e}$          (11)

where $\mathbf{e} = (e(1)\ \cdots\ e(N))^T$.

Assumption A2. It is assumed that $e(t)$ is a white noise process with variance $\lambda^2$. (A white noise process $e(t)$ is a sequence of random variables that are uncorrelated, have mean zero, and have a constant finite variance. Hence, $e(t)$ is a white noise process if $E\,e(t) = 0$, $E\,e^2(t) = \lambda^2$, and $E\,e(t)e(s) = 0$ for $t \ne s$.)

Assumption A3. It is finally assumed that $E\,\varphi(t)e(s) = 0$ for all $t$ and $s$. This means that the regression vector is not influenced (directly or indirectly) by the noise source $e(t)$.

In the material "Linear Regression" it was shown that if Assumptions A1-A3 hold, then:

1. The least squares estimate $\hat\theta$ is an unbiased estimate of $\theta_0$, that is, $E\,\hat\theta = \theta_0$.

2. The uncertainty of the least squares estimate, as expressed by the covariance matrix $P$, is given by

    $P = \mathrm{cov}\,\hat\theta = E(\hat\theta - E\hat\theta)(\hat\theta - E\hat\theta)^T = E(\hat\theta - \theta_0)(\hat\theta - \theta_0)^T = \lambda^2(\Phi^T\Phi)^{-1} = \lambda^2\left[\sum_{t=1}^{N}\varphi(t)\varphi^T(t)\right]^{-1}$

4.1.2 Results for the case when A3 does not hold

For the case when A3 does not hold we have that:

1. The least squares estimate $\hat\theta$ is consistent:

    $\hat\theta \to \theta_0 \quad \text{as } N \to \infty$          (12)

where $N$ is the number of data points.

2. The covariance matrix $P$ is given by

    $P = \mathrm{cov}\,\hat\theta = \frac{\lambda^2}{N}\bigl[E\,\varphi(t)\varphi^T(t)\bigr]^{-1} \quad \text{as } N \to \infty$          (13)
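Under A1-A3 the covariance expression can be checked by a Monte Carlo experiment for a fixed regressor matrix; the sketch below uses an assumed linear-trend regressor and assumed values of $\theta_0$ and $\lambda^2$.

```python
import numpy as np

# Monte Carlo check of P = lambda^2 (Phi^T Phi)^{-1}; sizes assumed.
rng = np.random.default_rng(8)
N, lam2, M = 30, 0.5, 20000
Phi = np.column_stack([np.ones(N), np.arange(1, N + 1)])
theta0 = np.array([1.0, 0.2])

est = np.empty((M, 2))
for k in range(M):
    y = Phi @ theta0 + np.sqrt(lam2) * rng.standard_normal(N)
    est[k] = np.linalg.lstsq(Phi, y, rcond=None)[0]

print(np.cov(est.T))                      # sample covariance of theta_hat
print(lam2 * np.linalg.inv(Phi.T @ Phi))  # theoretical P
```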

Remarks:

- Two typical examples when A3 does not hold are when the system is an AR process or an ARX process (we then have values of the output in the regression vector).

- The results only hold asymptotically in $N$. In practice this means that we need many data points (some hundreds are typically enough) for the estimate to be reliable (and also to get a reliable estimate of the covariance matrix).

- If the noise is not white, the estimate will in general not be consistent (in contrast to the case when A3 holds; see "Linear Regression"). Results for general linear models are presented in the text book.

4.1.3 Bias, variance and mean squared error

Let $\hat\theta$ be a scalar estimate of the true parameter $\theta_0$. The bias is defined as

    $\mathrm{bias}(\hat\theta) = E\,\hat\theta - \theta_0$          (14)

The variance is given by

    $\mathrm{var}(\hat\theta) = E(\hat\theta - E\hat\theta)^2$          (15)

The mean squared error (MSE) is given by

    $\mathrm{MSE}(\hat\theta) = E(\hat\theta - \theta_0)^2 = \mathrm{var}(\hat\theta) + \bigl[\mathrm{bias}(\hat\theta)\bigr]^2$          (16)

In practice we want an estimate with as small an MSE as possible. In some cases this may mean that we accept a small bias if the variance of the estimate can be reduced.

4.2 Problems

1) Predictor using exponential smoothing. A simple predictor for a signal $y(t)$ is the so-called exponential smoothing predictor, given by

    $\hat y(t) = (1-\alpha)\bigl[y(t-1) + \alpha\,y(t-2) + \alpha^2\,y(t-3) + \cdots\bigr] = \frac{(1-\alpha)q^{-1}}{1 - \alpha q^{-1}}\,y(t)$

a) Show that if $y(t) = m$ for all $t$, the predictor will in steady state (stationarity) be equal to $m$.

b) For which ARMA model is the predictor optimal? Hint: Rewrite the predictor in the form $\hat y(t) = L(q)y(t)$ and compare with the optimal predictor for an ARMA model $A(q)y(t) = C(q)e(t)$.

2) Cross correlation for the LS estimate of ARX parameters. Consider the standard least squares estimate of the parameters in an ARX model:

    $A(q)y(t) = B(q)u(t) + e(t)$

where $A(q) = 1 + a_1 q^{-1} + \cdots + a_n q^{-n}$ and $B(q) = b_1 q^{-1} + \cdots + b_n q^{-n}$. The estimate of the cross correlation between residuals and inputs is given by

    $\hat R_{\varepsilon u}(\tau) = \frac{1}{N}\sum_{t=1}^{N}\varepsilon(t)u(t-\tau)$

where $\varepsilon(t) = y(t) - \hat y(t) = y(t) - \varphi^T(t)\hat\theta$ is the prediction error. Show that the least squares estimate gives

    $\hat R_{\varepsilon u}(\tau) = 0, \qquad \tau = 1, \ldots, n$

Hint: Show that the least squares estimate gives $\sum_{t=1}^{N}\varphi(t)\varepsilon(t) = 0$.

3) The variance increases if more parameters than needed are estimated! Assume that data from an AR(1) process ("the system") are collected:

    $y(t) + a_0\,y(t-1) = e(t)$

where $e(t)$ is white noise with zero mean and variance $\lambda^2$. The system is stable, which means that $|a_0| < 1$. Consider the following two predictors:

    M1:  $\hat y(t) = -a\,y(t-1)$
    M2:  $\hat y(t) = -a_1\,y(t-1) - a_2\,y(t-2)$

where the parameters of each predictor are estimated with the least squares method. It can easily be shown that for M1, $\hat a \to a_0$, and for M2, $\hat a_1 \to a_0$ and $\hat a_2 \to 0$, as the number of data points $N \to \infty$. Hence both predictors give consistent estimates. Show that the price to pay for estimating too many parameters is that

    $\mathrm{var}(\hat a_1) \ge \mathrm{var}(\hat a) \quad \text{as } N \to \infty$

Hint: For the AR(1) process we have $R_y(\tau) = E\,y(t+\tau)y(t) = (-a_0)^{|\tau|}R_y(0)$, where $R_y(0) = \frac{\lambda^2}{1 - a_0^2}$.

4) Variance of parameters in an estimated ARX model. Assume that data were collected from the following ARX process ("the system"):

    $y(t) + a_0\,y(t-1) = b_0\,u(t-1) + e(t)$

where $e(t)$ is white noise with zero mean and variance $\lambda^2$. The system is stable, which means that $|a_0| < 1$. The input signal is uncorrelated with $e(t)$, and is white noise with zero mean and variance $\sigma^2$. The parameters in the following predictor

    $\hat y(t) = -a\,y(t-1) + b\,u(t-1)$

are estimated with the least squares method. Calculate the asymptotic (in the number of data points $N$) variance of the parameter estimates. Hint:

    $R_y(0) = E\,y^2(t) = \frac{b_0^2\sigma^2 + \lambda^2}{1 - a_0^2}$

4.3 Solutions

1a) We rewrite the predictor in standard form:

    $\hat y(t) = \frac{q^{-1}(1-\alpha)}{1 - \alpha q^{-1}}\,y(t) = L(q)y(t)$

The static gain is given by $L(1)$, and since

    $L(1) = \frac{1-\alpha}{1-\alpha} = 1$

the steady-state value of the predictor will be $\hat y(t) = m$.

b) For an ARMA process $A(q)y(t) = C(q)e(t)$ the optimal predictor is

    $\hat y(t) = \left(1 - \frac{A(q)}{C(q)}\right)y(t) = \frac{C(q) - A(q)}{C(q)}\,y(t)$

Hence, we need to find $A(q)$ and $C(q)$ such that

    $\frac{C(q) - A(q)}{C(q)} = \frac{q^{-1}(1-\alpha)}{1 - \alpha q^{-1}}$

which gives $C(q) = 1 - \alpha q^{-1}$ and $A(q) = 1 - q^{-1}$.
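The steady-state property in solution 1a can be illustrated with a few lines of code (assumed values $\alpha = 0.8$, $m = 3$):

```python
import numpy as np

# For a constant signal y(t) = m, the smoothed prediction converges to m.
alpha, m, n = 0.8, 3.0, 100
y = np.full(n, m)
y_hat = np.zeros(n)
for t in range(1, n):
    # y_hat(t) = (1-alpha) y(t-1) + alpha y_hat(t-1), i.e. L(q) y(t)
    y_hat[t] = (1 - alpha) * y[t - 1] + alpha * y_hat[t - 1]
print(y_hat[-1])   # -> 3.0
```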

2) The least squares estimate is given by

    $\hat\theta = \left[\sum_{t=1}^{N}\varphi(t)\varphi^T(t)\right]^{-1}\sum_{t=1}^{N}\varphi(t)y(t)$

which can be written as

    $\left[\sum_{t=1}^{N}\varphi(t)\varphi^T(t)\right]\hat\theta = \sum_{t=1}^{N}\varphi(t)y(t)$          (17)

With $\varepsilon(t) = y(t) - \varphi^T(t)\hat\theta$ we get

    $\sum_{t=1}^{N}\varphi(t)\varepsilon(t) = \sum_{t=1}^{N}\varphi(t)\bigl[y(t) - \varphi^T(t)\hat\theta\bigr] = \sum_{t=1}^{N}\varphi(t)y(t) - \left[\sum_{t=1}^{N}\varphi(t)\varphi^T(t)\right]\hat\theta = 0$

where the last equality follows from (17). We thus have

    $0 = \sum_{t=1}^{N}\varphi(t)\varepsilon(t) = \sum_{t=1}^{N}\begin{pmatrix} -y(t-1) \\ \vdots \\ -y(t-n) \\ u(t-1) \\ \vdots \\ u(t-n) \end{pmatrix}\varepsilon(t)$

This means that all estimates

    $\hat R_{\varepsilon u}(\tau) = \frac{1}{N}\sum_{t=1}^{N}\varepsilon(t)u(t-\tau) = 0 \quad \text{for } \tau = 1, \ldots, n$

Therefore, the values of the estimated cross correlation function $\hat R_{\varepsilon u}(\tau)$ for $\tau = 1, \ldots, n$ cannot be used to determine whether an estimated ARX model is good or bad. They will always be zero. See also the text book on page 368!

3) In general we have for estimated AR parameters that

    $P = \mathrm{cov}\,\hat\theta = \frac{\lambda^2}{N}\bigl[E\,\varphi(t)\varphi^T(t)\bigr]^{-1} \quad \text{as } N \to \infty$

For M1 we have $\varphi(t) = -y(t-1)$ and $E\,\varphi(t)\varphi^T(t) = E\,y^2(t-1) = R_y(0)$ (see the hint). Thus

    $\mathrm{var}(\hat a) = \frac{\lambda^2}{N R_y(0)} = \frac{1 - a_0^2}{N} \quad \text{as } N \to \infty$

For M2 we have $\varphi(t) = (-y(t-1)\ -y(t-2))^T$ and therefore

    $E\,\varphi(t)\varphi^T(t) = \begin{pmatrix} R_y(0) & R_y(1) \\ R_y(1) & R_y(0) \end{pmatrix}$

This gives

    $\bigl[E\,\varphi(t)\varphi^T(t)\bigr]^{-1} = \frac{1}{R_y^2(0) - R_y^2(1)}\begin{pmatrix} R_y(0) & -R_y(1) \\ -R_y(1) & R_y(0) \end{pmatrix}$

and, using $R_y(1) = -a_0 R_y(0)$ and $R_y(0) = \lambda^2/(1 - a_0^2)$,

    $\mathrm{var}(\hat a_1) = \frac{\lambda^2}{N}\cdot\frac{R_y(0)}{R_y^2(0) - R_y^2(1)} = \frac{\lambda^2}{N}\cdot\frac{1}{R_y(0)(1 - a_0^2)} = \frac{1}{N} \ge \frac{1 - a_0^2}{N} \quad \text{as } N \to \infty$

We have thus shown that $\mathrm{var}(\hat a_1) \ge \mathrm{var}(\hat a)$ as $N \to \infty$.

4) The predictor gives $\varphi(t) = (-y(t-1)\ \ u(t-1))^T$ and $\theta = (a\ b)^T$. This gives

    $E\,\varphi(t)\varphi^T(t) = \begin{pmatrix} R_y(0) & -R_{yu}(0) \\ -R_{yu}(0) & R_u(0) \end{pmatrix}$

The cross covariance between $y$ and $u$ is

    $R_{yu}(0) = E\,y(t)u(t) = E\bigl[-a_0\,y(t-1) + b_0\,u(t-1) + e(t)\bigr]u(t) = 0$

since $u(t)$ is white noise (and uncorrelated with $e(t)$). The (asymptotic) covariance matrix is therefore

    $P = \mathrm{cov}\,\hat\theta = \frac{\lambda^2}{N}\begin{pmatrix} R_y(0) & 0 \\ 0 & \sigma^2 \end{pmatrix}^{-1} = \frac{\lambda^2}{N}\begin{pmatrix} \frac{1 - a_0^2}{b_0^2\sigma^2 + \lambda^2} & 0 \\ 0 & \frac{1}{\sigma^2} \end{pmatrix} \quad \text{as } N \to \infty$

From the diagonal elements we get the variances:

    $\mathrm{var}(\hat a) = \frac{\lambda^2(1 - a_0^2)}{N(b_0^2\sigma^2 + \lambda^2)}, \qquad \mathrm{var}(\hat b) = \frac{\lambda^2}{N\sigma^2} \quad \text{(as } N \to \infty\text{)}$
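The variance inflation in solution 3 can be reproduced by Monte Carlo simulation. The sketch below (assumed values $a_0 = 0.6$, $N = 200$) fits both M1 and M2 to AR(1) data and compares the sample variances of $\hat a$ and $\hat a_1$ with $(1-a_0^2)/N$ and $1/N$.

```python
import numpy as np

# Fit M1 (AR(1) predictor) and M2 (AR(2) predictor) to AR(1) data.
rng = np.random.default_rng(9)
a0, N, M = 0.6, 200, 2000
a1_m1, a1_m2 = np.empty(M), np.empty(M)
for k in range(M):
    e = rng.standard_normal(N)
    y = np.zeros(N)
    for t in range(1, N):
        y[t] = -a0 * y[t - 1] + e[t]
    Phi1 = -y[1:-1].reshape(-1, 1)                # M1: phi(t) = -y(t-1)
    Phi2 = np.column_stack([-y[1:-1], -y[:-2]])   # M2: -y(t-1), -y(t-2)
    a1_m1[k] = np.linalg.lstsq(Phi1, y[2:], rcond=None)[0][0]
    a1_m2[k] = np.linalg.lstsq(Phi2, y[2:], rcond=None)[0][0]

print(a1_m1.var(), (1 - a0**2) / N)   # M1: var ~ (1 - a0^2)/N
print(a1_m2.var(), 1 / N)             # M2: var ~ 1/N
```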
