Estimation of AR models

Recall that the AR(p) model is defined by the equation
$$X_t = \sum_{j=1}^{p} \phi_j X_{t-j} + \epsilon_t,$$
where the ε_t are assumed independent and N(0, σ²) distributed. Assume p is known and define φ = (φ_1, φ_2, φ_3, ..., φ_p), the vector of model coefficients. Given the data x_1, x_2, x_3, ..., x_n, we want to estimate (φ, σ²).
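As a concrete illustration (this sketch is not part of the original notes), we can generate data from an AR(2) with hypothetical coefficients using arima.sim and inspect its sample ACF; a simulated series like this can play the role of x in the examples that follow.

# Simulate n = 400 observations from an AR(2) with hypothetical
# coefficients phi = (1.5, -0.75) and innovation variance sigma^2 = 1
set.seed(1)
n   <- 400
phi <- c(1.5, -0.75)
x   <- arima.sim(n = n, model = list(ar = phi), sd = 1)
ts.plot(x)   # the simulated series
acf(x)       # sample ACF of an AR(2) with complex reciprocal roots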

Method of moments

Recall that the Yule-Walker equations establish that
$$\rho_k = \phi_1 \rho_{k-1} + \phi_2 \rho_{k-2} + \cdots + \phi_p \rho_{k-p}.$$
If we write these equations for k = 1, 2, ..., p, we obtain a system of equations for φ_1, φ_2, ..., φ_p. We can solve this system by plugging in an estimator of the autocorrelation ρ_k. For example, we could use the sample autocorrelation ρ̂_k = r_k and then solve the p equations for φ. This method is implemented in R by the function ar.

# For example try
a = ar(x, method = "yule-walker")
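For intuition, the Yule-Walker system can also be solved by hand from the sample autocorrelations; a minimal sketch (my own, assuming a series x and a chosen order p), which should agree closely with the built-in ar fit:

# Hand-rolled Yule-Walker / method-of-moments estimate of phi (sketch)
p    <- 2
rho  <- acf(x, lag.max = p, plot = FALSE)$acf[-1]   # r_1, ..., r_p
Rmat <- toeplitz(c(1, rho[-p]))                     # p x p matrix with entries rho_{|i-j|}
phi.mom <- solve(Rmat, rho)                         # solves the p Yule-Walker equations
phi.mom
ar(x, aic = FALSE, order.max = p, method = "yule-walker")$ar   # built-in, for comparison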

list(a)
a$ar   # this gives the MOM estimate of phi for your data

Maximum likelihood estimation

For MLE we first need to find the likelihood function of the AR model. Since the AR process has a Markovian structure, the joint density of the data is given by the expression
$$p(x_1, x_2, \ldots, x_n \mid \phi, \sigma^2) = p(x_1, x_2, \ldots, x_p \mid \phi, \sigma^2) \prod_{t=p+1}^{n} p(x_t \mid x_{t-1}, \ldots, x_{t-p}, \phi, \sigma^2).$$

Assume the first p observations (the initial values) (x_1, x_2, ..., x_p) are completely known. Then the model likelihood is defined by ignoring p(x_1, x_2, ..., x_p | φ, σ²) in the expression above, i.e.
$$p(x_1, x_2, \ldots, x_n \mid \phi, \sigma^2) \propto \prod_{t=p+1}^{n} p(x_t \mid x_{t-1}, \ldots, x_{t-p}, \phi, \sigma^2).$$
What is p(x_t | x_{t-1}, ..., x_{t-p}, φ, σ²)? From the AR model definition, x_t can be seen as the response of a linear regression with regressors x_{t-1}, x_{t-2}, x_{t-3}, ..., x_{t-p}, so
$$p(x_t \mid x_{t-1}, \ldots, x_{t-p}, \phi, \sigma^2) = N(x_t \mid f_t' \phi, \sigma^2),$$
where f_t = (x_{t-1}, x_{t-2}, x_{t-3}, ..., x_{t-p})'.
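To make this factorization concrete, the following sketch (my own illustration, with hypothetical function and variable names) evaluates the conditional log-likelihood for given values of φ and σ²; embed() builds the rows f_t from the lagged values.

# Conditional log-likelihood of an AR(p), ignoring the first p values (sketch)
ar.cond.loglik <- function(x, phi, sigma2) {
  p    <- length(phi)
  E    <- embed(x, p + 1)          # row t: (x_t, x_{t-1}, ..., x_{t-p})
  y    <- E[, 1]                   # responses x_{p+1}, ..., x_n
  Fmat <- E[, -1, drop = FALSE]    # regressors f_t = (x_{t-1}, ..., x_{t-p})
  sum(dnorm(y, mean = as.vector(Fmat %*% phi), sd = sqrt(sigma2), log = TRUE))
}
# e.g. ar.cond.loglik(x, phi = c(1.5, -0.75), sigma2 = 1)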

Define F to be the matrix with rows f_t, t = p+1, ..., n, and let x = (x_{p+1}, x_{p+2}, ..., x_n)'. The likelihood function of the AR model, conditional on the initial values, is then a multivariate Normal of dimension n − p with mean Fφ and covariance σ²I_{(n−p)×(n−p)}, i.e. N(x | Fφ, σ²I_{(n−p)×(n−p)}).

The maximum likelihood estimator (MLE) of (φ, σ²) is
$$\hat\phi = (F'F)^{-1} F'x, \qquad s^2 = R/(n-p),$$
where R = (x − Fφ̂)'(x − Fφ̂). For an unbiased estimator of σ² we use
$$s_1^2 = R/(n-2p).$$
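These closed-form expressions are easy to reproduce directly; a sketch under the same notation (again using embed() to build F, with x the full observed series and p the chosen order):

# Conditional MLE of (phi, sigma^2) via the regression formulas (sketch)
p    <- 10
E    <- embed(x, p + 1)
y    <- E[, 1]                                       # x_{p+1}, ..., x_n
Fmat <- E[, -1]                                      # design matrix F with rows f_t
phi.hat <- solve(t(Fmat) %*% Fmat, t(Fmat) %*% y)    # (F'F)^{-1} F'x
RSS     <- sum((y - Fmat %*% phi.hat)^2)             # residual sum of squares R
s2      <- RSS / (length(x) - p)                     # MLE-type estimate R/(n - p)
s2.unb  <- RSS / (length(x) - 2 * p)                 # unbiased estimate R/(n - 2p)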

The MLE is also given by the ar function available in R/Splus:

a = ar(x, method = "mle")
a$ar    # AR coefficients
a$var   # AR variance

All these results are valid only if we ignore the uncertainty due to the initial values. The complete likelihood of the model includes the extra term p(x_1, x_2, ..., x_p | φ, σ²), which is a complicated function of the parameters. For the complete likelihood we require numerical methods (e.g. Newton-Raphson) to obtain the MLE of the AR model.

Bayesian analysis of the AR(p) model
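As an aside (not part of the original notes), R's arima function maximizes the exact Gaussian likelihood numerically, so it provides a convenient check on the conditional MLE above; a minimal sketch assuming the series x and order p:

# Exact (complete) likelihood fit of an AR(p); arima() optimizes the full
# Gaussian likelihood numerically instead of conditioning on the first p values
p   <- 10
fit <- arima(x, order = c(p, 0, 0), include.mean = FALSE, method = "ML")
fit$coef     # AR coefficients
fit$sigma2   # innovation variance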

In the context of linear regression (initial values known), we can use a non-informative prior for the parameters,
$$p(\phi, \sigma^2) \propto 1/\sigma^2.$$
Using Bayes' theorem, the posterior distribution of (φ, σ²) is given by:

- φ conditional on σ² follows a multivariate Normal N(φ | φ̂, σ²(F'F)^{-1}).
- The marginal posterior of σ² is an Inverse Gamma IG((n − 2p)/2, R/2).
- The marginal posterior of φ is a multivariate t distribution with n − 2p degrees of freedom and location parameter φ̂.

For posterior inference using the complete model likelihood, we require numerical techniques such as Markov chain Monte Carlo (MCMC) methods.

Inference on characteristic reciprocal roots

For this, we need to find the solutions of the equation Φ(B) = 0, where Φ(B) is the characteristic polynomial of the AR process. We have closed-form expressions for the characteristic roots if p = 1, 2, but for p > 2 it becomes very hard to obtain the solutions analytically. We can use the R/Splus function polyroot to find the roots.

ph = c(2*0.95*cos(0.5), -0.95^2)
ph
[1]  1.667407 -0.902500
polyroot(c(1, -ph))
[1] 0.9237711+0.5046585i 0.9237711-0.5046585i
1/polyroot(c(1, -ph))
[1] 0.8337034-0.4554543i 0.8337034+0.4554543i
# For modulus and frequencies
Mod(1/polyroot(c(1, -ph)))
[1] 0.95 0.95
Arg(1/polyroot(c(1, -ph)))
[1] -0.5  0.5
2*pi/Arg(1/polyroot(c(1, -ph)))
[1] -12.56637  12.56637

Given some estimate φ̂, we can compute estimates α̂_1, α̂_2, ..., α̂_p of the reciprocal roots.

Bayesian inference for roots. The mapping from φ to the roots is mathematically intractable (unless p = 1, 2). In general, there is no closed-form expression for the posterior distribution of the α's, even though we know (φ, σ²) follows a Normal/Inverse Gamma posterior. We will rely on Monte Carlo simulation to study the posterior distribution of the α's.

Monte Carlo simulation scheme:

- Simulate a value of σ² from the IG((n − 2p)/2, R/2) distribution.
- Simulate the vector of coefficients φ from the N(φ | φ̂, σ²(F'F)^{-1}) distribution.
- With the simulated value of φ, solve the equation Φ(B) = 0. This yields one sample from the posterior distribution of the α's.

- Iterate until we have collected M samples, then summarize them.

This algorithm produces exact Monte Carlo samples from the posterior distribution: it does not require convergence monitoring or a burn-in period.

To simulate from a multivariate Normal distribution, we use the Cholesky decomposition (the chol function in R). If z is a k-dimensional vector that follows a multivariate Normal N(z | m, V), where m is the mean and V is the covariance matrix, chol allows us to find a matrix A such that V = AA'.

To simulate a vector z, we generate y_1, y_2, ..., y_k iid N(0, 1) random deviates and set z = m + Ay, where y = (y_1, y_2, ..., y_k)' and A is the Cholesky factor of V. If V is numerically close to a singular matrix, we can use the singular value decomposition of V (svd) instead of the Cholesky decomposition.
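A small helper along these lines (a sketch, not from the original notes; rmvnorm1 is a hypothetical name) draws one N(m, V) vector via Cholesky and falls back on the singular value decomposition when chol fails. For instance, rmvnorm1(phhat, v * FF.inv) could replace the hand-coded Cholesky step inside the ar10 function given at the end of these notes.

# Draw one vector z ~ N(m, V): Cholesky with an SVD fallback for nearly singular V (sketch)
rmvnorm1 <- function(m, V) {
  k <- length(m)
  A <- tryCatch(t(chol(V)),                    # A such that V = AA'
                error = function(e) {
                  s <- svd(V)                  # V = U D U' for symmetric V
                  s$u %*% diag(sqrt(s$d), k)   # this A also satisfies AA' = V
                })
  as.vector(m + A %*% rnorm(k))
}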

Identification of AR roots

The AR model is invariant to relabelings of the α's, since the characteristic polynomial factors as
$$\Phi(B) = \prod_{i=1}^{p} (1 - \alpha_i B).$$
The values of the AR coefficients φ are invariant to permutations of the sub-indices of the α's. For identification, complex reciprocal roots are ordered either by modulus (the r's) or by frequency (the ω's). If we have C complex pairs of reciprocal roots ordered by modulus, then
$$\alpha_1 = r_1 e^{\pm i\omega_1}; \quad \alpha_2 = r_2 e^{\pm i\omega_2}; \quad \ldots; \quad \alpha_C = r_C e^{\pm i\omega_C}$$
with the condition r_1 > r_2 > r_3 > ... > r_C.

Ordering the roots by frequency means that ω_1 < ω_2 < ... < ω_C. For real roots, the natural choice is to order them from smallest to largest: for R real roots, r_1 < r_2 < r_3 < ... < r_R.

Example

EEG trace of 400 observations. The ACF/PACF of this series suggests an AR model with p = 8, 9, or 10. Fitting an AR(10) in R/Splus (it could also have been an AR(8) or AR(9)) gives the MOM estimates
φ̂ = (0.27, 0.03, -0.16, -0.18, -0.14, -0.15, -0.23, -0.10, -0.05, -0.11)
and σ̂² = 3808.58. The reciprocal roots associated with this φ̂ vector, denoted by (r_i, ω_i), are
(.97, .48); (.80, 2.21); (.75, 2.86); (.75, .99); (.74, 1.48).
No real reciprocal roots were obtained for this AR fit.

The MLE (rounded to 2 digits) is φ̂ = (0.25, 0.04, -0.17, -0.17, -0.13, -0.17, -0.24, -0.11, -0.05, -0.11), and σ̂² = 3657.47. The unbiased estimate of σ² is 3753.72. The MLE of each reciprocal root in terms of modulus and frequency:
(.97, .48); (.80, 2.22); (.77, 2.85); (.75, 1.47); (.74, .99)

a  = ar(eeg, aic = FALSE, order.max = 10, method = "mle")
ph = a$ar
v  = a$var.pred
round(ph, 2)
  ar1   ar2   ar3   ar4   ar5   ar6   ar7   ar8
 0.25  0.03 -0.16 -0.17 -0.13 -0.17 -0.24 -0.11
  ar9  ar10
-0.05 -0.11
round(v, 2)
[1] 3609.46
alpha = 1/polyroot(c(1, -ph))
round(m <- Mod(alpha), 2)
[1] 0.97 0.80 0.80 0.97 0.74 0.76 0.75 0.74 0.75 0.76
round(w <- Arg(alpha), 2)
[1] -0.48 -2.22  2.22  0.48 -1.00 -2.85  1.47  1.00 -1.47  2.85
# order by modulus
m = m[w > 0]
w = w[w > 0]
rev(m[order(m)])
[1] 0.97 0.79 0.76 0.75 0.74
rev(w[order(m)])
[1] 0.48 2.22 2.85 1.47 0.99

We now show the results of a Bayesian analysis of the AR(10) model, based on 5000 Monte Carlo samples. The graphs include:

- Histograms of posterior samples of the φ coefficients and the variance σ².
- Histograms of posterior samples of the α's ordered by modulus.
- Histograms of posterior samples of the α's ordered by periods (or frequencies).

The α's are shown in terms of the pairs (r_i, ω_i) or (r_i, λ_i), i = 1, 2, ..., 5.

[Figure: Posterior histograms for φ_1 – φ_6]

[Figure: Posterior histograms for φ_7 – φ_10 and σ²]

[Figure: Posterior histograms of (r_i, ω_i), i = 1, 2, 3, ordered by modulus]

[Figure: Posterior histograms of (r_i, ω_i), i = 4, 5, ordered by modulus]

[Figure: Posterior histograms of (r_i, λ_i), i = 1, 2, 3, ordered by modulus]

[Figure: Posterior histograms of (r_i, λ_i), i = 4, 5, ordered by modulus]

[Figure: Posterior histograms of (r_i, ω_i), i = 1, 2, 3, ordered by wavelength]

[Figure: Posterior histograms of (r_i, ω_i), i = 4, 5, ordered by wavelength]

[Figure: Posterior histograms of (r_i, λ_i), i = 1, 2, 3, ordered by wavelength]

[Figure: Posterior histograms of (r_i, λ_i), i = 4, 5, ordered by wavelength]

[Figure: Residual diagnostics at the MLE φ̂: time plot, histogram, normal Q-Q plot, ACF, partial ACF, and raw periodogram of the residuals R]

[Figure: AR(10) spectrum computed at the MLE]

Computing the approximate test for white noise on the model residuals gives T = 17138.13 and a p-value of 0.839543. The p-value is large, so we have no evidence against the null hypothesis that the residuals follow a white noise process.
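As an alternative diagnostic (my own suggestion, not the test used in these notes), a Ljung-Box portmanteau test on the residual vector R from the fit checks the same white-noise null hypothesis:

# Ljung-Box test on the AR(10) residuals; fitdf = 10 accounts for the
# 10 estimated AR coefficients (alternative white-noise diagnostic)
Box.test(R, lag = 20, type = "Ljung-Box", fitdf = 10)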

Using this Bayesian approach, we can also make statements about whether or not the process is stationary. For our 5000 samples, we can count how many of the samples produce a characteristic reciprocal root with modulus greater than one. The relative frequency of the event "modulus greater than one" gives us an estimate of the posterior probability that this EEG series comes from a non-stationary process. For these 5000 samples, the posterior probability is 0.026. In a similar way, we can compute the posterior probability of having at least one real root under the AR(10) process (0.0032).
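A sketch of these two counts, assuming the 5000 × 10 matrix roots of reciprocal-root draws constructed in the code below (the 1e-9 tolerance for "numerically real" is my own choice):

# Posterior probability of non-stationarity: fraction of draws whose
# largest reciprocal-root modulus exceeds one
max.mod <- apply(Mod(roots), 1, max)
mean(max.mod > 1)
# Posterior probability of at least one (numerically) real reciprocal root
any.real <- apply(abs(Im(roots)) < 1e-9, 1, any)
mean(any.real)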

Code for Bayesian AR estimation

eeg = scan("eeg")
# Building the F matrix for the AR(10)
n  <- length(eeg)
x  <- eeg[11:n]
f1 <- eeg[10:(n-1)];  f2  <- eeg[9:(n-2)]
f3 <- eeg[8:(n-3)];   f4  <- eeg[7:(n-4)]
f5 <- eeg[6:(n-5)];   f6  <- eeg[5:(n-6)]
f7 <- eeg[4:(n-7)];   f8  <- eeg[3:(n-8)]
f9 <- eeg[2:(n-9)];   f10 <- eeg[1:(n-10)]
#
Fdes <- cbind(f1, f2, f3, f4, f5, f6, f7, f8, f9, f10)

# F'F, the MLE and the residual sum of squares
FF.t   <- t(Fdes) %*% Fdes
FF.inv <- solve(t(Fdes) %*% Fdes)
phhat  <- FF.inv %*% t(Fdes) %*% x
R      <- x - Fdes %*% phhat
s      <- (ssq <- sum(R * R)) / (n - 10)
# Compute the Cholesky decomposition of (F'F)^{-1}
FF.inv    <- 0.5 * (FF.inv + t(FF.inv))   # force exact symmetry
FF.inv.cd <- t(chol(FF.inv))
# ar10: function that draws from the
# posterior of (phi, sigma^2)

ar10 <- function(m) {
  phhatmat <- matrix(phhat, 10, m)
  # generation of sigma^2: ssq/chi^2_(n-20), i.e. draws from IG((n-2p)/2, R/2)
  v <- rchisq(m, (n - 20))
  v <- ssq / v
  # generation of phi: N(phhat, sigma^2 (F'F)^{-1}) via the Cholesky factor
  norbi <- matrix(rnorm(10 * m), ncol = m)
  normv <- t(sqrt(v) * t(norbi))
  ph    <- (FF.inv.cd %*% normv) + phhatmat
  # result: one row of ph per Monte Carlo draw
  ph <- t(ph)
  return(list(ph = ph, v = v))
}

phlist = ar10(5000)

phsim  = phlist$ph
sigma2 = phlist$v
# Compute the reciprocal roots for each draw of the coefficients
coef  = cbind(rep(1, 5000), -phsim)
roots = 1 / t(apply(coef, 1, polyroot))
mod = Mod(roots)
om  = Arg(roots)
#
# Ordering by modulus
lam = 2 * pi / om
lam[om <= 1e-9 | om >= pi - 1e-9] = 0
om[om <= 1e-9 | om >= pi - 1e-9]  = 0
mod[om <= 1e-9 | om >= pi - 1e-9] = 0

#
lm = matrix(NA, nrow = dim(lam)[1], ncol = dim(lam)[2])
md = matrix(NA, nrow = dim(mod)[1], ncol = dim(mod)[2])
w  = matrix(NA, nrow = dim(om)[1],  ncol = dim(mod)[2])
#
for (i in 1:5000) {
  ind     = order(mod[i, ])
  w[i, ]  = om[i, ind]
  lm[i, ] = lam[i, ind]
  md[i, ] = mod[i, ind]
}
# Some residual analysis
ts.plot(as.ts(R))
hist(R, nclass = 30, prob = T, density = -1)
qqnorm(R)
qqline(R)