STOR 356: Summary Course Notes


STOR 356: Summary Course Notes
Richard L. Smith
Department of Statistics and Operations Research
University of North Carolina, Chapel Hill, NC
February 19, 2008

Course text: Introduction to Time Series and Forecasting by Peter J. Brockwell and Richard A. Davis (Springer Verlag, Second Edition, 2003).

1 Introduction to Time Series (Chapter 1)

1.1 Basic definitions

Notation: $X_t$, $t = 0, \pm 1, \pm 2, \ldots$, will refer to a sequence of random variables that define a time series; $x_t$, $t = 0, \pm 1, \pm 2, \ldots$, to the actual observations. Usually we see a finite sequence $x_1, \ldots, x_n$.

To define the joint distributions of a time series, we would need to evaluate all quantities of the form $\Pr\{X_1 \le x_1, \ldots, X_n \le x_n\}$. In most cases it's not realistic to specify that level of detail, so we work with second-order properties, i.e. the means, variances and covariances of the $X_t$. The mean is usually written $E(X_t)$ or $\mu_X(t)$, and the covariances are $\mathrm{Cov}(X_t, X_{t+h})$ for any $t, h$. Recall the basic definition $\mathrm{Cov}(X, Y) = E(XY) - E(X)E(Y)$.

We state a few basic facts about means and covariances. Suppose $X_1, \ldots, X_n, Y_1, \ldots, Y_n$ are random variables and $a_0, a_1, \ldots, a_n, b_0, b_1, \ldots, b_n$ are constants. Then

    $E(a_0 + a_1 X_1 + \cdots + a_n X_n) = a_0 + a_1 E(X_1) + \cdots + a_n E(X_n)$,    (1)

    $\mathrm{Cov}(a_0 + a_1 X_1 + \cdots + a_n X_n,\; b_0 + b_1 Y_1 + \cdots + b_n Y_n) = \sum_{i=1}^{n} \sum_{j=1}^{n} a_i b_j \mathrm{Cov}(X_i, Y_j)$.    (2)

Trivial examples:

1. Independent, identically distributed (IID) noise, e.g. $X_t \sim N(0, \sigma^2)$ for each $t$.

2. Binary random variables: the name given to any time series where the outcomes may take just two values. For example, the results of a sequence of UNC basketball games, with $X_t = 1$ if UNC wins and $X_t = 0$ if UNC loses. (Question: would these in practice be treated as independent?)

3. A random walk defined by $S_n = X_1 + \cdots + X_n$ where $X_1, X_2, \ldots$ are IID.

1.2 Trend and seasonality

Typically write

    $X_t = m_t + s_t + Y_t$    (3)

where $m_t$ is the trend, $s_t$ is the seasonal variation, and $Y_t$ has mean 0. Of course, not every time series has both trend and seasonal variation, but most series do have trends, and many have seasonal variation as well.

One example is a polynomial trend:

    $m_t = a_0 + a_1 t + \cdots + a_p t^p$    (4)

which is a $p$th order polynomial regression. The ITSM package contains an option to fit this automatically; of course, since this is just a simple linear regression, you could also use SAS or any regression package. [Examples: U.S. population data, Lake Huron data.]

However, in many cases the residuals are autocorrelated, which means the $Y_t$ in equation (3) are not independent; instead we can measure the correlations of the $Y_t$ (called autocorrelated because the $Y_t$'s are correlated with each other). Autocorrelation is both good and bad:

Bad because it makes the standard interpretation of regression more complicated (for example, one cannot rely on the standard errors computed by a standard linear regression package, because the usual formula for those standard errors assumes the residuals are independent).

Good because we can exploit the autocorrelations to do forecasting! This is the theme of much of this course.

Harmonic regression is the alternative to equation (4) when we have seasonal data:

    $s_t = a_0 + \sum_{j=1}^{J} (a_j \cos \lambda_j t + b_j \sin \lambda_j t)$    (5)

where $\lambda_j$ is the frequency of the $j$th seasonal component. Often we are dealing with a time series of known period $d$; e.g. monthly time series very often show an annual cycle, so $d = 12$ in that instance. In that case, it is usual to write $\lambda_j = \frac{2\pi j}{d}$ (known as the $j$th harmonic), and continue adding more harmonic terms (i.e. increasing the value of $J$) until we have enough to represent the seasonal component faithfully.

1.3 General strategy for time series analysis

Not every time series analysis uses the following recipe, but most do:

1. Plot the data.

2. Look for features, e.g. trend, seasonality, sharp changes in behavior, outliers.

3. Remove trend and seasonal component to get stationary residuals (defined later). To get to this point, we may need to transform the original data (e.g. take logarithms, difference the data by defining transformations of the form $Y_t = X_t - X_{t-h}$, etc.).

4. Choose a model for the residuals. Much of this course will consider the most common of statistical time series models, the autoregressive moving average process, abbreviated ARMA.

5. Forecast the residuals and report the results.

There's also an alternative approach known as spectral analysis, which is covered in Chapter 4 of the course text. However, we won't be covering that in this course.

1.4 More formal definitions

Suppose $X_t$ is a time series with $E(X_t^2) < \infty$ for each $t$. This means, in particular, that for each $t$, the mean and variance of $X_t$ are both well-defined and finite, and hence all the covariances of $X_s, X_t$, for general $s$ and $t$, are well-defined and finite as well.

Mean: $\mu_X(t) = E(X_t)$.
Covariance: $\gamma_X(r, s) = E\{(X_r - \mu_X(r))(X_s - \mu_X(s))\}$.

The series is weakly stationary if

1. $\mu_X(t)$ is independent of $t$;
2. $\gamma_X(t, t+h)$ is independent of $t$ for each $h$.

A series is strongly stationary (or strictly stationary) if

    $\Pr\{X_1 \le x_1, \ldots, X_n \le x_n\} = \Pr\{X_{h+1} \le x_1, \ldots, X_{h+n} \le x_n\}$    (6)

for every possible choice of $h, n, x_1, \ldots, x_n$. However, in most practical cases it's too hard to check a condition like (6), so we only assume weak stationarity.

For stationary time series, the autocovariance function (ACVF) is

    $\gamma_X(h) = \mathrm{Cov}(X_{t+h}, X_t), \quad h = 0, \pm 1, \pm 2, \ldots$

The autocorrelation function or ACF is

    $\rho_X(h) = \gamma_X(h) / \gamma_X(0)$.

Note about this definition. The usual definition of the correlation of two random variables $Y$ and $Z$ is $\mathrm{Corr}(Y, Z) = \mathrm{Cov}(Y, Z) / \sqrt{\mathrm{Var}(Y)\,\mathrm{Var}(Z)}$. However, in this case, if we identify $Y$ with $X_{t+h}$ and $Z$ with $X_t$, we find $\mathrm{Cov}(Y, Z) = \gamma_X(h)$, while $\mathrm{Var}(Y) = \mathrm{Var}(Z) = \gamma_X(0)$. So in this case $\sqrt{\mathrm{Var}(Y)\,\mathrm{Var}(Z)} = \gamma_X(0)$, hence the result.

Examples:

1. IID process: we say $X_t \sim \mathrm{IID}(0, \sigma^2)$ if the values of $X_t$ are independent with a common distribution that has mean 0 and variance $\sigma^2$ (so $N(0, \sigma^2)$, in other words a normal distribution with mean 0 and variance $\sigma^2$, would be a special case, but in general we don't require a normal distribution). In that case

    $\gamma_X(h) = \sigma^2$ if $h = 0$, and $0$ if $h \ne 0$.    (7)

2. White noise: we say $X_t \sim WN(0, \sigma^2)$ if the variables are uncorrelated (in other words, $\mathrm{Cov}(X_s, X_t) = 0$ except when $s = t$). The ACVF is again given by (7), but there are cases when random variables are uncorrelated without being independent [for an example, see Problem 1.8 of the text], so this is not the same as the previous case.

3. For a random walk we have, for $t > 0$ and $h > 0$,

    $\mathrm{Cov}(S_t, S_{t+h}) = \sigma^2 t$    (8)

which is not independent of $t$. So this is an example of a series which is not stationary. [Proof of (8): use (2). We have $\mathrm{Cov}(X_1 + \cdots + X_t,\; X_1 + \cdots + X_{t+h}) = \sum_{i=1}^{t} \sum_{j=1}^{t+h} \mathrm{Cov}(X_i, X_j) = \sum_{i=1}^{t} \mathrm{Cov}(X_i, X_i) = t\sigma^2$, since $\mathrm{Cov}(X_i, X_j) = 0$ whenever $i \ne j$.]

4. The moving average process of order 1, abbreviated MA(1) and given by $X_t = Z_t + \theta Z_{t-1}$ where $Z_t \sim WN(0, \sigma^2)$. In this case,

    $\gamma_X(h) = \sigma^2(1 + \theta^2)$ if $h = 0$; $\sigma^2\theta$ if $h = \pm 1$; $0$ if $|h| > 1$.    (9)

The series is therefore stationary. [(9) can be proved using (2).] The ACF is

    $\rho_X(h) = 1$ if $h = 0$; $\frac{\theta}{1+\theta^2}$ if $h = \pm 1$; $0$ if $|h| > 1$.    (10)

5. The autoregressive process of order 1, abbreviated AR(1) and given by $X_t = \phi X_{t-1} + Z_t$ where $Z_t \sim WN(0, \sigma^2)$. In this case, if we write, for $h > 0$,

    $\gamma_X(h) = \mathrm{Cov}(X_t, X_{t+h}) = \mathrm{Cov}(X_t, \phi X_{t+h-1} + Z_{t+h}) = \phi\,\gamma_X(h-1)$,

which is true because $Z_{t+h}$ is not correlated with $X_s$, $s \le t$ (a consequence of causality). It follows by induction that $\gamma_X(h) = \phi^h \gamma_X(0)$, $h \ge 0$.

We therefore have

    $\rho_X(h) = \phi^h$.    (11)

[Note we always have $\rho_X(-h) = \rho_X(h)$, so (11) applies to $h < 0$ as well as $h \ge 0$.] We also have

    $\gamma_X(0) = \mathrm{Var}(X_t) = \mathrm{Var}(\phi X_{t-1}) + \mathrm{Var}(Z_t) = \phi^2\gamma_X(0) + \sigma^2$,

from which we deduce

    $\gamma_X(0) = \frac{\sigma^2}{1 - \phi^2}$    (12)

provided $|\phi| < 1$. As we shall see later, the AR(1) process does not define a causal process when $|\phi| \ge 1$.

1.5 Sample ACVF and ACF

Assume observed data $x_1, \ldots, x_n$. First define $\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$. The sample ACVF at lag $h$, defined for $-n < h < n$, is

    $\hat{\gamma}(h) = \frac{1}{n} \sum_{t=1}^{n-|h|} (x_{t+|h|} - \bar{x})(x_t - \bar{x})$.

The sample ACF is

    $\hat{\rho}(h) = \hat{\gamma}(h) / \hat{\gamma}(0)$.

Comment. We always divide by $n$, not $n - |h|$, as might seem more logical since the sum is over $n - |h|$ terms. The reason for this is to ensure that the sample ACVF and ACF are non-negative definite (to be defined in Chapter 2).

Some distribution theory: if the data are IID N(0, 1), then $\hat{\rho}(h)$ has approximately the distribution $N(0, \frac{1}{n})$ for all $h \ne 0$. This means that if we draw boundaries at $\pm\frac{1.96}{\sqrt{n}}$ (or more simply $\pm\frac{2}{\sqrt{n}}$), these will be approximate 95% confidence bands, on the assumption that the true series is white noise (things are more complicated if the time series is not white noise; see Bartlett's formula in Chapter 2). A small code sketch of this calculation follows at the end of this section.

In ITSM, if you select Statistics, then ACF/PACF, then Sample, the plot of the Sample ACF appears, together with confidence bands: these confidence bands are calculated by the $\pm\frac{1.96}{\sqrt{n}}$ formula. [PACF stands for Partial Autocorrelation Function, which we haven't defined yet, but we will be seeing it later in the course.]

Remark. If the ACF only shows a very slow decrease as $h$ increases, that is often taken as an indication that the series is non-stationary, and therefore that some other operation (such as removing a polynomial trend, or differencing) should be applied before doing a formal time series analysis.
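As an illustration (not from the course text or ITSM), here is a minimal sketch in Python of the sample ACF and the white-noise bands; the data are simulated, and the function names are my own.

```python
# A minimal sketch: sample ACVF/ACF computed by hand (dividing by n, as in
# section 1.5) and the approximate +/- 1.96/sqrt(n) white-noise bands.
import numpy as np

def sample_acvf(x, h):
    """Sample autocovariance at lag h, dividing by n (not n - |h|)."""
    x = np.asarray(x, dtype=float)
    n, h = len(x), abs(h)
    xbar = x.mean()
    return np.sum((x[h:] - xbar) * (x[:n - h] - xbar)) / n

def sample_acf(x, max_lag):
    g0 = sample_acvf(x, 0)
    return np.array([sample_acvf(x, h) / g0 for h in range(max_lag + 1)])

rng = np.random.default_rng(0)
x = rng.normal(size=200)            # simulated IID N(0,1) noise
rho = sample_acf(x, 20)
band = 1.96 / np.sqrt(len(x))       # approximate 95% white-noise bands
print(rho[:5])
print("fraction of lags 1..20 outside bands:",
      np.mean(np.abs(rho[1:]) > band))
```

Under the white-noise null, roughly 5% of the sample autocorrelations should fall outside the bands, which is what the last line checks.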

1.6 Example: Lake Huron data

[Note: This is based on an example in the text, but I've independently worked it out here just to emphasize the key points in more detail.]

Figure 1(a) shows the levels of Lake Huron in each year, plotted against year. Also shown on the plot is the least-squares regression line, exactly as you might have obtained in STOR 355. The results of the analysis are shown in Table 1. In particular, this table shows that the slope of the linear regression has an estimate of 0.04 and a standard error of 0.004, which is very highly significant as judged by the usual t statistic.

Figure 1. (a) Plot of raw data, with regression line formed by regression of annual value on time. (b) Residuals from the regression in (a): residual for year t plotted against residual in year t-1. The straight line is the result of a linear regression model fitted to these residuals.

Table 1. Ordinary least squares regression fitted to the data in Figure 1(a): regression of level of lake (y variable) on year (x variable), giving the estimate, standard error, t statistic and p-value for the intercept and slope.

However, let's go into a little more detail about the residuals from this regression. From now on, write $y_t$ for the residual at time $t$ from this regression. We can plot $y_t$ against $y_{t-1}$ (Figure 1(b)), which looks like an approximate straight line with slope 0.791. From this we deduce the (approximate) model $Y_t = 0.791\,Y_{t-1} + Z_t$ (AR(1) model). However, it is also possible to consider higher-order autoregressive models, such as the AR(2) model $Y_t = \phi_1 Y_{t-1} + \phi_2 Y_{t-2} + Z_t$. (A code sketch of this detrend-then-regress-on-lags calculation appears below.)
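The following is a minimal sketch of the two-step calculation just described: fit a linear trend by OLS, then regress the residuals on their own lagged values. The series here is simulated (it is not the Lake Huron dataset), and the helper `lag_regression` is my own.

```python
# A minimal sketch: detrend by OLS, then fit AR(p) to the residuals by
# regressing y_t on y_{t-1}, ..., y_{t-p} (illustrative data only).
import numpy as np

def lag_regression(y, p):
    """OLS regression of y_t on an intercept and y_{t-1}, ..., y_{t-p}."""
    n = len(y)
    X = np.column_stack([np.ones(n - p)] +
                        [y[p - j - 1:n - j - 1] for j in range(p)])
    coef, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return coef  # intercept, phi_1, ..., phi_p

rng = np.random.default_rng(1)
year = np.arange(1875, 1973)
level = 580 - 0.02 * (year - year[0]) + rng.normal(scale=0.7, size=len(year))

slope, intercept = np.polyfit(year, level, 1)   # Step 1: linear trend, as in Table 1
resid = level - (intercept + slope * year)

print("AR(1) fit:", lag_regression(resid, 1))   # Step 2: lagged-residual regressions
print("AR(2) fit:", lag_regression(resid, 2))
```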

If we continue the regression as far as the $\phi_p Y_{t-p}$ term, this is called an autoregressive model of order $p$, abbreviated AR($p$). As an example, Table 2 shows the results for the cases $p = 1, 2, 3$, fitted by the usual least-squares regression methods (which you could check for yourself in SAS). We can see that in regression (b) both the residuals for years $t-1$ and $t-2$ are statistically significant, but in (c), the residual for year $t-3$ is not significant. Therefore, we tentatively conclude that the correct model is AR(2).

Table 2. Regression models fitted to the residuals from the OLS regression in Table 1, each reporting the estimate, standard error, t statistic and p-value: (a) residual for year t regressed on the residual for year t-1; (b) residual for year t regressed on the residuals for years t-1 and t-2; (c) residual for year t regressed on the residuals for years t-1, t-2 and t-3.

Where we're going with this example. Although a full treatment of this example cannot be completed until later in the course, let me explain where we're going with it. It turns out that the fact that the residuals are autocorrelated is of very great importance in interpreting the first regression we did, i.e. the regression of level of lake against year. It turns out, using the more sophisticated AICC technique that we will learn later in the course, that the AR(2) model is indeed the optimal time series model for the residuals. However, when we refit the initial linear regression taking the autocorrelation into account (ITSM provides a way to do this), the slope is only very slightly changed but its standard error is approximately doubled compared with Table 1. In this case this doesn't change the conclusion that the trend is statistically significant, but if it were a less clear-cut trend, it very well could change that conclusion. This kind of analysis is critical in all sorts of environmental change problems (including looking at temperature datasets for evidence of global warming), because if we fitted a linear regression without taking account of autocorrelation, we could very well get the wrong answer.

1.7 Estimation of trend and seasonal components

Always start by plotting the data.

The general model is $X_t = m_t + s_t + Y_t$ where $m_t$ is the trend, $s_t$ is the seasonal component and $Y_t$ is a stationary noise process (but not necessarily white noise, which would mean uncorrelated).

Two basic approaches to removing trend: (a) regression, (b) differencing.

1.7.1 Models with trend but no seasonality

Here $X_t = m_t + Y_t$.

(a) Smoothing with a moving average (MA) filter:

    $W_t = \frac{1}{2q+1} \sum_{j=-q}^{q} X_{t-j}$.

Treat $W_t$ as an approximation to $m_t$.

[Problem: what to do at the endpoints. ITSM defines $X_t = X_1$ for all $t \le 0$ and $X_t = X_n$ for all $t > n$, for the purpose of computing the MA filter.]

Think of this as a special case of the general filter

    $W_t = \sum_{j=-\infty}^{\infty} a_j X_{t-j}$

in which $a_j = \frac{1}{2q+1}$ for $|j| \le q$ and $a_j = 0$ otherwise. This is a low-pass filter in the sense that it filters out the high-frequency variation, retaining the slow long-term trend.

There are more complicated filters, e.g. Spencer's 15-point filter,

    $[a_0, a_1, \ldots, a_7] = \frac{1}{320}[74, 67, 46, 21, 3, -5, -6, -3]$

with $a_j = 0$ for $|j| \ge 8$ and $a_{-j} = a_j$ for $j > 0$. As an exercise, check the following: if $m_t = c_0 + c_1 t + c_2 t^2 + c_3 t^3$ then $\sum_{j=-7}^{7} a_j m_{t-j} = m_t$.

Discussion. Obviously, the simple equally-weighted moving average over $2q+1$ data points is the simplest one to handle. The advantage of something more complicated like Spencer's 15-point rule is that it preserves cubic trends: if $X_t = m_t + Y_t$ with $m_t$ a cubic polynomial, then the filter preserves $m_t$ exactly but almost filters out the noise $Y_t$. However, it's better than fitting a cubic regression, because the latter assumes the same cubic polynomial fits over the whole range of the data, whereas Spencer only assumes the cubic polynomial fits over a 15-point window. In other words, Spencer's filter will be a good procedure if the trend locally looks like a cubic polynomial even though this may not be true globally.

This can be implemented in ITSM: choose Smooth from the menu and then Moving Average. ITSM allows you to specify your own weights. (A code sketch of both filters follows.)
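Here is a minimal sketch (not ITSM itself; the series and helper names are my own) of the equally weighted moving-average filter with the ITSM-style endpoint extension, and of Spencer's 15-point filter, including a check of its cubic-preserving property.

```python
# A minimal sketch: moving-average and Spencer smoothing filters.
import numpy as np

SPENCER = np.array([-3, -6, -5, 3, 21, 46, 67, 74,
                    67, 46, 21, 3, -5, -6, -3]) / 320.0

def smooth(x, weights):
    """Apply a symmetric filter, extending x by its first/last value at the ends."""
    q = len(weights) // 2
    xpad = np.concatenate([np.full(q, x[0]), x, np.full(q, x[-1])])
    # 'valid' convolution returns exactly len(x) values here
    return np.convolve(xpad, weights[::-1], mode="valid")

def ma_filter(x, q):
    return smooth(x, np.full(2 * q + 1, 1.0 / (2 * q + 1)))

# Check that Spencer's filter reproduces a cubic trend exactly (away from the ends)
t = np.arange(100, dtype=float)
cubic = 1.0 + 0.5 * t - 0.02 * t**2 + 0.001 * t**3
print(np.allclose(smooth(cubic, SPENCER)[7:-7], cubic[7:-7]))  # True
```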

(b) Exponential smoothing

Defined by

    $\hat{m}_1 = X_1$, $\quad \hat{m}_t = \alpha X_t + (1 - \alpha)\hat{m}_{t-1}$, $\quad t > 1$.

Alternatively and equivalently,

    $\hat{m}_t = \sum_{j=0}^{t-2} \alpha(1 - \alpha)^j X_{t-j} + (1 - \alpha)^{t-1} X_1$.

The idea is to put the highest weight on the most recent observations. Unlike, say, moving average weighting, it is a real-time smoother (the calculation of $\hat{m}_t$ depends only on observations up to time $t$). Here $\alpha \in (0, 1)$ is an adjustable parameter which may be chosen visually to give what appears to be the right degree of smoothing (this option is available in ITSM).

(c) Smoothing by elimination of high-frequency components

The idea is to use the FFT (Fast Fourier Transform) to transform the observed series to a sum of sine/cosine terms, set all the highest-frequency terms to 0, then transform back to get something like the original series but with all the high-frequency components removed. The proportion of frequencies removed is another adjustable parameter (like $\alpha$ in exponential smoothing) and often has a similar effect. See ITSM for implementation and examples.

(d) Polynomial fitting

Fit the polynomial $m_t = a_0 + a_1 t + \cdots + a_p t^p$ by ordinary least squares regression.

(e) Differencing

Define $B X_t = X_{t-1}$. $B$ is often called the backshift operator and is very commonly used in time series analysis. Next, let $\nabla = 1 - B$ be the difference operator. So $\nabla X_t = X_t - X_{t-1}$, $\nabla^2 X_t = \nabla(\nabla X_t) = X_t - 2X_{t-1} + X_{t-2}$, etc.

In practice, one often applies $\nabla$ once and examines plots of the data, ACF/PACF, etc. to see whether the series looks stationary. If not, apply $\nabla$ again and re-examine. In principle, we could apply the difference operator as many times as we like, though in practice it is rare to go beyond $\nabla^2$. (A code sketch of exponential smoothing and differencing follows.)
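The following minimal sketch illustrates the exponential smoothing recursion and simple differencing; the series is simulated and the function name is mine, not ITSM's.

```python
# A minimal sketch: exponential smoothing and differencing with numpy only.
import numpy as np

def exp_smooth(x, alpha):
    """m_1 = x_1; m_t = alpha*x_t + (1-alpha)*m_{t-1} for t > 1."""
    m = np.empty(len(x))
    m[0] = x[0]
    for t in range(1, len(x)):
        m[t] = alpha * x[t] + (1 - alpha) * m[t - 1]
    return m

rng = np.random.default_rng(2)
t = np.arange(120)
x = 0.05 * t + rng.normal(size=t.size)   # linear trend plus noise

m_hat = exp_smooth(x, alpha=0.2)         # real-time estimate of the trend
dx = np.diff(x)                          # first difference, (1 - B) X_t
d2x = np.diff(x, n=2)                    # second difference
print(m_hat[-3:], dx.mean(), d2x.mean())
```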

1.7.2 Trend and seasonality

Now consider the more general case where $X_t = m_t + s_t + Y_t$ with $s_t$ a seasonal component. In practice, the period $d$ of the seasonal component is almost always known, and we assume that here.

(a) Estimation by moving averages.

(i) Trend by a moving average filter of period $d$. If $d = 2q$ (even), we use a centered moving average, obtained by downweighting the endpoints:

    $\hat{m}_t = \frac{0.5 X_{t-q} + X_{t-q+1} + \cdots + X_{t+q-1} + 0.5 X_{t+q}}{d}$.

If $d$ is odd, just use the usual moving average over $X_{t-q}, \ldots, X_{t+q}$ where $2q + 1 = d$.

(ii) Estimating the seasonal component: for each $k = 1, \ldots, d$, let $w_k$ be the average of all deviations $(x_{k+jd} - \hat{m}_{k+jd})$ over $q < k + jd \le n - q$. [For example, if these are monthly data, $w_k$ is the average over all deviations for the $k$th month, computed over the window for which $\hat{m}_t$ is well defined.] Then let

    $\hat{s}_k = w_k - \frac{1}{d}\sum_{i=1}^{d} w_i$, $\quad 1 \le k \le d$  (to ensure the average of the $\hat{s}_k$ is 0),
    $\hat{s}_k = \hat{s}_{k-d}$, $\quad k > d$.

We then define the deseasonalized data $Y_t = X_t - \hat{s}_t$. Finally, analyze $Y_t$ for decomposition into trend and stationary residual. (A code sketch of steps (i) and (ii) appears at the end of this subsection.)

(b) Differencing

Recall the operators $B$ and $\nabla$ from earlier. Now extend to the seasonal case by defining

    $\nabla_d X_t = (1 - B^d) X_t = X_t - X_{t-d}$.

In practice: often apply $\nabla_d$ first to remove the seasonal component, then examine the series for stationarity. It may be necessary to apply $\nabla_d$ a second time (this is rare), or to apply $\nabla$ once or more to remove the long-term trend. It's quite common to apply a combination such as $Y_t = \nabla \nabla_d X_t$ (the order of $\nabla$ and $\nabla_d$ does not matter).

(c) Regression: use a combination of harmonic regression and polynomial regression to estimate/remove both the seasonal and long-term trend components.
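Here is a minimal sketch of the moving-average recipe of part (a), written for an even period $d$; the series is simulated, the seasons are indexed from 0 rather than 1, and the function name is my own.

```python
# A minimal sketch: centered-MA trend estimate plus centered seasonal indices.
import numpy as np

def classical_decompose(x, d):
    """Return (trend_hat, seasonal_hat) by the centered-MA / seasonal-index recipe."""
    x = np.asarray(x, dtype=float)
    n, q = len(x), d // 2
    # (i) centered moving average: (0.5*x[t-q] + x[t-q+1] + ... + x[t+q-1] + 0.5*x[t+q]) / d
    w = np.r_[0.5, np.ones(d - 1), 0.5] / d
    trend = np.full(n, np.nan)
    trend[q:n - q] = np.convolve(x, w, mode="valid")
    # (ii) average deviation for each season over the window where trend is defined,
    #      then center the indices so they average to 0
    dev = x - trend
    wk = np.array([np.nanmean(dev[k::d]) for k in range(d)])
    s = wk - wk.mean()
    seasonal = np.tile(s, n // d + 1)[:n]
    return trend, seasonal

rng = np.random.default_rng(3)
n, d = 96, 12
t = np.arange(n)
x = 10 + 0.1 * t + 2 * np.sin(2 * np.pi * t / d) + rng.normal(scale=0.5, size=n)
trend_hat, seas_hat = classical_decompose(x, d)
deseasonalized = x - seas_hat
print(seas_hat[:d].round(2))
```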

1.8 Checking residuals

The idea: we have detrended the data; now look to see if the residuals are consistent with white noise.

(a) Sample ACF. Plot and compare with the $\pm\frac{1.96}{\sqrt{n}}$ confidence bands. If (almost) all values of the ACF lie within those bands (except for lag 0, where by definition the ACF is 1), conclude the series is white noise.

(b) Portmanteau test: the idea is to improve on the simple ACF test by combining all lags of the ACF into a single test statistic. The original idea was to define

    $Q = n \sum_{j=1}^{h} \hat{\rho}^2(j)$

for some suitable $h$ (not too large compared with $n$, the total length of the series; typically $h < n/4$). Under $H_0$, the null hypothesis that the series is white noise, we have $Q \sim \chi^2_h$ approximately (the chi-squared distribution with $h$ degrees of freedom). Therefore, reject $H_0$ at level $\alpha$ if $Q > \chi^2_h(1-\alpha)$ (the $1-\alpha$ probability point of the $\chi^2_h$ distribution). This can be looked up in the tables of the chi-squared distribution at the back of any elementary statistics text: select degrees of freedom (DF) equal to $h$, choose the desired tail probability $\alpha$ (e.g. 0.05 and 0.01 are common choices), then the table gives the rejection point $\chi^2_h(1-\alpha)$.

A more sophisticated version is the Ljung-Box test:

    $Q_{LB} = n(n+2) \sum_{j=1}^{h} \frac{\hat{\rho}^2(j)}{n-j}$.

$Q_{LB}$ is used exactly the same way as $Q$, but is supposed to give a better approximation to the $\chi^2_h$ distribution. (A code sketch of both statistics follows the list below.)

Other ideas (read in text):

(c) McLeod-Li test (like Ljung-Box, but applied to the squares of the original series)
(d) Turning point test
(e) Difference-sign test
(f) Rank test
(g) Fitting an AR model
(h) Checking for normality via a QQ plot.
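As an illustration, here is a minimal sketch computing both portmanteau statistics and a chi-squared p-value for the Ljung-Box version; the residual series is simulated and the helper names are my own.

```python
# A minimal sketch: Q and Q_LB for a residual series, with a chi^2_h p-value.
import numpy as np
from scipy.stats import chi2

def sample_acf(x, max_lag):
    x = np.asarray(x, dtype=float)
    n, xbar = len(x), np.mean(x)
    g = [np.sum((x[h:] - xbar) * (x[:n - h] - xbar)) / n for h in range(max_lag + 1)]
    return np.array(g) / g[0]

def portmanteau(x, h):
    n = len(x)
    rho = sample_acf(x, h)[1:]                    # lags 1..h
    Q = n * np.sum(rho**2)
    Q_LB = n * (n + 2) * np.sum(rho**2 / (n - np.arange(1, h + 1)))
    return Q, Q_LB, chi2.sf(Q_LB, df=h)           # p-value for the Ljung-Box version

rng = np.random.default_rng(4)
resid = rng.normal(size=150)                      # stand-in for model residuals
print(portmanteau(resid, h=20))
```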

2 Stationary Processes

For forecasting in time, we need the series to have some properties that do not vary with time. Stationarity is the most useful assumption that makes forecasting possible.

2.1 Basic properties

Recall the definitions $\gamma(h) = \mathrm{Cov}(X_{t+h}, X_t)$, $\rho(h) = \gamma(h)/\gamma(0)$.

Elementary properties:
(i) $\gamma(0) \ge 0$.
(ii) $|\gamma(h)| \le \gamma(0)$ for all $h$ (because $\gamma(h)/\gamma(0) = \rho(h)$ and this is necessarily in the interval $[-1, 1]$).
(iii) $\gamma(h) = \gamma(-h)$ for all $h$.

Definition. If $\kappa$ is a real-valued function defined on the integers (in other words, $\kappa(h)$ is a well-defined real number for each $h = 0, \pm 1, \pm 2, \ldots$), we say $\kappa$ is non-negative definite if, for all $n \ge 1$ and all $a_1, \ldots, a_n$,

    $\sum_{i=1}^{n} \sum_{j=1}^{n} a_i a_j \kappa(i - j) \ge 0$.

Theorem. $\kappa$ is the ACVF of some stationary process if and only if $\kappa$ is an even function (i.e. $\kappa(h) = \kappa(-h)$ for all $h$) and $\kappa$ is non-negative definite.

We won't prove the whole theorem, but the "only if" half goes like this: if $\kappa = \gamma$ for $\gamma$ the ACVF of a process $X$, then it is necessarily an even function and

    $\sum_{i=1}^{n} \sum_{j=1}^{n} a_i a_j \kappa(i - j) = \mathrm{Var}\left(\sum_{i=1}^{n} a_i X_i\right) \ge 0$.

Example. Consider

    $\kappa(h) = 1$ if $h = 0$; $\rho$ if $h = \pm 1$; $0$ otherwise.

We claim that this is the ACVF of a stationary process if and only if $|\rho| \le \frac{1}{2}$. If $|\rho| \le \frac{1}{2}$, we solve the equation $\frac{\theta}{1+\theta^2} = \rho$ and verify that this has a real root with $|\theta| \le 1$: therefore, the process is MA(1) with this $\theta$. Conversely, suppose $\rho > \frac{1}{2}$. Consider the sequence $a_1 = 1$, $a_2 = -1$, $a_3 = 1$, etc. Then

    $\sum_{i=1}^{n} \sum_{j=1}^{n} a_i a_j \kappa(i - j) = n - 2(n-1)\rho \ge 0$.

Therefore $\rho \le \frac{n}{2(n-1)}$, and since this inequality must be true for all $n > 1$, $\rho \le \frac{1}{2}$. This is a contradiction.

For $\rho < -\frac{1}{2}$, we define $a_1 = a_2 = a_3 = \cdots = 1$ and derive a similar contradiction.

The way this result is used is as follows. Suppose we were observing a time series in which the sample ACF at lag 1 is in the range $(-\frac{1}{2}, \frac{1}{2})$, and all the higher-order ACFs were close to 0. Then it might be reasonable to assume the ACFs are exactly 0 for $|h| > 1$, treat the process as MA(1), and solve $\frac{\theta}{1+\theta^2} = \hat{\rho}_X(1)$ to get an estimate of $\theta$. However, if $\hat{\rho}_X(1) > \frac{1}{2}$ or $\hat{\rho}_X(1) < -\frac{1}{2}$, we know right away that this idea cannot succeed, because we would be trying to fit an ACF that violates the non-negative-definite condition. Therefore, in such a case, we would be forced to find an alternative time series model.

The MA($q$) process is defined by $X_t = Z_t + \theta_1 Z_{t-1} + \cdots + \theta_q Z_{t-q}$ where $Z_t \sim WN(0, \sigma^2)$. This is $q$-correlated in the sense that $\gamma(h) = 0$ for $|h| > q$. Conversely, any $q$-correlated covariance function is the covariance of some MA($q$) process.

2.2 Linear processes

$X_t$ is linear if it has a representation

    $X_t = \sum_{j=-\infty}^{\infty} \psi_j Z_{t-j}$    (13)

where $Z_t \sim WN(0, \sigma^2)$ and $\sum_j |\psi_j| < \infty$. Notation: $X = \psi(B)Z$ where $\psi(B) = \sum_{j=-\infty}^{\infty} \psi_j B^j$.

If $\psi_j = 0$ for all $j < 0$, we call the process causal. Although non-causal processes are well-defined mathematically, they are of very little practical interest, because for forecasting purposes they would imply that we need to know future values $Z_s$, $s > t$, in order to forecast a particular $X_t$. Therefore, in practice, we almost always restrict ourselves to causal processes.

Note: The condition $\sum |\psi_j| < \infty$ is a stronger condition than $\sum \psi_j^2 < \infty$, though for some purposes the latter condition is sufficient (or all that we have).

Proposition 2.2.1. Let $Y_t$ be any stationary process and set $X_t = \sum_j \psi_j Y_{t-j}$ where $\sum |\psi_j| < \infty$. Then $X_t$ is stationary with covariance function

    $\gamma_X(h) = \sum_{j=-\infty}^{\infty} \sum_{k=-\infty}^{\infty} \psi_j \psi_k \gamma_Y(h + k - j)$.    (14)

In particular, if $Y_t$ is white noise,

    $\gamma_X(h) = \sigma^2 \sum_{j=-\infty}^{\infty} \psi_j \psi_{j-h}$.    (15)
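The following minimal sketch evaluates the white-noise case (15) for a process with finitely many nonzero causal weights, and checks it against the MA(1) formula (9); the function name and parameter values are my own.

```python
# A minimal sketch of equation (15): ACVF of a causal linear process with
# finitely many psi weights, checked against the MA(1) ACVF.
import numpy as np

def linear_process_acvf(psi, sigma2, max_lag):
    """gamma(h) = sigma^2 * sum_j psi_j psi_{j-h}, for psi = (psi_0, psi_1, ...)."""
    psi = np.asarray(psi, dtype=float)
    gamma = []
    for h in range(max_lag + 1):
        overlap = psi[h:] @ psi[:len(psi) - h] if h < len(psi) else 0.0
        gamma.append(sigma2 * overlap)
    return np.array(gamma)

theta, sigma2 = 0.6, 2.0
gamma = linear_process_acvf([1.0, theta], sigma2, max_lag=3)
print(gamma)                                  # [sigma2*(1+theta^2), sigma2*theta, 0, 0]
print(np.isclose(gamma[0], sigma2 * (1 + theta**2)),
      np.isclose(gamma[1], sigma2 * theta))
```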

2.3 The AR(1) process

The stationary AR(1) process $X_t - \phi X_{t-1} = Z_t$ is equivalent to

    $X_t = \sum_{j=0}^{\infty} \phi^j Z_{t-j}$    (16)

provided $|\phi| < 1$.

Proof of (16). First recall an earlier result, (12): if $|\phi| < 1$, the stationary AR(1) process has variance $\frac{\sigma^2}{1-\phi^2}$. For now, the only part of this result that we shall use is that the variance is finite. We write

    $X_t = Z_t + \phi X_{t-1} = Z_t + \phi Z_{t-1} + \phi^2 X_{t-2} = \cdots = Z_t + \phi Z_{t-1} + \phi^2 Z_{t-2} + \cdots + \phi^M Z_{t-M} + \phi^{M+1} X_{t-M-1}$

for any $M > 1$. Now let $M \to \infty$. Then

    $E\left(X_t - \sum_{j=0}^{M} \phi^j Z_{t-j}\right)^2 = \phi^{2M+2} E\left(X_{t-M-1}^2\right) \to 0$    (17)

where the last statement obviously uses the fact that $|\phi| < 1$. But (17) is the definition of convergence of an infinite sum of random variables; in other words, it's equivalent to (16). [Note: strictly speaking this is only one form of convergence of random variables, known as convergence in mean square or $L^2$ convergence. However, we won't be making distinctions like that anywhere in this course.]

For $|\phi| > 1$, a theoretical solution in the form (13) exists (see p. 54 of the text) but is not causal, so this result is of little use in practice. The condition $|\phi| < 1$ is known as the causality condition or sometimes, more simply, the stationarity condition.

2.4 The MA(1) process

The MA(1) process is defined by

    $X_t = Z_t + \theta Z_{t-1}$    (18)

where $Z_t \sim WN(0, \sigma^2)$.

At first sight there is no need for any restriction on $\theta$, because the process (18) cannot blow up (unlike AR(1) with $|\phi| > 1$) and the process is well-defined for any $\theta$. Nevertheless there are two issues, (a) identifiability and (b) invertibility, that we shall discuss here.

(a) Identifiability

Recall that $\gamma_X(0) = \sigma^2(1 + \theta^2)$, $\gamma_X(1) = \sigma^2\theta$ by (9). Therefore $\rho_X(1) = \frac{\theta}{1+\theta^2}$. Suppose we replace $\theta$ by $1/\theta$. Then $\rho_X(1)$ becomes $\frac{1/\theta}{1 + 1/\theta^2} = \frac{\theta}{1+\theta^2}$; in other words, $\rho_X(1)$ is unchanged by the transformation $\theta \to 1/\theta$. But $\rho_X(1)$ is the only parameter by which we can learn anything of the dependence structure of the time series (recall that $\rho_X(h) = 0$ for all $|h| > 1$). Therefore, the process in which $\theta$ is replaced by $1/\theta$ in (18) is identical to (18) itself: the process is unidentifiable in the sense that one cannot distinguish these two cases. However, a satisfactory way to resolve this problem is to restrict attention to $\theta$ such that $|\theta| \le 1$. This is therefore called the identifiability condition.

(b) Invertibility

For an MA(1), or indeed any MA($q$) process, the existence of the representation (13) is not in doubt: just set $\psi_0 = 1$, $\psi_j = \theta_j$ for $1 \le j \le q$, $\psi_j = 0$ for $j > q$. However, for many purposes (in particular, both forecasting and estimation) it is very useful to have available an inverse representation, of the form

    $Z_t = \sum_{j=0}^{\infty} \pi_j X_{t-j}$    (19)

for suitable constants $\pi_j$, $0 \le j < \infty$. In the case of MA(1), we write

    $Z_t = X_t - \theta Z_{t-1} = X_t - \theta X_{t-1} + \theta^2 Z_{t-2} = \cdots = X_t - \theta X_{t-1} + \theta^2 X_{t-2} - \cdots + (-\theta)^M X_{t-M} + (-\theta)^{M+1} Z_{t-M-1}$

for any $M > 1$. Now let $M \to \infty$: by the same arguments as used for AR(1), we have

    $Z_t = \sum_{j=0}^{\infty} (-\theta)^j X_{t-j}$,    (20)

which is convergent if and only if $|\theta| < 1$.

Therefore, provided $|\theta| < 1$, we have the existence of an inverse representation (19) with $\pi_j = (-\theta)^j$, $j \ge 0$. However, if $|\theta| \ge 1$, no such representation exists. The condition $|\theta| < 1$ is known as the invertibility condition for MA(1). Since this includes the earlier identifiability condition, we generally restrict the class of MA(1) processes to those for which $|\theta| < 1$, because in that case the process is both identifiable and invertible.

However, it will also turn out that there are some cases where this condition is not so easy to apply in practice. With some series, no matter how hard we try, we keep coming back to the model $X_t = Z_t - Z_{t-1}$ (MA(1) with $\theta = -1$), which is equivalent to a random walk model for the undifferenced series. This is an example of the unit root problem, which is very common with economic time series. Later in the course, we shall discuss ways to identify the unit root problem and what to do in such cases.
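Tying this back to the example of section 2.1, here is a minimal sketch of recovering the identifiable/invertible MA(1) coefficient from a lag-1 autocorrelation; the function name is my own, and the value 0.4 is just an illustration.

```python
# A minimal sketch: solve theta/(1+theta^2) = rho for the root with |theta| <= 1.
# There is no solution when |rho| > 1/2 (the non-negative-definiteness constraint).
import numpy as np

def ma1_theta_from_rho(rho):
    if abs(rho) > 0.5:
        raise ValueError("|rho| > 1/2: no MA(1) model has this lag-1 autocorrelation")
    if rho == 0.0:
        return 0.0
    # theta/(1+theta^2) = rho  <=>  rho*theta^2 - theta + rho = 0
    disc = np.sqrt(1.0 - 4.0 * rho**2)
    return (1.0 - disc) / (2.0 * rho)      # the root with |theta| <= 1

theta = ma1_theta_from_rho(0.4)
print(theta, theta / (1 + theta**2))       # second value recovers 0.4
```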

2.5 ARMA(1,1)

Now consider the ARMA(1,1) process

    $X_t - \phi X_{t-1} = Z_t + \theta Z_{t-1}$    (21)

where $Z_t \sim WN(0, \sigma^2)$ as usual. Then

    $X_t = Z_t + \theta Z_{t-1} + \phi X_{t-1}$
        $= Z_t + \theta Z_{t-1} + \phi(Z_{t-1} + \theta Z_{t-2} + \phi X_{t-2})$
        $= Z_t + (\theta + \phi)Z_{t-1} + \phi\theta Z_{t-2} + \phi^2(Z_{t-2} + \theta Z_{t-3} + \phi X_{t-3})$    (22)

and so on. We could continue along these lines to get a general expansion for $X_t$ in terms of $Z_s$, $s \le t$, but it is quicker and more intuitive to proceed by the following formal argument.

Rewrite (21) in the form $(1 - \phi B)X_t = (1 + \theta B)Z_t$ with $B$ the backshift operator. Rewrite this as

    $X_t = \frac{1 + \theta B}{1 - \phi B} Z_t$.

Now let's manipulate $\frac{1+\theta B}{1-\phi B}$ as if $B$ were a regular algebraic variable. In particular, expand $\frac{1}{1-\phi B}$ as the geometric series $\sum_{j=0}^{\infty} \phi^j B^j$. This leads to

    $X_t = (1 + \theta B)(1 + \phi B + \phi^2 B^2 + \phi^3 B^3 + \cdots)Z_t$
        $= \{1 + (\theta + \phi)B + (\theta + \phi)\phi B^2 + (\theta + \phi)\phi^2 B^3 + \cdots\}Z_t$
        $= Z_t + (\theta + \phi)\sum_{j=1}^{\infty} \phi^{j-1} Z_{t-j}$    (23)

provided $|\phi| < 1$.

Note, however, that the argument leading to (23) is only a shorthand way of writing (22): the rigorous proof is still to take (22) as far as the term in $Z_{t-M}$, then show that the mean squared error of the remainder terms tends to 0 as $M \to \infty$, exactly as we did for AR(1). The conclusion, however, is this: the ARMA(1,1) process (21) has a representation (13), with

    $\psi_j = 0$ if $j < 0$; $1$ if $j = 0$; $(\theta + \phi)\phi^{j-1}$ if $j \ge 1$,    (24)

provided $|\phi| < 1$.

Similarly, we can also write

    $Z_t = \frac{1 - \phi B}{1 + \theta B} X_t = (1 - \phi B)\sum_{j=0}^{\infty} (-\theta)^j B^j X_t$
        $= (1 - \phi B)(1 - \theta B + \theta^2 B^2 - \theta^3 B^3 + \cdots)X_t$
        $= \{1 - (\theta + \phi)B + (\theta + \phi)\theta B^2 - (\theta + \phi)\theta^2 B^3 + \cdots\}X_t$,

which is of the form (19) with

    $\pi_j = 1$ if $j = 0$; $-(\theta + \phi)(-\theta)^{j-1}$ if $j \ge 1$,    (25)

provided $|\theta| < 1$.

However, there is one further condition. Neither (24) nor (25) makes sense if $\theta + \phi = 0$: in that case the process (21) is equivalent to $X_t = Z_t$, and the parameter $\theta = -\phi$ is not identifiable. Therefore, it is usual to require that $\theta + \phi \ne 0$.

In summary, the conditions required for an ARMA(1,1) process are:

(a) Condition for a causal stationary process: $|\phi| < 1$.
(b) Condition for an invertible process: $|\theta| < 1$.
(c) Identifiability condition: $\theta + \phi \ne 0$.

2.6 Estimation of sample mean and ACF

Assume $X_t$ is stationary with mean $E(X_t) = \mu$, and that we are given observations $X_1, \ldots, X_n$. The sample mean, $\bar{X}_n = \frac{1}{n}\sum_{i=1}^{n} X_i$, is the obvious point estimator of $\mu$, but what about an interval estimator? The key step is to find a formula for the variance of $\bar{X}_n$. We have

    $\mathrm{Var}(\bar{X}_n) = \frac{1}{n^2} \sum_{i=1}^{n} \sum_{j=1}^{n} \mathrm{Cov}(X_i, X_j)$
        $= \frac{1}{n^2} \sum_{i=1}^{n} \sum_{h=1-i}^{n-i} \mathrm{Cov}(X_i, X_{i+h})$
        $= \frac{1}{n^2} \sum_{h=-n+1}^{n-1}\; \sum_{i=\max(1-h,\,1)}^{\min(n,\,n-h)} \gamma_X(h)$
        $= \frac{1}{n^2} \sum_{h=-n+1}^{n-1} \{1 + \min(n, n-h) - \max(1-h, 1)\}\,\gamma_X(h)$
        $= \frac{1}{n} \sum_{h=-n+1}^{n-1} \frac{n - |h|}{n}\,\gamma_X(h)$.

For $n$ large, we have $\frac{n-|h|}{n} \approx 1$ for any fixed finite $h$, and the range of $h$ extends towards $\pm\infty$, so the formula becomes

    $\mathrm{Var}(\bar{X}_n) \approx \frac{1}{n} \sum_{h=-\infty}^{\infty} \gamma_X(h)$.

This is summarized in the following Proposition: $E(\bar{X}_n) = \mu$, $\mathrm{Var}(\bar{X}_n) \approx \frac{\nu}{n}$ where

    $\nu = \sum_{h=-\infty}^{\infty} \gamma_X(h)$.    (26)

Also, the sampling distribution of $\frac{\bar{X}_n - \mu}{\sqrt{\nu/n}}$ is approximately $N(0, 1)$ (the central limit theorem for stationary time series).

Example 1: Consider the MA(1) process, $X_t = Z_t + \theta Z_{t-1}$ with $Z_t \sim WN(0, \sigma^2)$. In this case

    $\gamma_X(h) = (1 + \theta^2)\sigma^2$ if $h = 0$; $\theta\sigma^2$ if $h = \pm 1$; $0$ for all other $h$.

So $\nu = (1 + \theta^2 + 2\theta)\sigma^2 = (1 + \theta)^2\sigma^2$. A 95% confidence interval for $\mu$ would be $\bar{x} \pm 1.96(1 + \theta)\sigma/\sqrt{n}$.

Example 2: Consider the AR(1) process, $X_t = \phi X_{t-1} + Z_t$ with $|\phi| < 1$. We have already seen $\gamma(h) = \frac{\sigma^2 \phi^{|h|}}{1 - \phi^2}$ (combining (11) and (12)). Therefore

    $\sum_{h=1}^{\infty} \gamma(h) = \frac{\sigma^2}{1-\phi^2}(\phi + \phi^2 + \phi^3 + \cdots) = \frac{\sigma^2}{1-\phi^2}\cdot\frac{\phi}{1-\phi}$.

The sum $\sum_{h=-\infty}^{-1} \gamma(h)$ is the same, so we add in $\gamma(0)$ to get

    $\nu = \frac{\sigma^2}{1-\phi^2}\left(1 + \frac{2\phi}{1-\phi}\right) = \frac{\sigma^2}{1-\phi^2}\cdot\frac{1+\phi}{1-\phi} = \frac{\sigma^2}{(1-\phi)^2}$.

N.B.: Don't confuse $\frac{\sigma^2}{1-\phi^2}$ (the formula for $\gamma(0)$) with $\frac{\sigma^2}{(1-\phi)^2}$ (the formula for $\nu$).

Application to a confidence interval for $\mu$. The following example illustrates the general method. Suppose we are given that the process is AR(1) with $\sigma^2 = 2$, $\phi = 0.75$, and that $\bar{x} = 0.95$ based on $n = 54$ observations. What is the 95% confidence interval, and is this statistically significant evidence that $\mu \ne 0$?

Solution: In this case $\nu = \frac{2}{(1-0.75)^2} = 32$, so the approximate standard error of $\bar{x}$ is $\sqrt{\nu/n} = \sqrt{32/54} = 0.77$. The confidence interval is $0.95 \pm 1.96 \times 0.77 = 0.95 \pm 1.51 = (-0.56, 2.46)$. Since 0 is inside the confidence interval, the data do not provide statistically significant evidence that $\mu \ne 0$. (A code sketch of this calculation follows.)
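Here is a minimal sketch reproducing the confidence-interval calculation just given; the function name is my own, and the inputs are exactly the values of the worked example.

```python
# A minimal sketch: approximate 95% CI for the mean of an AR(1) series,
# using nu = sigma^2 / (1 - phi)^2 and se = sqrt(nu / n).
import numpy as np

def ar1_mean_ci(xbar, n, phi, sigma2, z=1.96):
    nu = sigma2 / (1.0 - phi) ** 2          # nu = sum_h gamma(h) for AR(1)
    se = np.sqrt(nu / n)                    # approximate standard error of xbar
    return (xbar - z * se, xbar + z * se), se

ci, se = ar1_mean_ci(xbar=0.95, n=54, phi=0.75, sigma2=2.0)
print(round(se, 2), tuple(round(c, 2) for c in ci))   # 0.77 (-0.56, 2.46)
```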

2.7 Estimation of autocovariances and autocorrelations

[Note: I didn't actually cover this in class, so you can omit this section if you like. However, I'm keeping the material here for completeness. It is all covered in more detail in the text.]

Based on data $x_1, \ldots, x_n$, the standard point estimates are

    $\hat{\gamma}(h) = \frac{1}{n}\sum_{t=1}^{n-|h|} (x_{t+|h|} - \bar{x}_n)(x_t - \bar{x}_n)$,
    $\hat{\rho}(h) = \hat{\gamma}(h)/\hat{\gamma}(0)$.

In practice we should only estimate these quantities for $h$ much less than $n$; a practical guideline is $h \le n/4$. We have the approximate distributional result

    $\hat{\rho} \approx N\left(\rho, \frac{W}{n}\right)$,

or in other words, the vector $(\hat{\rho}(1), \hat{\rho}(2), \ldots, \hat{\rho}(h))$ has approximately a (multivariate) normal distribution with mean $(\rho(1), \rho(2), \ldots, \rho(h))$ and covariances of the form $\mathrm{Cov}(\hat{\rho}_i, \hat{\rho}_j) \approx \frac{w_{ij}}{n}$, where the $w_{ij}$'s are given by Bartlett's formula (see p. 61 of the text). However, evaluating Bartlett's formula is rather complicated, so we shall not use it in practice.

2.8 Prediction based on the infinite past

Notation: $P_t X_{t+h}$ means the optimal predictor of $X_{t+h}$ (for $h \ge 1$), given observations $X_1, \ldots, X_t$. $\tilde{P}_t X_{t+h}$ means the optimal predictor of $X_{t+h}$ (for $h \ge 1$), given observations $X_s$, $-\infty < s \le t$.

The second case (prediction based on the infinite past) actually leads to a simpler and more elegant solution, which is often used as an approximation to the first case, so we consider it first. We assume a causal, invertible process that has the equivalent representations

    $X_t = \sum_{j=0}^{\infty} \psi_j Z_{t-j}$,    (27)
    $Z_t = \sum_{j=0}^{\infty} \pi_j X_{t-j}$,    (28)

with $Z_t \sim WN(0, \sigma^2)$, $\psi_0 = \pi_0 = 1$, and $\sum |\psi_j| < \infty$, $\sum |\pi_j| < \infty$. Define

    $\tilde{P}_t X_{t+1} = -\sum_{j=1}^{\infty} \pi_j X_{t+1-j}$    (29)

and recursively for $h > 1$,

    $\tilde{P}_t X_{t+h} = -\sum_{j=1}^{h-1} \pi_j \tilde{P}_t X_{t+h-j} - \sum_{j=h}^{\infty} \pi_j X_{t+h-j}$.    (30)

Theorem. $\tilde{P}_t X_{t+h}$ is the best linear predictor of $X_{t+h}$ given $X_s$, $s \le t$, and

    $X_{t+h} - \tilde{P}_t X_{t+h} = \sum_{j=0}^{h-1} \psi_j Z_{t+h-j}$.    (31)

Here "best linear" means the predictor that minimizes the mean squared error, $E(X_{t+h} - \tilde{P}_t X_{t+h})^2$, among all predictors that are linear combinations of $X_s$, $s \le t$. Moreover, (31) shows that the resulting mean squared prediction error is

    $\sigma^2 \sum_{j=0}^{h-1} \psi_j^2$.    (32)

Proof. The proof is by induction. First we do it for the case $h = 1$, then we extend it to any $h \ge 1$. First note that

    $X_{t+1} = Z_{t+1} - \sum_{j=1}^{\infty} \pi_j X_{t+1-j} = Z_{t+1} + \tilde{P}_t X_{t+1}$,

with $\tilde{P}_t X_{t+1}$ defined by (29), so (31) is true in this case.

To prove optimality, suppose $\tilde{Q}_t X_{t+1}$ is some other linear predictor of $X_{t+1}$ based on $X_s$, $s \le t$. In that case we can write $\tilde{P}_t X_{t+1} - \tilde{Q}_t X_{t+1} = \sum_{j=0}^{\infty} a_j X_{t-j}$ for some coefficients $a_j$. Then

    $X_{t+1} - \tilde{Q}_t X_{t+1} = X_{t+1} - \tilde{P}_t X_{t+1} + \sum_{j=0}^{\infty} a_j X_{t-j} = Z_{t+1} + \sum_{j=0}^{\infty} a_j X_{t-j}$

and so

    $E\left(X_{t+1} - \tilde{Q}_t X_{t+1}\right)^2 = E(Z_{t+1}^2) + E\left(\sum_j a_j X_{t-j}\right)^2 + 2E\left(Z_{t+1}\sum_j a_j X_{t-j}\right)$.

But (this is the key to the whole proof) the third term has expectation 0, because each of the terms $Z_{t+1} X_{t-j}$ for $j \ge 0$ has expectation 0. This is true because $X_t$ is a causal process: in other words, $X_{t-j}$ is a linear combination of $Z_s$, $s \le t-j$, and therefore each term is uncorrelated with $Z_{t+1}$.

Therefore

    $E\left(X_{t+1} - \tilde{Q}_t X_{t+1}\right)^2 = E(Z_{t+1}^2) + E\left(\sum_j a_j X_{t-j}\right)^2$.

The first term is equal to $\sigma^2$, while the second term is always $\ge 0$, and can be set $= 0$ if we choose $a_j = 0$ for all $j$. Therefore this is the optimal case: we always have $E\left(X_{t+1} - \tilde{Q}_t X_{t+1}\right)^2 \ge \sigma^2$, and equality is attained in the case $\tilde{Q}_t X_{t+1} = \tilde{P}_t X_{t+1}$.

Now we do the case $h > 1$. We assume $X_{t+j} - \tilde{P}_t X_{t+j} = \sum_{k=0}^{j-1} \psi_k Z_{t+j-k}$ for $j = 0, 1, 2, \ldots, h-1$, then we deduce it for $j = h$. Then the principle of induction will ensure that (31) is true for every $h \ge 1$. We therefore calculate

    $X_{t+h} - \tilde{P}_t X_{t+h} = Z_{t+h} - \sum_{j=1}^{\infty} \pi_j X_{t+h-j} + \sum_{j=1}^{h-1} \pi_j \tilde{P}_t X_{t+h-j} + \sum_{j=h}^{\infty} \pi_j X_{t+h-j}$
        $= Z_{t+h} - \sum_{j=1}^{h-1} \pi_j\left(X_{t+h-j} - \tilde{P}_t X_{t+h-j}\right)$
        $= Z_{t+h} - \sum_{j=1}^{h-1} \pi_j \sum_{k=0}^{h-j-1} \psi_k Z_{t+h-j-k}$
        $= Z_{t+h} - \sum_{j=1}^{h-1} \pi_j \sum_{r=j}^{h-1} \psi_{r-j} Z_{t+h-r}$
        $= Z_{t+h} - \sum_{r=1}^{h-1}\left(\sum_{j=1}^{r} \pi_j \psi_{r-j}\right) Z_{t+h-r}$.    (33)

But from the identity

    $\left(\sum_{j=0}^{\infty} \pi_j z^j\right)\left(\sum_{k=0}^{\infty} \psi_k z^k\right) = 1$,

the coefficient of $z^r$ for $r \ge 1$ is $\sum_{j=0}^{r} \pi_j \psi_{r-j} = 0$, and hence

    $\sum_{j=1}^{r} \pi_j \psi_{r-j} = -\pi_0 \psi_r = -\psi_r$.    (34)

Substituting (34) in (33) yields (31).

We conclude the proof by showing that $\tilde{P}_t X_{t+h}$ is the optimal predictor when $h > 1$. Actually, this is the same proof as when $h = 1$. Suppose $\tilde{Q}_t X_{t+h}$ is some other predictor that is also a linear function of $X_s$, $s \le t$. Then $\tilde{P}_t X_{t+h} - \tilde{Q}_t X_{t+h} = \sum_{k=0}^{\infty} a_k X_{t-k}$ for some coefficients $a_k$. Hence

    $E\left(X_{t+h} - \tilde{Q}_t X_{t+h}\right)^2 = E\left(X_{t+h} - \tilde{P}_t X_{t+h} + \sum_{k=0}^{\infty} a_k X_{t-k}\right)^2$
        $= E\left(\sum_{j=0}^{h-1} \psi_j Z_{t+h-j} + \sum_{k=0}^{\infty} a_k X_{t-k}\right)^2$
        $= E\left(\sum_{j=0}^{h-1} \psi_j Z_{t+h-j}\right)^2 + E\left(\sum_{k=0}^{\infty} a_k X_{t-k}\right)^2 + 2E\left(\sum_{j=0}^{h-1} \psi_j Z_{t+h-j}\sum_{k=0}^{\infty} a_k X_{t-k}\right)$.

But the third term is 0 because each $E(Z_{t+h-j} X_{t-k}) = 0$ when $0 \le j \le h-1$, $k \ge 0$. Therefore

    $E\left(X_{t+h} - \tilde{Q}_t X_{t+h}\right)^2 \ge E\left(X_{t+h} - \tilde{P}_t X_{t+h}\right)^2$

with equality only if $\tilde{Q}_t = \tilde{P}_t$. This completes the proof of the theorem.

Example 1. Consider the AR(1) process. In this case,

    $X_t = \sum_{j=0}^{\infty} \phi^j Z_{t-j}$, $\quad Z_t = X_t - \phi X_{t-1}$;

in particular, $\pi_0 = 1$, $\pi_1 = -\phi$. So $\tilde{P}_t X_{t+1} = -\pi_1 X_t = \phi X_t$, $\tilde{P}_t X_{t+2} = -\pi_1 \tilde{P}_t X_{t+1} = \phi^2 X_t$, $\tilde{P}_t X_{t+3} = -\pi_1 \tilde{P}_t X_{t+2} = \phi^3 X_t$, etc. The mean squared prediction errors (MSPEs) are

    for $h = 1$: $\sigma^2$,
    for $h = 2$: $\sigma^2(1 + \phi^2)$,
    for $h = 3$: $\sigma^2(1 + \phi^2 + \phi^4)$,

and so on.

Example 2. Consider the MA(1) process with $|\theta| < 1$, for which

    $X_t = Z_t + \theta Z_{t-1}$, $\quad Z_t = X_t - \theta X_{t-1} + \theta^2 X_{t-2} - \theta^3 X_{t-3} + \cdots$

So $\pi_0 = 1$, $\pi_1 = -\theta$, $\pi_2 = \theta^2$, $\pi_3 = -\theta^3$, etc. Thus

    $\tilde{P}_t X_{t+1} = -\pi_1 X_t - \pi_2 X_{t-1} - \pi_3 X_{t-2} - \cdots = \theta X_t - \theta^2 X_{t-1} + \theta^3 X_{t-2} - \cdots$,
    $\tilde{P}_t X_{t+2} = -\pi_1 \tilde{P}_t X_{t+1} - \pi_2 X_t - \pi_3 X_{t-1} - \pi_4 X_{t-2} - \cdots$
        $= (\theta^2 X_t - \theta^3 X_{t-1} + \theta^4 X_{t-2} - \cdots) - \theta^2 X_t + \theta^3 X_{t-1} - \theta^4 X_{t-2} + \cdots = 0$,

and in fact $\tilde{P}_t X_{t+h} = 0$ for any $h > 1$. This makes sense when you think about it: for an MA(1) process with $h > 1$, $X_{t+h}$ is uncorrelated with $X_s$, $s \le t$, so the optimal predictor should be 0. The MSPEs in this case are $\sigma^2$ for $h = 1$ and $\sigma^2(1 + \theta^2)$ for any $h > 1$.
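Here is a minimal sketch of the closed-form results in Examples 1 and 2; the function names and parameter values are my own, but the formulas are exactly those derived above.

```python
# A minimal sketch: h-step infinite-past forecasts and MSPEs for AR(1) and MA(1).
import numpy as np

def ar1_forecast(x_t, phi, sigma2, h):
    """Return (P_t X_{t+h}, MSPE) = (phi^h * x_t, sigma^2 * sum_{j<h} phi^{2j})."""
    pred = (phi ** h) * x_t
    mspe = sigma2 * np.sum(phi ** (2 * np.arange(h)))
    return pred, mspe

def ma1_mspe(theta, sigma2, h):
    """MSPE is sigma^2 for h = 1 and sigma^2*(1 + theta^2) for h > 1."""
    return sigma2 if h == 1 else sigma2 * (1 + theta**2)

phi, sigma2 = 0.75, 2.0
for h in (1, 2, 3):
    print(h, ar1_forecast(x_t=1.3, phi=phi, sigma2=sigma2, h=h))
print(ma1_mspe(theta=0.6, sigma2=2.0, h=1), ma1_mspe(theta=0.6, sigma2=2.0, h=5))
```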

2.9 Prediction based on the finite past

Suppose we are trying to predict a variable $Y$ based on observations $W_1, \ldots, W_n$, where $E(Y) = \mu_0$, $E(W_i) = \mu_i$. We compute the matrix $\Gamma = (\gamma_{ij})$ where $\gamma_{ij} = \mathrm{Cov}(W_i, W_j)$, and the vector $\gamma = (\gamma_i)$, where $\gamma_i = \mathrm{Cov}(W_i, Y)$. Consider a predictor $\hat{Y}$ that satisfies

    $\hat{Y} - \mu_0 = a_0 + \sum_{i=1}^{n} a_i(W_i - \mu_i)$.    (35)

The objective is to choose the weights $a_0, a_1, \ldots, a_n$ to minimize the mean squared prediction error (MSPE) $E\{(\hat{Y} - Y)^2\}$. The idea of writing the formula in the form of (35) (subtracting off the means $\mu_i$, $i = 0, 1, \ldots, n$) is that we immediately suspect that $a_0 = 0$: this will indeed turn out to be the case. However, it's not at all obvious how to choose $a_1, \ldots, a_n$.

Define

    $S = E\{(\hat{Y} - Y)^2\} = E\left[\left\{a_0 + \sum_{i=1}^{n} a_i(W_i - \mu_i) - (Y - \mu_0)\right\}^2\right]$.

So, by differentiating term by term,

    $\frac{\partial S}{\partial a_0} = 2E\left[a_0 + \sum_{i=1}^{n} a_i(W_i - \mu_i) - (Y - \mu_0)\right] = 2a_0$,

which is 0 if and only if $a_0 = 0$, confirming that choice. (It is obvious that $a_0 = 0$ is a minimum rather than some other stationary point; I won't elaborate on that.) Henceforth we set $a_0 = 0$.

For any index $j$ in the range $1, 2, \ldots, n$,

    $\frac{\partial S}{\partial a_j} = 2E\left[(W_j - \mu_j)\left\{a_0 + \sum_{i=1}^{n} a_i(W_i - \mu_i) - (Y - \mu_0)\right\}\right] = 2\left(\sum_{i=1}^{n} a_i\gamma_{ij} - \gamma_j\right)$.

Thus, the solution satisfies $\gamma_j = \sum_i a_i\gamma_{ij}$ for $j = 1, \ldots, n$, or in matrix notation

    $\gamma = \Gamma a$    (36)

with the vector $\gamma$ and matrix $\Gamma$ defined as above, and $a$ being the column vector of the $a_j$'s, i.e. $a = (a_1, \ldots, a_n)^T$.

In nearly all cases of interest $\Gamma$ is non-singular, so (36) is equivalent to the direct solution $a = \Gamma^{-1}\gamma$. If we expand $S$ itself, we also have

    $S = \sum_{i=1}^{n}\sum_{j=1}^{n} a_i a_j\gamma_{ij} - 2\sum_{i=1}^{n} a_i\gamma_i + \mathrm{Var}(Y) = a^T\Gamma a - 2a^T\gamma + \mathrm{Var}(Y) = \mathrm{Var}(Y) - a^T\gamma$.    (37)

Therefore, (37) is an expression for the MSPE of the final predictor.

Example 1: Prediction of AR(1). Suppose $W_1, \ldots, W_n$ are equated with $X_1, \ldots, X_n$, and $Y = X_{n+1}$. Recall $\gamma_X(h) = \frac{\sigma^2\phi^{|h|}}{1-\phi^2}$. So

    $\Gamma = \frac{\sigma^2}{1-\phi^2}\begin{pmatrix} 1 & \phi & \phi^2 & \cdots & \phi^{n-1} \\ \phi & 1 & \phi & \cdots & \phi^{n-2} \\ \vdots & & & \ddots & \vdots \\ \phi^{n-1} & \phi^{n-2} & \cdots & \phi & 1 \end{pmatrix}$, $\quad \gamma = \frac{\sigma^2}{1-\phi^2}\begin{pmatrix} \phi^n \\ \phi^{n-1} \\ \vdots \\ \phi \end{pmatrix}$.    (38)

The obvious guess is $\hat{X}_{n+1} = \phi X_n$, which translates to

    $a = (0, 0, \ldots, 0, \phi)^T$.    (39)

So we have to verify that if $a$ is defined by (39), and $\gamma$ and $\Gamma$ are defined by (38), then $\Gamma a = \gamma$. This is easily verified directly. The MSPE in this case, by (37), is

    $\mathrm{Var}(Y) - a^T\gamma = \frac{\sigma^2}{1-\phi^2} - \frac{\sigma^2\phi^2}{1-\phi^2} = \sigma^2$,

which agrees with the result derived earlier for the case of prediction from the infinite past.

Example 2: Predicting a missing value. Again we assume the basic process is AR(1), and assume we know $X_1$ and $X_3$ but want to predict $X_2$. This is a simple example of the problem of filling in a missing value in a time series, a common problem in many datasets with physical observations. For this problem we set $W = (X_1, X_3)^T$ and $Y = X_2$. Then

    $\Gamma = \frac{\sigma^2}{1-\phi^2}\begin{pmatrix} 1 & \phi^2 \\ \phi^2 & 1 \end{pmatrix}$, $\quad \gamma = \frac{\sigma^2}{1-\phi^2}\begin{pmatrix} \phi \\ \phi \end{pmatrix}$.

Solve

    $\begin{pmatrix} 1 & \phi^2 \\ \phi^2 & 1 \end{pmatrix}\begin{pmatrix} a_1 \\ a_2 \end{pmatrix} = \begin{pmatrix} \phi \\ \phi \end{pmatrix}$.

This solves to $a_1 = a_2 = \frac{\phi}{1+\phi^2}$, or in other words

    $\hat{X}_2 = \frac{\phi}{1+\phi^2}(X_1 + X_3)$.

In this case we have $a^T\gamma = \frac{\sigma^2}{1-\phi^2}\cdot\frac{2\phi^2}{1+\phi^2}$, so the MSPE is

    $\mathrm{Var}(X_2) - a^T\gamma = \frac{\sigma^2}{1-\phi^2}\left(1 - \frac{2\phi^2}{1+\phi^2}\right) = \frac{\sigma^2}{1+\phi^2}$.

(A code sketch of this calculation follows at the end of these notes.)

Problems

[These are variants on the same theme and examples of possible problems for the midterm. I suggest you figure them out for yourself before looking up the solutions that I've posted as a separate file.]

1. Consider the MA(1) process, $X_t = Z_t + \theta Z_{t-1}$, $Z_t \sim WN(0, \sigma^2)$. Find the optimal predictor of $X_3$ given $X_1$ and $X_2$, and determine its MSPE.

2. [This one is a bit harder than I'd probably put on the midterm, but please have a go at it before looking up the solution!] For the AR(1) process, find the optimal predictor of $X_3$ given $X_1, X_2, X_4, X_5$, and determine its MSPE.

Summary Comments

The text goes through a lot of material about recursive solutions to the prediction equations (Durbin-Levinson method, etc.) that I've decided to omit this class (so you're not expected to know that material). On the other hand, I decided to do more about the case of prediction from the infinite past. The proof on pp. 20-22 is included here (and was covered in class) because I think it's important that you should have some idea where these results come from and not just present them as some magic formula. However, I'm not expecting you to learn the proof. What you should be able to do is to use the result to solve problems, for both the infinite-past and finite-past versions of the problem, and the questions that were set for homework and the examples given here are intended to provide practice at doing that.
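Finally, here is a minimal sketch of the finite-past prediction equations $\Gamma a = \gamma$ applied to the missing-value problem of Example 2; the helper names and the parameter values are my own. Changing `lags_obs` to `[1, 2, 4, 5]` and `lag_target` to 3 gives a numerical check for Problem 2.

```python
# A minimal sketch: solve Gamma a = gamma for the AR(1) missing-value problem
# (predict X_2 from X_1 and X_3) and compare with the closed-form answer.
import numpy as np

def ar1_acvf(h, phi, sigma2):
    return sigma2 * phi ** abs(h) / (1.0 - phi**2)

def best_linear_predictor(lags_obs, lag_target, phi, sigma2):
    """Return the weights a = Gamma^{-1} gamma and the MSPE Var(Y) - a^T gamma."""
    Gamma = np.array([[ar1_acvf(i - j, phi, sigma2) for j in lags_obs] for i in lags_obs])
    gam = np.array([ar1_acvf(i - lag_target, phi, sigma2) for i in lags_obs])
    a = np.linalg.solve(Gamma, gam)
    mspe = ar1_acvf(0, phi, sigma2) - a @ gam
    return a, mspe

phi, sigma2 = 0.75, 2.0
a, mspe = best_linear_predictor(lags_obs=[1, 3], lag_target=2, phi=phi, sigma2=sigma2)
print(a, phi / (1 + phi**2))            # both weights equal phi/(1+phi^2)
print(mspe, sigma2 / (1 + phi**2))      # MSPE equals sigma^2/(1+phi^2)
```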


More information

Modeling and forecasting global mean temperature time series

Modeling and forecasting global mean temperature time series Modeling and forecasting global mean temperature time series April 22, 2018 Abstract: An ARIMA time series model was developed to analyze the yearly records of the change in global annual mean surface

More information

Midterm Suggested Solutions

Midterm Suggested Solutions CUHK Dept. of Economics Spring 2011 ECON 4120 Sung Y. Park Midterm Suggested Solutions Q1 (a) In time series, autocorrelation measures the correlation between y t and its lag y t τ. It is defined as. ρ(τ)

More information

Some Time-Series Models

Some Time-Series Models Some Time-Series Models Outline 1. Stochastic processes and their properties 2. Stationary processes 3. Some properties of the autocorrelation function 4. Some useful models Purely random processes, random

More information

Time Series: Theory and Methods

Time Series: Theory and Methods Peter J. Brockwell Richard A. Davis Time Series: Theory and Methods Second Edition With 124 Illustrations Springer Contents Preface to the Second Edition Preface to the First Edition vn ix CHAPTER 1 Stationary

More information

Gaussian processes. Basic Properties VAG002-

Gaussian processes. Basic Properties VAG002- Gaussian processes The class of Gaussian processes is one of the most widely used families of stochastic processes for modeling dependent data observed over time, or space, or time and space. The popularity

More information

Characteristics of Time Series

Characteristics of Time Series Characteristics of Time Series Al Nosedal University of Toronto January 12, 2016 Al Nosedal University of Toronto Characteristics of Time Series January 12, 2016 1 / 37 Signal and Noise In general, most

More information

3 Time Series Regression

3 Time Series Regression 3 Time Series Regression 3.1 Modelling Trend Using Regression Random Walk 2 0 2 4 6 8 Random Walk 0 2 4 6 8 0 10 20 30 40 50 60 (a) Time 0 10 20 30 40 50 60 (b) Time Random Walk 8 6 4 2 0 Random Walk 0

More information

Non-Stationary Time Series and Unit Root Testing

Non-Stationary Time Series and Unit Root Testing Econometrics II Non-Stationary Time Series and Unit Root Testing Morten Nyboe Tabor Course Outline: Non-Stationary Time Series and Unit Root Testing 1 Stationarity and Deviation from Stationarity Trend-Stationarity

More information

Ch 6. Model Specification. Time Series Analysis

Ch 6. Model Specification. Time Series Analysis We start to build ARIMA(p,d,q) models. The subjects include: 1 how to determine p, d, q for a given series (Chapter 6); 2 how to estimate the parameters (φ s and θ s) of a specific ARIMA(p,d,q) model (Chapter

More information

7. Forecasting with ARIMA models

7. Forecasting with ARIMA models 7. Forecasting with ARIMA models 309 Outline: Introduction The prediction equation of an ARIMA model Interpreting the predictions Variance of the predictions Forecast updating Measuring predictability

More information

Lecture 4a: ARMA Model

Lecture 4a: ARMA Model Lecture 4a: ARMA Model 1 2 Big Picture Most often our goal is to find a statistical model to describe real time series (estimation), and then predict the future (forecasting) One particularly popular model

More information

Introduction to Time Series Analysis. Lecture 11.

Introduction to Time Series Analysis. Lecture 11. Introduction to Time Series Analysis. Lecture 11. Peter Bartlett 1. Review: Time series modelling and forecasting 2. Parameter estimation 3. Maximum likelihood estimator 4. Yule-Walker estimation 5. Yule-Walker

More information

5 Transfer function modelling

5 Transfer function modelling MSc Further Time Series Analysis 5 Transfer function modelling 5.1 The model Consider the construction of a model for a time series (Y t ) whose values are influenced by the earlier values of a series

More information

Parameter estimation: ACVF of AR processes

Parameter estimation: ACVF of AR processes Parameter estimation: ACVF of AR processes Yule-Walker s for AR processes: a method of moments, i.e. µ = x and choose parameters so that γ(h) = ˆγ(h) (for h small ). 12 novembre 2013 1 / 8 Parameter estimation:

More information

6 NONSEASONAL BOX-JENKINS MODELS

6 NONSEASONAL BOX-JENKINS MODELS 6 NONSEASONAL BOX-JENKINS MODELS In this section, we will discuss a class of models for describing time series commonly referred to as Box-Jenkins models. There are two types of Box-Jenkins models, seasonal

More information

Ross Bettinger, Analytical Consultant, Seattle, WA

Ross Bettinger, Analytical Consultant, Seattle, WA ABSTRACT DYNAMIC REGRESSION IN ARIMA MODELING Ross Bettinger, Analytical Consultant, Seattle, WA Box-Jenkins time series models that contain exogenous predictor variables are called dynamic regression

More information

Problem Set 1 Solution Sketches Time Series Analysis Spring 2010

Problem Set 1 Solution Sketches Time Series Analysis Spring 2010 Problem Set 1 Solution Sketches Time Series Analysis Spring 2010 1. Construct a martingale difference process that is not weakly stationary. Simplest e.g.: Let Y t be a sequence of independent, non-identically

More information

Generalised AR and MA Models and Applications

Generalised AR and MA Models and Applications Chapter 3 Generalised AR and MA Models and Applications 3.1 Generalised Autoregressive Processes Consider an AR1) process given by 1 αb)x t = Z t ; α < 1. In this case, the acf is, ρ k = α k for k 0 and

More information

Time Series Outlier Detection

Time Series Outlier Detection Time Series Outlier Detection Tingyi Zhu July 28, 2016 Tingyi Zhu Time Series Outlier Detection July 28, 2016 1 / 42 Outline Time Series Basics Outliers Detection in Single Time Series Outlier Series Detection

More information

The autocorrelation and autocovariance functions - helpful tools in the modelling problem

The autocorrelation and autocovariance functions - helpful tools in the modelling problem The autocorrelation and autocovariance functions - helpful tools in the modelling problem J. Nowicka-Zagrajek A. Wy lomańska Institute of Mathematics and Computer Science Wroc law University of Technology,

More information

Non-Stationary Time Series and Unit Root Testing

Non-Stationary Time Series and Unit Root Testing Econometrics II Non-Stationary Time Series and Unit Root Testing Morten Nyboe Tabor Course Outline: Non-Stationary Time Series and Unit Root Testing 1 Stationarity and Deviation from Stationarity Trend-Stationarity

More information

NANYANG TECHNOLOGICAL UNIVERSITY SEMESTER II EXAMINATION MAS451/MTH451 Time Series Analysis TIME ALLOWED: 2 HOURS

NANYANG TECHNOLOGICAL UNIVERSITY SEMESTER II EXAMINATION MAS451/MTH451 Time Series Analysis TIME ALLOWED: 2 HOURS NANYANG TECHNOLOGICAL UNIVERSITY SEMESTER II EXAMINATION 2012-2013 MAS451/MTH451 Time Series Analysis May 2013 TIME ALLOWED: 2 HOURS INSTRUCTIONS TO CANDIDATES 1. This examination paper contains FOUR (4)

More information

Time Series Models and Inference. James L. Powell Department of Economics University of California, Berkeley

Time Series Models and Inference. James L. Powell Department of Economics University of California, Berkeley Time Series Models and Inference James L. Powell Department of Economics University of California, Berkeley Overview In contrast to the classical linear regression model, in which the components of the

More information

Basic concepts and terminology: AR, MA and ARMA processes

Basic concepts and terminology: AR, MA and ARMA processes ECON 5101 ADVANCED ECONOMETRICS TIME SERIES Lecture note no. 1 (EB) Erik Biørn, Department of Economics Version of February 1, 2011 Basic concepts and terminology: AR, MA and ARMA processes This lecture

More information

Minitab Project Report - Assignment 6

Minitab Project Report - Assignment 6 .. Sunspot data Minitab Project Report - Assignment Time Series Plot of y Time Series Plot of X y X 7 9 7 9 The data have a wavy pattern. However, they do not show any seasonality. There seem to be an

More information

Class 1: Stationary Time Series Analysis

Class 1: Stationary Time Series Analysis Class 1: Stationary Time Series Analysis Macroeconometrics - Fall 2009 Jacek Suda, BdF and PSE February 28, 2011 Outline Outline: 1 Covariance-Stationary Processes 2 Wold Decomposition Theorem 3 ARMA Models

More information

TIME SERIES ANALYSIS AND FORECASTING USING THE STATISTICAL MODEL ARIMA

TIME SERIES ANALYSIS AND FORECASTING USING THE STATISTICAL MODEL ARIMA CHAPTER 6 TIME SERIES ANALYSIS AND FORECASTING USING THE STATISTICAL MODEL ARIMA 6.1. Introduction A time series is a sequence of observations ordered in time. A basic assumption in the time series analysis

More information

Akaike criterion: Kullback-Leibler discrepancy

Akaike criterion: Kullback-Leibler discrepancy Model choice. Akaike s criterion Akaike criterion: Kullback-Leibler discrepancy Given a family of probability densities {f ( ; ψ), ψ Ψ}, Kullback-Leibler s index of f ( ; ψ) relative to f ( ; θ) is (ψ

More information

Module 4. Stationary Time Series Models Part 1 MA Models and Their Properties

Module 4. Stationary Time Series Models Part 1 MA Models and Their Properties Module 4 Stationary Time Series Models Part 1 MA Models and Their Properties Class notes for Statistics 451: Applied Time Series Iowa State University Copyright 2015 W. Q. Meeker. February 14, 2016 20h

More information

Empirical Market Microstructure Analysis (EMMA)

Empirical Market Microstructure Analysis (EMMA) Empirical Market Microstructure Analysis (EMMA) Lecture 3: Statistical Building Blocks and Econometric Basics Prof. Dr. Michael Stein michael.stein@vwl.uni-freiburg.de Albert-Ludwigs-University of Freiburg

More information

Ch 4. Models For Stationary Time Series. Time Series Analysis

Ch 4. Models For Stationary Time Series. Time Series Analysis This chapter discusses the basic concept of a broad class of stationary parametric time series models the autoregressive moving average (ARMA) models. Let {Y t } denote the observed time series, and {e

More information

Chapter 8: Model Diagnostics

Chapter 8: Model Diagnostics Chapter 8: Model Diagnostics Model diagnostics involve checking how well the model fits. If the model fits poorly, we consider changing the specification of the model. A major tool of model diagnostics

More information

Dynamic Time Series Regression: A Panacea for Spurious Correlations

Dynamic Time Series Regression: A Panacea for Spurious Correlations International Journal of Scientific and Research Publications, Volume 6, Issue 10, October 2016 337 Dynamic Time Series Regression: A Panacea for Spurious Correlations Emmanuel Alphonsus Akpan *, Imoh

More information

Read Section 1.1, Examples of time series, on pages 1-8. These example introduce the book; you are not tested on them.

Read Section 1.1, Examples of time series, on pages 1-8. These example introduce the book; you are not tested on them. TS Module 1 Time series overview (The attached PDF file has better formatting.)! Model building! Time series plots Read Section 1.1, Examples of time series, on pages 1-8. These example introduce the book;

More information

Univariate ARIMA Models

Univariate ARIMA Models Univariate ARIMA Models ARIMA Model Building Steps: Identification: Using graphs, statistics, ACFs and PACFs, transformations, etc. to achieve stationary and tentatively identify patterns and model components.

More information

LECTURES 2-3 : Stochastic Processes, Autocorrelation function. Stationarity.

LECTURES 2-3 : Stochastic Processes, Autocorrelation function. Stationarity. LECTURES 2-3 : Stochastic Processes, Autocorrelation function. Stationarity. Important points of Lecture 1: A time series {X t } is a series of observations taken sequentially over time: x t is an observation

More information

Econometrics of financial markets, -solutions to seminar 1. Problem 1

Econometrics of financial markets, -solutions to seminar 1. Problem 1 Econometrics of financial markets, -solutions to seminar 1. Problem 1 a) Estimate with OLS. For any regression y i α + βx i + u i for OLS to be unbiased we need cov (u i,x j )0 i, j. For the autoregressive

More information

MAT 3379 (Winter 2016) FINAL EXAM (SOLUTIONS)

MAT 3379 (Winter 2016) FINAL EXAM (SOLUTIONS) MAT 3379 (Winter 2016) FINAL EXAM (SOLUTIONS) 15 April 2016 (180 minutes) Professor: R. Kulik Student Number: Name: This is closed book exam. You are allowed to use one double-sided A4 sheet of notes.

More information

Exercises - Time series analysis

Exercises - Time series analysis Descriptive analysis of a time series (1) Estimate the trend of the series of gasoline consumption in Spain using a straight line in the period from 1945 to 1995 and generate forecasts for 24 months. Compare

More information

STAT Financial Time Series

STAT Financial Time Series STAT 6104 - Financial Time Series Chapter 4 - Estimation in the time Domain Chun Yip Yau (CUHK) STAT 6104:Financial Time Series 1 / 46 Agenda 1 Introduction 2 Moment Estimates 3 Autoregressive Models (AR

More information

Classic Time Series Analysis

Classic Time Series Analysis Classic Time Series Analysis Concepts and Definitions Let Y be a random number with PDF f Y t ~f,t Define t =E[Y t ] m(t) is known as the trend Define the autocovariance t, s =COV [Y t,y s ] =E[ Y t t

More information

STAT 520: Forecasting and Time Series. David B. Hitchcock University of South Carolina Department of Statistics

STAT 520: Forecasting and Time Series. David B. Hitchcock University of South Carolina Department of Statistics David B. University of South Carolina Department of Statistics What are Time Series Data? Time series data are collected sequentially over time. Some common examples include: 1. Meteorological data (temperatures,

More information

ECON 616: Lecture 1: Time Series Basics

ECON 616: Lecture 1: Time Series Basics ECON 616: Lecture 1: Time Series Basics ED HERBST August 30, 2017 References Overview: Chapters 1-3 from Hamilton (1994). Technical Details: Chapters 2-3 from Brockwell and Davis (1987). Intuition: Chapters

More information

Chapter 2. Mathematical Reasoning. 2.1 Mathematical Models

Chapter 2. Mathematical Reasoning. 2.1 Mathematical Models Contents Mathematical Reasoning 3.1 Mathematical Models........................... 3. Mathematical Proof............................ 4..1 Structure of Proofs........................ 4.. Direct Method..........................

More information