
Multivariate Autoregressive and Ornstein-Uhlenbeck Processes: Estimates for Order, Parameters, Spectral Information, and Confidence Regions

Arnold Neumaier, Universität Wien, and Tapio Schneider, Princeton University

Fast methods are presented for identifying a multivariate autoregressive model that is appropriate to represent large, potentially high-dimensional time series data as they occur, e.g., in geophysical applications. The algorithms are based on the concept of least-squares estimation, which is known to yield consistent and asymptotically unbiased coefficient matrix estimates that also perform well on small samples. For order selection, a new and useful modification of Schwarz' Bayesian Criterion is introduced. As the interpretation of a model is often facilitated by an analysis of its eigenmodes, the spectral decomposition of autoregressive processes of arbitrary order is considered. The discussion includes the computation of confidence intervals for the estimated eigenmodes, eigenvalues, and other spectral information. Numerical experiments demonstrate the efficiency of the proposed estimation techniques. Further, it is shown that a discrete sample of an Ornstein-Uhlenbeck process forms a first-order autoregressive (AR(1)) process. This fact reduces both the simulation and estimation of Ornstein-Uhlenbeck processes to the simulation and estimation of a corresponding AR(1) process.

Categories and Subject Descriptors: G.3 [Mathematics of Computing]: Probability and Statistics - statistical computing, statistical software; I.6 [Computing Methodologies]: Simulation and Modeling; I.6.4 [Simulation and Modeling]: Model Validation and Analysis; J.2 [Computer Applications]: Physical Sciences and Engineering - Earth and atmospheric sciences

General Terms: Algorithms, Performance, Reliability, Theory

Additional Key Words and Phrases: confidence regions, least squares, model identification, multivariate autoregressive process, order selection, Ornstein-Uhlenbeck process, parameter estimation, spectral decomposition

A substantial part of this work was performed while the second author was with the Department of Physics at the University of Washington in Seattle. Support by the Fulbright Commission during that time and continuing support by the German National Scholarship Foundation are gratefully acknowledged.

Name: Arnold Neumaier. Affiliation: Institut für Mathematik, Universität Wien. Address: Strudlhofgasse 4, A-1090 Wien, Austria, neum@cma.univie.ac.at

Name: Tapio Schneider. Affiliation: AOS Program, Princeton University. Address: P.O. Box CN710, Princeton, NJ 08544, U.S.A., tapio@splash.princeton.edu

1. INTRODUCTION

We consider modeling of stationary time series data by a multivariate autoregressive process of order p (AR(p) process),

v_ν = w + Σ_{l=1}^{p} A_l v_{ν-l} + ε_ν,   ε_ν = noise(C),   ν = 1, ..., N,   (1)

where v_ν (ν = 1-p, ..., N) is a time series of m-dimensional state vectors, observed at equally spaced instants ν, and ε_ν = noise(C) is shorthand for saying that the ε_ν are uncorrelated m-dimensional random vectors with zero mean and covariance matrix C. The m-dimensional parameter vector w of intercept terms is included to allow for a nonzero mean of the time series (cf., e.g., Lütkepohl [1993, Chapter 2]),

⟨v_ν⟩ = (I - A_1 - ... - A_p)^{-1} w,

where ⟨·⟩ stands for the expectation operation.

The process of identifying an AR model that is appropriate to represent observed time series data involves several intertwined stages (cf. Tiao and Box [1981]): selecting the order of the model, estimating the model parameters, and diagnostic checking of the fitted model's adequacy. In the following, we shall focus on the selection of the model order p and the estimation of the unknown parameters A_1, ..., A_p, w, and C. As this study was motivated by problems in geophysical climate research that involve large amounts of time series data, the emphasis is on the computational efficiency of the presented algorithms.

If the process order p is known, various techniques are available to estimate the coefficient matrices A_1, ..., A_p, the intercept vector w, and the covariance matrix C from an observed time series of state vectors v_ν. The maximum likelihood (ML) approach [Ansley and Kohn 1983] treats an AR model essentially by assuming that all states v_ν consist of correlated noise whose covariance matrix derives from (1). Since this state covariance matrix depends on the unknown parameters, maximizing the likelihood results in an iterative procedure, which leads to estimates that are not only asymptotically efficient as N → ∞, but also show good small-sample performance [Brockwell and Davis 1991]. In addition, with the reparameterization due to Ansley and Kohn [1986], stability of the fitted model can be enforced. While these are desirable properties for an estimation method, the computational complexity of the ML procedure renders it slow, so that for large problems, fast algorithms are called for that either are sufficiently accurate to replace it completely or provide good starting points for a subsequent ML iteration.

Among commonly used estimates that can be computed via fast algorithms are the Yule-Walker estimates (cf., e.g., Brockwell and Davis [1991], Honerkamp [1994, pp. 458ff.]), Burg-type estimates [Burg 1978a; Burg 1978b], and least squares estimates (see, e.g., Lütkepohl [1993, Chapter 3]), each of them obtaining its theoretical justification from considerations of its asymptotic behavior as N → ∞. While these estimates are all asymptotically unbiased and, in fact, can be shown to be asymptotically equivalent, their finite-sample behavior differs significantly. Tjøstheim and Paulsen [1983] computed expressions for the large-sample bias of Yule-Walker and least squares estimates. Both their theory and simulations indicate that, at least in the univariate second-order case, the Yule-Walker estimates can be severely biased when the process is only marginally stable, even when the sample is comparatively large. In fact, the bias of the estimates can be appreciably larger than their standard deviation. The least squares estimates, on the other hand, are free from this defect. Our own numerical experiments with these algorithms, reported in Section 7, confirm these findings and demonstrate that, at least in our examples, the least squares estimates also have smaller variance than Burg's estimates. Indeed, in these experiments the least squares approach yields estimates of about the same quality as the ML procedure, typically at a small fraction of the ML method's computational expense.

In the discussion so far, the model order p was assumed to be known a priori, which, however, is rarely the case in practice. On the contrary, the order usually has to be estimated from the data at hand, too. In selecting the model order, one has to find a trade-off between the flexibility gained by increasing the number of adjustable parameters with increasing p and the danger of overfitting, i.e., tailoring the fit too closely to the particular process realization that was observed. It is common practice to select an optimal model order by consecutively fitting models of increasing order p ≤ p_max to the data, evaluating a suitably defined goodness-of-fit measure (an order selection criterion) for each p, and eventually choosing as the best-fitting model the one that optimizes the employed order selection criterion. However, for large data sets this may become a tedious and computationally costly procedure, as the evaluation of the order selection criterion at each order p requires estimation of the corresponding model parameters. Fortunately, as we shall show below, the least squares computations can be organized in an efficient manner: for a given p_max, one can compute, with essentially the same work as that needed for the least squares estimates of an AR(p_max) model, both an optimal order p_opt ≤ p_max and the least squares estimate for the optimal model. If the resulting model has order p_opt < p_max, a small price has to be paid for this efficiency in that the first p_max - p_opt data points of the time series are ignored.

Frequently employed order selection criteria are Akaike's Final Prediction Error FPE [Akaike 1971] and Schwarz' Bayesian Criterion SBC [Schwarz 1978]. Comparing the relative merits of these and several other criteria in an extensive simulation study with low-order, low-dimensional autoregressive processes, Lütkepohl [1985] found that SBC selected the correct order most frequently and also led to the smallest mean squared forecasting error. We shall present a modification MSC of SBC that, in an experiment analogous to one of Lütkepohl's, turns out to estimate the correct model order even more reliably than the original SBC, in particular for short time series.

In many applications, after having obtained a model that adequately represents a given time series, one seeks to gain an understanding of the dynamics of the observed system by analyzing and interpreting the fitted model. As suggested by Tiao and Box [1981], to this end it is often informative to perform various eigenvalue and eigenvector analyses. For example, Box and Tiao [1977] propose a decomposition of a multivariate autoregressive process into components that are ordered from least to most predictable, the most predictable components often being nearly unstable and representing dynamic growth characteristics of the time series. Box and Tiao [1977] apply this analysis to the interpretation of economic time series that exhibit dynamic growth. In the investigation of stochastic dynamical systems with oscillatory behavior, on the other hand, the spectral decomposition of autoregressive processes can be expected to be useful. For example, in the atmospheric and oceanic sciences, the analysis of an AR(1) process' eigenmodes (often called principal oscillation patterns, a term coined by Hasselmann [1988]) has become a widely used data analysis tool (see von Storch et al. [1988], Blumenthal [1991], Schnur et al. [1993], Xue et al. [1994], von Storch et al. [1995], Penland and Sardeshmukh [1995b], and references therein, to name but a few examples). However, all these studies utilized first-order processes, a fact that limits the applicability of this technique to the fairly restricted class of time series that can be modeled adequately by such processes. In order to gain the ability to model a wider class of time series data, we shall generalize the spectral decomposition of AR(1) processes to processes of arbitrary order. Furthermore, we shall introduce an index that orders the resulting eigenmodes according to their relative dynamical importance, and it will be demonstrated that confidence intervals for the estimated spectral information can be obtained at low cost along with the estimates themselves. Our approach to finding confidence intervals for the spectral information contrasts with that put forward in a recent paper by Penland and Sardeshmukh [1995a], who relied on a costly Monte-Carlo simulation method to obtain confidence intervals for estimated eigenvectors and eigenvalues of first-order processes.

Rather than autoregressive processes, Penland and Sardeshmukh [1995a] and several of the other authors cited above were actually using Ornstein-Uhlenbeck processes, the continuous analogs of the discrete AR(1) processes. This often led to situations in which estimates with undesirable properties, such as indefinite covariance matrix estimates, were employed, and very expensive direct numerical integration schemes were used to simulate Ornstein-Uhlenbeck processes. In circumstances in which the modeling approach is purely empirical and no physical reason demands a model with a continuous time variable, the intricacies arising from modeling with continuous processes can be avoided by resorting to discrete models. In those cases, however, where one still desires to model data with Ornstein-Uhlenbeck processes, the abovementioned difficulties can be circumvented by noting that, as we shall show, a discrete sample of an Ornstein-Uhlenbeck process, taken at regular intervals of arbitrary length, forms an AR(1) process. This fact reduces the parameter estimation problem for a multivariate Ornstein-Uhlenbeck process to that of a related AR(1) process, and it can be exploited to greatly simplify the simulation of Ornstein-Uhlenbeck processes.

The paper is structured as follows: In Section 2, numerical techniques for least squares estimation of parameters in autoregressive models are presented, and in Section 3 the computation of confidence regions for estimated parameters and functions thereof is discussed. Section 4 deals with criteria for the selection of an autoregressive model's order, including an account of their efficient computation that draws on the results of Section 2.
This is followed by a discussion of the spectral decomposition of autoregressive processes in Section 5, which contains the relevant formulas for the calculation of confidence intervals for eigenvalues, eigenmodes, the stationary state covariance matrix, and derived spectral information. In Section 6, we relate equidistant samples of an Ornstein-Uhlenbeck process to a corresponding AR(1) process and discuss the implications for the estimation, simulation, and spectral decomposition of Ornstein-Uhlenbeck processes. Finally, we report in Section 7 results of numerical experiments with the presented algorithms.

Most of the methods proposed below were implemented in the Matlab package ARfit (available from ~neum/software/arfit/), which is described in a companion paper [Schneider and Neumaier 1997]. Where appropriate in the subsequent discussions, we will point the reader to modules in ARfit that contain implementations of the algorithms under consideration.

Notation. A_{:k} denotes the kth column of the matrix A. A^T is the transpose, and A^† the conjugate transpose of A. The inverse of A^† is written as A^{-†}, and the superscript * denotes complex conjugation.

2. LEAST SQUARES ESTIMATION FOR AUTOREGRESSIVE MODELS

Throughout this section, we suppose that the time series v_{1-p}, ..., v_N is known to be a realization of a pth order autoregressive process. We consider the least squares estimation of the unknown model parameters A_1, ..., A_p, w, and C from the time series data, deferring the case of unknown model order to Section 4. The following Section 2.1 serves to introduce the notation by means of a brief review of the least squares estimation technique; for details of its derivation and a more general discussion of the estimates' properties, the reader may consult, for example, Lütkepohl [1993, Chapter 3]. Subsequently, in Section 2.2, we shall present an algorithm for the numerical computation of the estimates.

2.1 Form and properties of least squares estimates

The least squares estimates for autoregressive processes are most conveniently expressed when the AR model (1) is cast in the form

v_ν = A u_ν + ε_ν,   ε_ν = noise(C),   ν = 1, ..., N,   (2)

with coefficient matrix

A = (w A_1 ... A_p)   (3)

and

u_ν = (1; v_{ν-1}; ...; v_{ν-p}) ∈ R^n,   n = mp + 1.   (4)

The key step in the derivation of the least squares estimates is to view (2) as a linear regression model with fixed predictors u_ν, which is an approximation, as for measured time series the u_ν are realizations of a random variable. However, (4) implies that the assumption of fixed predictors amounts to treating

u_1 = (1; v_0; ...; v_{1-p})

as a vector of fixed initial values, and since the relative impact of the initial condition vanishes as the time series length N approaches infinity, using parameter estimates for the regression model (2) in the autoregressive model (1) can be expected to be asymptotically correct.

For the linear regression model (2), the least squares procedure yields the best linear unbiased estimates. Introducing the matrices

U = Σ_{ν=1}^{N} u_ν u_ν^T,   V = Σ_{ν=1}^{N} v_ν v_ν^T,   W = Σ_{ν=1}^{N} v_ν u_ν^T,   (5)

the estimate for the coefficient matrix A can be written as

Â = W U^{-1},   (6)

and an estimate for the covariance matrix C is given by

Ĉ = 1/(N - n) Σ_{ν=1}^{N} ε̂_ν ε̂_ν^T   with   ε̂_ν = v_ν - Â u_ν.

In the covariance matrix estimate Ĉ, an adjustment for degrees of freedom was applied, resulting in the factor 1/(N - n) in place of 1/N, as for the regression model this adjustment leads to an unbiased estimator. Alternatively, the covariance estimate can be expressed as

Ĉ = 1/(N - n) (V - W U^{-1} W^T),   (7)

which is proportional to the Schur complement of the matrix [U W^T; W V] and therefore assured to be positive semidefinite.

Though these estimates are derived for the regression model with fixed predictors, which only approximates the AR model, it can nevertheless be shown that, under fairly general conditions on the noise term in (1), the estimators (6) and (7), when applied to the AR model with coefficient matrix (3) and "predictors" (4), are still consistent and asymptotically normal (see, e.g., Lütkepohl [1993, Chapter 3]). In Section 3, we shall take advantage of these facts in the computation of confidence intervals for the estimated parameters. However, first we consider the numerical solution of the matrix equations (6) and (7) for the parameter estimates.

2.2 Numerical considerations

As noted above, Ĉ is the Schur complement of the matrix

[U W^T; W V] = Σ_{ν=1}^{N} (u_ν; v_ν)(u_ν^T v_ν^T) = K^T K,   (8)

where

K = [u_1^T v_1^T; ...; u_N^T v_N^T].   (9)
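Before turning to the numerically preferable QR route described next, the estimators (5)-(7) can be made concrete in a few lines. The following NumPy sketch (the function name ar_least_squares and the data layout are illustrative assumptions, and this is not ARfit's module ar) solves the regression (2)-(4) with a generic least squares solver rather than forming U, V, W explicitly:

```python
import numpy as np

def ar_least_squares(v, p):
    """Least squares estimates of w, A_1..A_p, C for the AR(p) model (1)-(4).

    v : array of shape (N + p, m); row 0 holds v_{1-p}, the last row holds v_N.
    A minimal sketch of the estimators (5)-(7); ARfit uses the QR route instead.
    """
    rows, m = v.shape
    N = rows - p
    n = m * p + 1
    # predictor rows u_nu^T = (1, v_{nu-1}^T, ..., v_{nu-p}^T), eq. (4)
    U_rows = np.hstack([np.ones((N, 1))] +
                       [v[p - l: p - l + N, :] for l in range(1, p + 1)])
    V_rows = v[p:, :]                              # responses v_1, ..., v_N
    # coefficient estimate (6): solve V_rows ~ U_rows @ B^T with B = (w A_1 ... A_p)
    B_T, *_ = np.linalg.lstsq(U_rows, V_rows, rcond=None)
    B = B_T.T                                      # shape (m, n)
    resid = V_rows - U_rows @ B_T                  # residuals eps_hat_nu
    C_hat = resid.T @ resid / (N - n)              # degrees-of-freedom adjusted, eq. (7)
    w_hat = B[:, 0]
    A_hat = [B[:, 1 + l * m: 1 + (l + 1) * m] for l in range(p)]
    return w_hat, A_hat, C_hat
```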

A QR factorization of K,

K = Q R,   Q orthogonal,   R = [R_11 R_12; 0 R_22] upper triangular,   (10)

provides an easy-to-implement way of computing the parameter estimates (6) and (7) numerically: rewriting (8) as

[U W^T; W V] = R^T R = [R_11^T R_11, R_11^T R_12; R_12^T R_11, R_12^T R_12 + R_22^T R_22],

one finds the representation

Â^T = R_11^{-1} R_12,   Ĉ = 1/(N - n) R_22^T R_22   (11)

for the least squares estimates of the coefficient matrix A and the covariance matrix C, respectively. Representation (11) yields Â as the solution of triangular systems of equations and provides Ĉ in a factored form that exhibits the semidefiniteness explicitly.

As an alternative to the QR factorization, one can obtain R = L^T also from a Cholesky decomposition

[U W^T; W V] = L L^T.

However, when U or Ĉ are ill-conditioned, using a QR factorization to calculate (11) is numerically more stable than a Cholesky decomposition. For this reason, the former is our recommended approach. To reduce the effect of rounding errors in extremely ill-conditioned cases, it is profitable to calculate a Cholesky factorization of K^T K + εD² = R^T R, where D = Diag(Σ_i |k_{ij}|). This means that one has to use in place of (10) a regularized QR factorization of the form

[K; √ε D] = Q R.

ARfit's module ar produces the least squares estimates for multivariate autoregressive models using this QR factorization technique. Probably the above crude regularization scheme could be made even more effective by using adaptive regularization techniques; see, e.g., Hanke and Hansen [1993] or Neumaier [1997].

3. CONFIDENCE REGIONS

Let

θ = (x_A; x_C)

be the parameter vector containing the elements of the matrices A and C, where

x_A = (A_{:1}; ...; A_{:n}) ∈ R^{mn}

consists of the components of the matrix A ∈ R^{m×n}, arranged as a vector by stacking adjacent columns A_{:j} of A. The symmetry of the covariance matrix implies that it would suffice for θ to contain the elements in the upper triangle of C only, but for notational simplicity in what follows, let θ consist of all elements of C.

In the following, we shall discuss the computation of confidence intervals for a real-valued function φ = φ(θ) of the parameters. For example, φ may stand for one of the entries in A or C, or, in a later section, for a component of an eigenvector of an AR(1) process' coefficient matrix. Consider the propagation of errors θ - θ̂ in the parameter estimate θ̂ into the estimate φ̂ = φ(θ̂). (Here and henceforth, the superscript ^ refers to estimated quantities, i.e., functions of the parameters with the respective parameter estimates (6) and (7) substituted for A and C.) Linearizing φ about θ̂ leads to

φ - φ̂ ≈ (∇φ̂)^T (θ - θ̂),

where ∇φ̂ denotes the gradient of φ(θ) at θ = θ̂. Using this linearization, one can approximate the mean squared error σ² = ⟨(φ - φ̂)²⟩ as

σ² ≈ (∇φ̂)^T Σ_θ ∇φ̂,   (12)

where

Σ_θ = ⟨(θ - θ̂)(θ - θ̂)^T⟩

is the covariance matrix of the estimator θ̂. The relation (12) becomes exact when the residues θ - θ̂ follow a normal distribution, because in this case their higher cumulants vanish. If in addition to the estimator being normal its covariance matrix is known, (φ - φ̂)²/σ² is a χ² distributed random variable, and this implies that a confidence level associated with the confidence interval

|φ - φ̂| ≤ δ   (13)

is given by the probability P that a realization of a χ² distributed random variable is smaller than λ² = δ²/σ². Thus, for a given confidence level P, one may assert (13) with

δ = λ σ,   (14)

where σ is given by (12); the parameter λ is the solution of

P = γ(1/2, λ²/2),

where

γ(a, x) = 1/Γ(a) ∫_0^x t^{a-1} e^{-t} dt   (15)

is the incomplete gamma function.
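In code, solving (15) for λ amounts to inverting a regularized incomplete gamma function. The sketch below uses scipy.special.gammaincinv, whose normalization includes the 1/Γ(a) factor and therefore matches γ as defined in (15); the function name margin is an illustrative assumption, not part of ARfit:

```python
import numpy as np
from scipy.special import gammaincinv

def margin(P, sigma):
    """Half-width delta = lambda * sigma of the confidence interval (13)-(15)
    for confidence level P and standard error sigma from (12)."""
    lam = np.sqrt(2.0 * gammaincinv(0.5, P))   # solves P = gamma(1/2, lam^2/2)
    return lam * sigma

# For P = 0.683 one obtains lambda close to 1, i.e., the familiar
# one-standard-deviation interval used in the examples of Section 7.
```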

Application of these formulas requires an expression for the covariance matrix Σ_θ of the estimates. Under fairly general conditions on the noise in the AR model, it can be shown that, as the sample size approaches infinity, the residues θ - θ̂ are asymptotically normally distributed with zero mean and covariance matrix (cf. Lütkepohl [1993, Chapter 3.2.2])

Σ̂_θ = [Σ̂_A, 0; 0, Σ̂_C],   (16)

where

Σ̂_A = U^{-1} ⊗ Ĉ   and   Σ̂_C = 2/(N - n) Ĉ ⊗ Ĉ.   (17)

Here, A ⊗ B denotes the Kronecker product of the matrices A and B, and in the estimate Σ̂_C again a degrees-of-freedom adjustment was applied. Substituting this asymptotic estimate Σ̂_θ for Σ_θ in (12) enables one to compute approximate confidence intervals for φ̂ by means of (13)-(17).

For the special cases φ = A_{jk} and φ = C_{jk}, the gradient ∇φ is a column of the identity matrix. In this case, (12) implies that σ² is just the corresponding diagonal element of the covariance matrix Σ̂_θ. More explicitly, using (14) and the Kronecker product property

(A ⊗ B)_{ll} = A_{kk} B_{jj}   if l = m(k - 1) + j,

one finds the following formulas for the confidence intervals of the estimated parameters:

|A_{jk} - Â_{jk}| ≤ λ √((Σ̂_A)_{ll}) = λ √((U^{-1})_{kk} Ĉ_{jj}),   (18)

|C_{jk} - Ĉ_{jk}| ≤ λ √((Σ̂_C)_{ll}) = λ √(2/(N - n) Ĉ_{kk} Ĉ_{jj}).   (19)

Here the parameter λ is the solution of (15) for a given confidence level P. The routine arconf in the ARfit package uses these expressions to compute confidence intervals for the estimated parameters of an autoregressive process.

4. ORDER SELECTION

In practice, the model order is frequently unknown and must be estimated from the data, too. After discussing various order selection criteria in the following subsection, we show how to evaluate them efficiently in Section 4.2. In Section 4.3, we verify that our combined order selection and parameter estimation scheme works with essentially the same cost as required to estimate parameters in the fixed order case.

4.1 Selection criteria

For the computation of order selection criteria, we explicitly consider the dependence on the model order p and write

Δ_p = (N - n_p) Ĉ = R_22^T R_22,   n_p = n = mp + 1.

Equations (11) for the least squares estimates then become

Â^T = R_11^{-1} R_12,   Ĉ = Δ_p / (N - n_p).   (20)

According to Lütkepohl [1993, Chapter 4], who discusses in detail a number of order selection criteria, a useful order estimate is the value of p that minimizes Schwarz' Bayesian Criterion (SBC, see Schwarz [1978]), defined by

SBC(p) = log σ_p² + n_p (log N)/N,

where

σ_p² = (det(Δ_p / N))^{1/m} = (det Δ_p)^{1/m} / N

is a scalar measure for the (not bias corrected) variance of the estimator. (We divided Lütkepohl's expression for SBC by the constant dimension m of the state vectors.) Using the abbreviation

l_p = log det Δ_p,

one finds the equivalent expression

SBC(p) = l_p/m - (1 - n_p/N) log N.   (21)

Another order estimate, based on the expected one-step prediction error, is the value of p that minimizes Akaike's [1971] Final Prediction Error criterion FPE, given by

FPE(p) = log σ_p² + log((N + n_p)/(N - n_p)).

(We divided the logarithm of the standard expression ((N + n_p)/(N - n_p))^m det(Δ_p/N) for FPE by m.) In terms of l_p, this leads to

FPE(p) = l_p/m - log(N(N - n_p)/(N + n_p)).   (22)

It is known that SBC is a strongly consistent order selector [Lütkepohl 1993]. SBC generally predicts the order much better than FPE, which possesses no such consistency property. To improve upon the small-sample properties of SBC, we tried a number of modifications of SBC that behave asymptotically like SBC. As a result of extensive tests, we propose the Modified Schwarz Criterion (MSC), defined by

MSC(p) = l_p/m - (1 - 2.5 n_p/(N - n_p)) (log N - 2.5).   (23)

Just as SBC, MSC still satisfies the strong consistency conditions given in Lütkepohl [1993, Chapter 4]. Indeed, for long time series MSC behaves in practice essentially as SBC, but for short time series, MSC consistently gives a significantly reduced fraction of wrong order estimates. (See Section 7.3 for an illustrative example.)
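Once l_p = log det Δ_p is available for a candidate order p, the three criteria are cheap to evaluate. The following sketch simply transcribes (21)-(23); the helper name order_criteria is an illustrative assumption and this is not ARfit's arord:

```python
import numpy as np

def order_criteria(l_p, m, N, p):
    """Evaluate SBC (21), FPE (22), and MSC (23) for one candidate order p,
    given l_p = log det Delta_p, state dimension m, and sample size N."""
    n_p = m * p + 1                                    # predictors, intercept included
    sbc = l_p / m - (1.0 - n_p / N) * np.log(N)
    fpe = l_p / m - np.log(N * (N - n_p) / (N + n_p))
    msc = l_p / m - (1.0 - 2.5 * n_p / (N - n_p)) * (np.log(N) - 2.5)
    return sbc, fpe, msc
```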

4.2 Efficient evaluation of selection criteria

Order selection criteria such as SBC and MSC can be computed efficiently for a sequence of AR(p) models with p ≤ p_max once the QR factorization (10) is available for p = p_max. There is no need to carry out the QR factorization (10) for each p in the sequence; as we shall show, from the factorization for p = p_max one can, with small additional effort, also obtain good parameter estimates for models of order p < p_max.

To find formulas that realize these savings, note that the matrix K, defined in (9), can be written as

K = (K_1 K_2),   K_1 = [u_1^T; ...; u_N^T],   K_2 = [v_1^T; ...; v_N^T].

When reducing some order p to the order p' = p - 1, one works instead with

K' = (K_1' K_2),

where K_1' is obtained from K_1 by removing the last m columns. Thus K_1 = (K_1' K_1''), where K_1'' contains the removed columns. If one partitions accordingly

R_11 = [R_11' R_11''; 0 R_11'''],   R_12 = [R_12'; R_12''],

by treating the last m rows and columns of R_11 and the last m rows of R_12 separately, the QR factorization (10) becomes

(K_1' K_1'' K_2) = Q [R_11' R_11'' R_12'; 0 R_11''' R_12''; 0 0 R_22].

Dropping in this equation the m superfluous columns, we get

K' = (K_1' K_2) = Q [R_11' R_12'; 0 R_22'],   with R_22' = [R_12''; R_22].

This is a factorization of the same kind as before, except that R_22' is no longer triangular; the latter, however, is of no relevance in the derivation of (20). Thus we have

Δ_{p-1} = (R_22')^T R_22' = R_22^T R_22 + (R_12'')^T R_12'',

and writing R_p = R_12'', we find the update formula

Δ_{p-1} = Δ_p + R_p^T R_p.

Using the Sherman-Morrison formulas

det(Δ_p + R_p^T R_p) = det Δ_p det(I + R_p Δ_p^{-1} R_p^T)

and

(Δ_p + R_p^T R_p)^{-1} = Δ_p^{-1} - Δ_p^{-1} R_p^T (I + R_p Δ_p^{-1} R_p^T)^{-1} R_p Δ_p^{-1}

[Sherman and Morrison 1949; Duncan 1944], the updates for the determinant term

l_p = log det Δ_p

and the inverse

M_p = Δ_p^{-1}

can be computed from a Cholesky factorization via

I + R_p M_p R_p^T = L_p L_p^T,   (24)

l_{p-1} = l_p + 2 log det L_p,   (25)

N_p = L_p^{-1} R_p M_p,   (26)

M_{p-1} = M_p - N_p^T N_p.   (27)

After having determined an optimal order p_opt, one finds the corresponding least squares parameter estimates from (20) on replacing the maximally sized R_11 and R_12 by their leading submatrices of size n_{p_opt}. This algorithm, using formulas (20)-(27), is implemented in the Matlab module arord.

Though the above procedure is very convenient when estimating an optimum model order p_opt by minimizing order selection criteria, it is not quite optimal: in the order p least squares estimates, p_max - p initial data points are ignored. However, asymptotically this does not affect the quality of the results, and for very short time series, where this might be a serious loss of information, one may repeat the least squares fit with p_max = p_opt.

4.3 Operation count

As K has N rows and at most 2mp_max + 1 columns, the work for the QR factorization in the above algorithm is of the order O(m² p_max² N). The updates (24)-(27) can be done with work cubic in the dimension 2mp_max of R. Since N ≥ mp_max, this contributes at most work of the same order as that required for the QR factorization, and for N ≫ mp_max the contribution of the updating process becomes negligible. Thus the total operation count is a small multiple of m² p_max² N, implying that our algorithm handles high-dimensional models and large data sets much more efficiently than standard procedures, which compute order selection criteria for a sequence of AR models by a separate factorization for each model order.
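To make the downdating recursion (24)-(27) concrete, the following NumPy sketch computes l_p for all orders p ≤ p_max from the QR blocks R_12 and R_22 obtained for p_max. It assumes the predictor layout of (4) (intercept column first, lag blocks in increasing order of lag); the function name arord_logdets is an illustrative assumption, and this is a sketch of the idea behind ARfit's arord, not its implementation:

```python
import numpy as np

def arord_logdets(R12, R22):
    """Return {p: l_p} for p = p_max, ..., 0 via the downdates (24)-(27),
    given R12 (n_pmax x m) and R22 (m x m) from the QR factorization (10)."""
    m = R22.shape[0]
    p_max = (R12.shape[0] - 1) // m              # n_p = m*p + 1 rows in R12
    Delta = R22.T @ R22                          # Delta_{p_max} = R22^T R22
    M = np.linalg.inv(Delta)                     # M_p = Delta_p^{-1}
    _, l = np.linalg.slogdet(Delta)              # l_{p_max} = log det Delta_{p_max}
    logdets = {p_max: l}
    R = R12.copy()
    for p in range(p_max, 0, -1):
        Rp = R[-m:, :]                           # R_p: rows of R12 belonging to lag p
        L = np.linalg.cholesky(np.eye(m) + Rp @ M @ Rp.T)   # (24)
        l = l + 2.0 * np.sum(np.log(np.diag(L)))            # (25)
        Np = np.linalg.solve(L, Rp @ M)                     # (26)
        M = M - Np.T @ Np                                   # (27)
        R = R[:-m, :]                            # discard those rows for the next step
        logdets[p - 1] = l
    return logdets
```

The returned l_p values can then be fed into the criteria (21)-(23) above to pick p_opt.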

5. SPECTRAL DECOMPOSITION OF AR PROCESSES

We shall now proceed to show how to estimate confidence intervals for spectral information computed from a fitted autoregressive model. To this end, we first introduce some notation by briefly reviewing the spectral decomposition of AR(1) processes; a similar account can, for example, be found in Honerkamp [1994, pp. 426ff.]. Then formulas for the computation of confidence intervals are obtained, and, in Section 5.2, the results for first order processes are generalized to autoregressive processes of arbitrary order. For notational simplicity, we restrict ourselves for the remainder of this paper to processes that are known to have zero mean, so that the intercept term w in (1) is exactly zero.

5.1 Spectral decomposition of AR(1) processes

Let us consider the AR(1) process

v_ν = A v_{ν-1} + ε_ν,   ε_ν = noise(C),   v_ν ∈ R^m.   (28)

Suppose that A is nondefective, so that a basis of (complex) eigenvectors exists, given by the columns S_{:k} of a matrix S. Then S is nonsingular and

A S_{:k} = λ_k S_{:k},   whence   A = S Λ S^{-1},   Λ = Diag(λ_k).   (29)

In this spectral basis, one can represent the state vectors v_ν and the noise vectors ε_ν as linear combinations

v_ν = Σ_{k=1}^{m} v_ν^(k) S_{:k} = S v_ν'   and   ε_ν = Σ_{k=1}^{m} ε_ν^(k) S_{:k} = S ε_ν'   (30)

with coefficient vectors v_ν' = (v_ν^(1), ..., v_ν^(m))^T and ε_ν' = (ε_ν^(1), ..., ε_ν^(m))^T. For these coefficient vectors, one obtains from (28) the dynamics

v_ν' = Λ v_{ν-1}' + ε_ν',   ε_ν' = noise(C'),   (31)

with

C' = S^{-1} C S^{-†}.   (32)

Writing the model (31) componentwise, it is evident that it represents a system of uncoupled processes with correlated noise,

v_ν^(k) = λ_k v_{ν-1}^(k) + ε_ν^(k),   (33)

⟨ε_ν^(k) ε_μ^(l)*⟩ = δ_{νμ} C'_{kl}   (34)

(where δ_{νμ} = 1 for ν = μ and δ_{νμ} = 0 otherwise). From (33) it follows that the expectation value ⟨v_ν^(k)⟩ = λ_k ⟨v_{ν-1}^(k)⟩, for arg λ_k ≠ 0,

⟨v_{ν+t}^(k)⟩ = λ_k^t ⟨v_ν^(k)⟩ = e^{-t/τ_k} e^{i(arg λ_k) t} ⟨v_ν^(k)⟩,

describes a spiral in the complex plane with damping time scale

τ_k ≡ -1/log|λ_k|   (35)

and period

T_k ≡ 2π/|arg λ_k|.   (36)

Here, arg z = Im(log z) denotes an argument of the complex number z, and we employ the convention -π < arg z ≤ π, which ensures that a pair of complex conjugate eigenvalues is associated with a single period.

For a stable process with nonsingular A, we have 0 < |λ_k| < 1 for all k = 1, ..., m, hence τ_k is positive and bounded. If λ_k has nonvanishing imaginary part, or if λ_k is real but negative, then T_k is bounded, too, and (33) is called a stochastically driven (damped) oscillator. The period attains its minimum value T_k = 2 when λ_k is real and negative, that is, |arg λ_k| = π; this is the period corresponding to the well-known Nyquist frequency from Fourier analysis. On the other hand, if λ_k is real and positive, then T_k → ∞ and (33) describes a stochastically driven relaxator. Thus, (30) analyzes the process (28) in terms of linear combinations of relaxators and damped oscillators, with relaxation and oscillation modes S_{:k} associated with damping time scales τ_k and periods T_k.

Real spectral decomposition. Since the matrix A is real, its eigenvalues and eigenvectors, when scaled appropriately, come in complex conjugate pairs, and one can also obtain a decomposition of an autoregressive process in terms of real rather than complex modes. Indeed, for any complex eigenvalue λ_k = a_k + i b_k, the corresponding process (33) can be written as a real bivariate AR(1) process

(Re v_ν^(k); Im v_ν^(k)) = [a_k, -b_k; b_k, a_k] (Re v_{ν-1}^(k); Im v_{ν-1}^(k)) + (Re ε_ν^(k); Im ε_ν^(k)),

with noise vectors

(Re ε_ν^(k); Im ε_ν^(k)) = 1/2 (ε_ν^(k') + ε_ν^(k); i(ε_ν^(k') - ε_ν^(k))).

Here, the quantities indexed by k' are conjugates of those indexed by k. From (34), using the Hermiticity of C', one now finds the noise covariance matrix

⟨(Re ε_ν^(k); Im ε_ν^(k)) (Re ε_ν^(l), Im ε_ν^(l))⟩
  = 1/4 [ C'_{kl} + C'_{k'l'} + C'_{k'l} + C'_{kl'},  -i(-C'_{kl} + C'_{k'l'} - C'_{k'l} + C'_{kl'});
          -i(C'_{kl} - C'_{k'l'} - C'_{k'l} + C'_{kl'}),  C'_{kl} + C'_{k'l'} - C'_{k'l} - C'_{kl'} ]
  = 1/2 [ Re C'_{kl} + Re C'_{kl'},  Im C'_{lk} + Im C'_{lk'};
          Im C'_{kl} + Im C'_{kl'},  Re C'_{kl} - Re C'_{kl'} ].

The corresponding eigenmodes of this process are given by the real and imaginary parts of the eigenvector S_{:k}.

State covariance matrix. The spectral representation (29) also provides a convenient means of computing the state covariance matrix Σ = ⟨v_ν v_ν^T⟩ of a stationary autoregressive process. Using (28) in the definition of the state covariance matrix, we have

Σ = ⟨(A v_{ν-1} + ε_ν)(A v_{ν-1} + ε_ν)^T⟩ = A ⟨v_{ν-1} v_{ν-1}^T⟩ A^T + ⟨ε_ν ε_ν^T⟩,

so that

Σ = A Σ A^T + C.   (37)

Substituting for A from (29) and for C from (32), one finds

Σ = S Σ' S^†   (38)

with Σ' a solution of Σ' = Λ Σ' Λ^† + C'. Since Λ is diagonal, this reads in components

(1 - λ_k λ_l*) Σ'_{kl} = C'_{kl}.

For k = l, this equation shows that, at least when C is positive definite (so that C' is definite, too, and has positive diagonal entries), all eigenvalues satisfy |λ_k| < 1. If this condition holds, the equations can be solved for Σ'_{kl} and give

Σ'_{kl} = C'_{kl} / (1 - λ_k λ_l*).   (39)

Moreover, (30) and (38) imply that (39) is the covariance matrix of the amplitudes,

⟨v_ν^(k) v_ν^(l)*⟩ = Σ'_{kl}.   (40)

Relative dynamical importance of modes. From (33) we infer that, if the columns of S are normalized to norm 1, the magnitudes of the (estimated) v_ν^(k) indicate the relative importance of the various relaxation and oscillation modes. One can thus define the excitation

σ_k ≡ ⟨|v_ν^(k)|²⟩   (41)

as an index of the relative importance of the various modes. From the representations (39) and (40) of the state covariance matrix in the spectral basis, it follows that

σ_k = C'_{kk} / (1 - |λ_k|²),

and this expression has intuitive appeal: it is the ratio of the forcing strength C'_{kk} over the damping 1 - |λ_k|² of mode k. In the process of interpreting a fitted autoregressive model, the excitation σ_k may provide a useful measure of the dynamical importance of the modes. Our suggestion of sorting modes according to their excitation contrasts with many studies (inter alios von Storch et al. [1988], von Storch et al. [1995], Penland and Sardeshmukh [1995b]), in which the modes were ordered from least damped to most strongly damped, i.e., in order of decreasing |λ_k|. The tradition of viewing the least damped modes as the dynamically most important ones is probably based on the fact that in unforced deterministic linear systems the least damped mode, if excited, eventually dominates the dynamics in the limit of long times. However, in the presence of stochastic forcing this need not be the case, as the weakly damped modes may not be sufficiently excited by the noise. The excitation σ_k therefore appears to be a more appropriate measure of dynamical importance.

Confidence intervals. Computing confidence intervals by means of (13)-(17) requires formulas for the derivatives of the estimated eigenvalues, of the eigenvectors, and of the state covariance matrix with respect to the parameters in A and C. For confidence regions to be meaningful in the first place, the quantities under consideration must be uniquely defined. In order to obtain a unique representation of an eigenvector S_{:k} = X_{:k} + i Y_{:k}, with X = Re S and Y = Im S, we impose the normalization conditions

X_{:k}^T X_{:k} + Y_{:k}^T Y_{:k} = 1,   X_{:k}^T Y_{:k} = 0,   Y_{:k}^T Y_{:k} < X_{:k}^T X_{:k}.   (42)

Differentiating these normalization conditions and the eigenvector-eigenvalue relation

A S_{:k} = λ_k S_{:k}

leads to

A Ṡ_{:k} + Ȧ S_{:k} = λ̇_k S_{:k} + λ_k Ṡ_{:k},   X_{:k}^T Ẋ_{:k} + Y_{:k}^T Ẏ_{:k} = 0,   X_{:k}^T Ẏ_{:k} + Y_{:k}^T Ẋ_{:k} = 0,

where the dot denotes the derivative with respect to a component θ_l of the parameter vector θ from Section 3. Using the spectral decomposition (29), these equations can be rearranged in the form

(Λ - λ_k I) S^{-1} Ṡ_{:k} - e^(k) λ̇_k = -S^{-1} Ȧ S_{:k},   (43)

X_{:k}^T Ẋ_{:k} + Y_{:k}^T Ẏ_{:k} = 0,   (44)

X_{:k}^T Ẏ_{:k} + Y_{:k}^T Ẋ_{:k} = 0,   (45)

where e^(k) is the kth column of an identity matrix. The kth component of (43) now gives

λ̇_k = (S^{-1} Ȧ S)_{kk}   (46)

as an explicit formula for the derivative of the eigenvalue. Writing the derivative of the eigenvector as

Ṡ_{:k} = S Z_{:k},   (47)

the remaining components of (43)-(45) take the form

(λ_j - λ_k) Z_{jk} = -(S^{-1} Ȧ S)_{jk}   for j ≠ k,

(X_{:k}^T X + Y_{:k}^T Y) Re Z_{:k} + (Y_{:k}^T X - X_{:k}^T Y) Im Z_{:k} = 0,

(X_{:k}^T Y + Y_{:k}^T X) Re Z_{:k} + (X_{:k}^T X - Y_{:k}^T Y) Im Z_{:k} = 0.

Assuming that all eigenvalues are distinct, these equations can be solved for Z and yield

Z_{jk} = (S^{-1} Ȧ S)_{jk} / (λ_k - λ_j)   for j ≠ k,   (48)

Re Z_{kk} = Σ_{l≠k} [ (X^T Y - Y^T X)_{kl} Im Z_{lk} - (X^T X + Y^T Y)_{kl} Re Z_{lk} ],   (49)

Im Z_{kk} = Σ_{l≠k} [ (Y^T Y - X^T X)_{kl} Im Z_{lk} - (X^T Y + Y^T X)_{kl} Re Z_{lk} ] / (X^T X - Y^T Y)_{kk}.   (50)

(In deriving (49) and (50) we used the normalization conditions (42) in the form (X^T X + Y^T Y)_{kk} = 1 and (X^T Y)_{kk} = 0.) In case of multiple eigenvalues, equations (43)-(45) cannot be solved uniquely for the derivative of the eigenmodes with respect to parameters in A and C. However, in this degenerate case the eigenvectors are no longer uniquely determined, and it is not meaningful to give confidence regions for them.

Writing the eigenvalues as λ_k = a_k + i b_k with respective real and imaginary parts a_k and b_k, we deduce from (46) that

ȧ_k = Re(S^{-1} Ȧ S)_{kk},   ḃ_k = Im(S^{-1} Ȧ S)_{kk}.

As is easily verified, the derivatives of the damping time scales (35) and periods (36) then take the form

τ̇_k = τ_k² (a_k ȧ_k + b_k ḃ_k) / (a_k² + b_k²)   (51)

and

Ṫ_k = -(T_k²/2π) Im(λ̇_k/λ_k) = (T_k²/2π) (b_k ȧ_k - a_k ḃ_k) / (a_k² + b_k²),   (52)

respectively. For the calculation of the derivative of the stationary covariance matrix Σ we use its representation (38) to find

Σ̇ = Ṡ Σ' S^† + S Σ̇' S^† + S Σ' Ṡ^†,   (53)

where the derivatives

Σ̇'_{jk} = [ Ċ'_{jk} (1 - λ_j λ_k*) + C'_{jk} (λ̇_j λ_k* + λ_j λ̇_k*) ] / (1 - λ_j λ_k*)²   (54)

are obtained by differentiation of (39). As above, S S^{-1} = I implies Ṡ S^{-1} + S (S^{-1})˙ = 0, hence

(S^{-1})˙ = -S^{-1} Ṡ S^{-1}   and   (S^{-†})˙ = -S^{-†} Ṡ^† S^{-†}.

Thus, taking the derivative of (32) gives

Ċ' = S^{-1} (Ċ - Ṡ S^{-1} C - C S^{-†} Ṡ^†) S^{-†}.   (55)

Equations (46)-(55) give easily programmable formulas for all the derivatives required. Note that for the spectral representation of the noise covariance matrix,

C = T Γ T^{-1},   Γ = Diag(γ_k),

one can invoke the same arguments as above to get confidence intervals for the eigenvectors T_{:k} and eigenvalues γ_k. For the derivatives one finds again (46)-(50), but with A replaced by C, Λ replaced by Γ, and S replaced by T. Since C is symmetric, both Γ and T are real.

5.2 Spectral decomposition of processes of arbitrary order

To generalize the results for the AR(1) case to processes of arbitrary order, we utilize the fact that an AR(p) process

v_ν = Σ_{l=1}^{p} A_l v_{ν-l} + ε_ν,   ε_ν = noise(C),

with v_ν ∈ R^m can be represented as an AR(1) process

ṽ_ν = Ã ṽ_{ν-1} + ε̃_ν

with augmented state and noise vectors

ṽ_ν = (v_ν; v_{ν-1}; ...; v_{ν-p+1}) ∈ R^{mp},   ε̃_ν = (ε_ν; 0; ...; 0) ∈ R^{mp},

and coefficient matrix

Ã = [ A_1  A_2  ...  A_{p-1}  A_p
       I    0   ...   0        0
       0    I   ...   0        0
       ...
       0    0   ...   I        0 ]  ∈ R^{mp×mp}.

Assuming that the coefficient matrix Ã of this AR(1) process is nondefective, one can again represent Ã as

Ã = S̃ Λ S̃^{-1},   Λ = Diag(λ_k),

where the columns S̃_{:k} of S̃ are the mp-dimensional (complex) eigenvectors of Ã. As above, for the coefficients ṽ_ν^(k) and ε̃_ν^(k) in the expansions of the augmented state vectors

ṽ_ν = Σ_{k=1}^{mp} ṽ_ν^(k) S̃_{:k}   (56)

and of the noise vectors

ε̃_ν = Σ_{k=1}^{mp} ε̃_ν^(k) S̃_{:k},   (57)

one obtains the decoupled dynamics with correlated noise

ṽ_ν^(k) = λ_k ṽ_{ν-1}^(k) + ε̃_ν^(k),   (58)

where

⟨ε̃_ν^(k) ε̃_ν^(l)*⟩ = C̃'_{kl},   C̃' = S̃^{-1} C̃ S̃^{-†},   C̃ = ⟨ε̃_ν ε̃_ν^†⟩ ∈ R^{mp×mp}.

Since the eigenvectors S̃_{:k} of the matrix Ã satisfy

S̃_{:k} = (λ_k^{p-1} S_{:k}; ...; λ_k S_{:k}; S_{:k}),

where S_{:k} is an m-dimensional (complex) vector, one finds from (56) that

v_{ν-p+j+1} = Σ_{k=1}^{mp} ṽ_ν^(k) λ_k^j S_{:k},   j = 0, ..., p-1,

and similarly from (57)

ε_ν = Σ_{k=1}^{mp} ε̃_ν^(k) λ_k^{p-1} S_{:k}.

Introducing

v_ν^(k) ≡ λ_k^{p-1} ṽ_ν^(k)   and   ε_ν^(k) ≡ λ_k^{p-1} ε̃_ν^(k),

one can now represent the original, m-dimensional state vectors v_ν and noise vectors ε_ν as linear combinations

v_ν = Σ_{k=1}^{mp} v_ν^(k) S_{:k}   and   ε_ν = Σ_{k=1}^{mp} ε_ν^(k) S_{:k}.

From (58) we conclude that the dynamics of the coefficients v_ν^(k) are governed by a system of uncoupled processes with correlated noise,

v_ν^(k) = λ_k v_{ν-1}^(k) + ε_ν^(k),   k = 1, ..., mp,

⟨ε_ν^(k) ε_ν^(l)*⟩ = (λ_k λ_l*)^{p-1} C̃'_{kl}.
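Numerically, the augmented coefficient matrix Ã is simply the block companion matrix of the AR(p) coefficients, and its eigendecomposition delivers the λ_k and (via the last m components of the eigenvectors) the modes S_{:k}. A minimal NumPy sketch follows; the function name arp_companion and the example matrices A1, A2 in the usage comment are illustrative assumptions, and this is not ARfit's armode:

```python
import numpy as np

def arp_companion(A_list):
    """Augmented AR(1) coefficient matrix A_tilde for an AR(p) model,
    given A_list = [A_1, ..., A_p] with each A_l of size m x m."""
    m = A_list[0].shape[0]
    p = len(A_list)
    A_tilde = np.zeros((m * p, m * p))
    A_tilde[:m, :] = np.hstack(A_list)        # first block row: (A_1 ... A_p)
    A_tilde[m:, :-m] = np.eye(m * (p - 1))    # shifted identity blocks below
    return A_tilde

# Usage sketch: lam, S_tilde = np.linalg.eig(arp_companion([A1, A2]))
# The m-dimensional modes are S = S_tilde[-m:, :], up to normalization,
# reflecting the structure of S_tilde_{:k} given above.
```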

As in the first order case, each eigenmode is associated with a damping time scale (35) and a period (36); the excitation

σ_k = ⟨|v_ν^(k)|²⟩ = |λ_k|^{2(p-1)} ⟨|ṽ_ν^(k)|²⟩

of an eigenmode is obtained in a similar fashion as in the first order case. Thus, the only difference to the AR(1) case is that the AR(p) process has a larger number of eigenmodes; they still span the space but are no longer linearly independent. (This is analogous to what happens in higher order linear differential equations with constant coefficients.)

Confidence intervals. Imposing the normalization conditions

X̃_{:k}^T X̃_{:k} + Ỹ_{:k}^T Ỹ_{:k} = 1,   X̃_{:k}^T Ỹ_{:k} = 0,   Ỹ_{:k}^T Ỹ_{:k} < X̃_{:k}^T X̃_{:k}   (59)

on the eigenvectors S̃_{:k} = X̃_{:k} + i Ỹ_{:k} of Ã, the derivative formulas (46)-(52) apply with S replaced by S̃ and A by Ã. This allows one to compute confidence intervals by means of (13)-(17).

The module armode is a straightforward implementation of the procedures laid out in this section. For AR models of arbitrary order, it performs the estimation of eigenmodes, of periods, of damping time scales, and of confidence intervals for the estimated spectral information.

6. PARAMETER ESTIMATES FOR LINEAR CONSTANT COEFFICIENT LANGEVIN EQUATIONS

The multivariate, zero-mean Ornstein-Uhlenbeck process v(t) is defined by the Langevin equation

d/dt v(t) = A_0 v(t) + ε(t),   ε(t) = WN(0, C_0),   (60)

where ε(t) is a Gaussian white noise process with zero mean and covariance matrix C_0 (see, e.g., Gardiner [1983]). We consider sampling from a stable Ornstein-Uhlenbeck process v(t) at discrete times t = νh (ν = 0, 1, ...). In the following, we shall show that the resulting time series is a realization of an AR(1) process, and this will allow us to use the machinery developed for the identification of AR(1) processes to estimate the parameter matrices A_0 and C_0 appearing in the Langevin equation above.

Theorem. A discrete sample

v_ν = v(νh)   (61)

from an Ornstein-Uhlenbeck process v(t), defined by (60), forms an AR(1) process

v_{ν+1} = A v_ν + ε_ν,   ε_ν = noise(C),   (62)

with coefficient matrix

A = e^{A_0 h}   (63)

and a constant noise covariance matrix C.

Proof. Inserting t = (ν+1)h and t_0 = νh into the integral equation

v(t) = e^{A_0 (t - t_0)} v(t_0) + ∫_{t_0}^{t} dt' e^{A_0 (t - t')} ε(t')   (64)

that is equivalent to (60) yields, after a change of the integration variable,

v((ν+1)h) = e^{A_0 h} v(νh) + ε_ν,   ε_ν = ∫_0^h dt' e^{A_0 t'} ε((ν+1)h - t'),

or, using (61),

v_{ν+1} = e^{A_0 h} v_ν + ε_ν.   (65)

Here, the noise terms

ε_ν = ∫_0^h dt' e^{A_0 t'} ε((ν+1)h - t')

are independent Gaussian noise vectors with covariance matrix

C = ⟨ε_ν ε_ν^T⟩ = ∫_0^h dt' ∫_0^h dt'' e^{A_0 t'} ⟨ε((ν+1)h - t') ε^T((ν+1)h - t'')⟩ e^{A_0^T t''},

which simplifies to the constant matrix

C = ∫_0^h dt' e^{A_0 t'} C_0 e^{A_0^T t'}.   (66)

Comparison of (65) with (62) now completes the proof.

Rewriting (63) in the equivalent form

A_0 = h^{-1} log A,   (67)

one recognizes that the eigenvectors of A and A_0 are the same and that the eigenvalues λ_k of A and λ_{0k} of A_0 are related by

λ_{0k} = h^{-1} log λ_k.   (68)

Therefore, if as in Section 5

A = S Λ S^{-1},   Λ = Diag(λ_k),

then

A_0 = S (h^{-1} log Λ) S^{-1}.   (69)

As in Section 5, one can associate a damping time scale τ_{0k} and a period T_{0k} with each eigenmode S_{:k}. For the Ornstein-Uhlenbeck process these read

τ_{0k} = -1/Re λ_{0k}

and

T_{0k} = 2π/|Im λ_{0k}|,

respectively. Substitution of (68) and comparison with (35) and (36) yields

T_{0k} = h T_k,   τ_{0k} = h τ_k.

Thus, damping time scales and periods for the Ornstein-Uhlenbeck and corresponding AR(1) process differ only by the scaling factor h, a result that could have been anticipated by physical reasoning.
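The relations (63) and (67) translate directly into code via the matrix exponential and the matrix logarithm. The sketch below uses SciPy's expm and the principal branch returned by logm; the function names are illustrative assumptions and this is not part of ARfit:

```python
import numpy as np
from scipy.linalg import expm, logm

def ou_to_ar1(A0, h):
    """AR(1) coefficient matrix (63) of an Ornstein-Uhlenbeck process sampled at step h."""
    return expm(A0 * h)

def ar1_to_ou(A, h):
    """Inverse relation (67); assumes the principal matrix logarithm is the
    physically relevant branch (may carry a negligible imaginary part numerically)."""
    return logm(A) / h
```

The eigenvalue relation (68) and the scalings T_0k = h T_k, tau_0k = h tau_k follow by applying np.log to the eigenvalues of A and rescaling.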

A computationally more convenient relation between C and C_0 than that given by (66) is obtained using the fact that the Ornstein-Uhlenbeck process and its corresponding AR(1) process must have the same stationary process covariance matrix. In order to find a relation between the stationary process covariance matrix and the noise covariance matrix, we calculate the dynamics of the process covariance matrix

Σ(t) = ⟨v(t) v(t)^T⟩ - ⟨v(t)⟩⟨v(t)⟩^T.

Since the process is non-anticipating, i.e., ⟨v(t_0) ε^T(t)⟩ = 0 for all t ≥ t_0, one finds from (64)

Σ(t) = e^{A_0 (t - t_0)} Σ(t_0) e^{A_0^T (t - t_0)} + ∫_{t_0}^{t} dt' e^{A_0 (t - t')} C_0 e^{A_0^T (t - t')}.

Differentiation with respect to t yields

dΣ(t)/dt = Σ(t) A_0^T + A_0 Σ(t) + C_0.

In the limit t → ∞, the process covariance matrix converges to the stationary covariance matrix Σ(∞), and its derivative converges to zero. This gives the well-known fluctuation-dissipation theorem

C_0 = -Σ(∞) A_0^T - A_0 Σ(∞).   (70)

In Section 5, we showed that the stationary limit

Σ = Σ(∞)

of the discretized process covariance matrices Σ(νh) for ν → ∞ can be computed from (38). It follows that after having identified the parameters of the AR(1) process one can find its stationary covariance matrix Σ and then use the fluctuation-dissipation theorem (70) to get the covariance matrix of the white noise term in the corresponding Langevin equation (60). Thus, using also (38) and (69), one obtains

C_0 = -(A_0 Σ + Σ A_0^T) = -h^{-1} S (Σ' (log Λ)^† + (log Λ) Σ') S^†,   (71)

which is easily computed once the spectral decomposition of the corresponding AR(1) process is available.

Confidence intervals. As in Section 5, the derivatives of A_0 and C_0 with respect to the parameters in A and C are needed in order to compute confidence intervals by means of (13)-(17). From (67), one sees that the derivative of A_0 is

Ȧ_0 = h^{-1} (S (log Λ) S^{-1})˙ = h^{-1} (Ṡ log Λ + S Λ^{-1} Λ̇ - S (log Λ) S^{-1} Ṡ) S^{-1},   (72)

where the derivative Ṡ of the eigenmodes is given by (47)-(50), and the derivative of the eigenvalues,

Λ̇ = Diag(λ̇_k),

follows from (46). Similarly, the derivative of the noise covariance matrix becomes

Ċ_0 = -(Ȧ_0 Σ + Σ Ȧ_0^T + A_0 Σ̇ + Σ̇ A_0^T)   (73)

with Ȧ_0 from (72) and Σ̇ from (53). Since the eigenmodes of the AR(1) process and the Ornstein-Uhlenbeck process are the same, so are the confidence intervals for the estimated modes. For the eigenvalues, we conclude from (68) that

λ̇_{0k} = h^{-1} λ̇_k / λ_k,

with λ̇_k again given by (46). Finally, one obtains confidence intervals for the damping time scales and periods associated with the modes of the Ornstein-Uhlenbeck process by scaling the damping time scales and periods of the corresponding AR(1) process with the sampling step size h.

Simulation of Ornstein-Uhlenbeck processes. Stability of a direct numerical simulation of the Langevin equation (60) often requires time steps that are much smaller than the desired sampling step size of the Ornstein-Uhlenbeck process. (See, e.g., Kloeden and Platen [1992] for numerical methods for stochastic differential equations.) In this case, the fact that a discrete sample from an Ornstein-Uhlenbeck process forms an AR(1) process is also an aid in the numerical simulation of Ornstein-Uhlenbeck processes. Instead of using expensive schemes to integrate the Langevin equation (60) directly, one simply simulates the corresponding AR(1) process (62) with coefficient matrix A from (63) and noise covariance matrix (cf. (37))

C = Σ - A Σ A^T,   (74)

with Σ = Σ(∞) satisfying the fluctuation-dissipation theorem (70). The noise covariance matrix C can be computed by first substituting

A_0 = S Λ_0 S^{-1},   Λ_0 = Diag(λ_{0k}),   (75)

into the fluctuation-dissipation theorem (70) and then multiplying by S^{-1} from the left and by S^{-†} from the right. Introducing

C_0' = S^{-1} C_0 S^{-†}   (76)

and

Σ' = S^{-1} Σ S^{-†},   (77)

the fluctuation-dissipation theorem (70) becomes

C_0' = -Σ' Λ_0^† - Λ_0 Σ',

which, using the Hermiticity of Σ', reads in components

(C_0')_{kl} = -Σ'_{kl} λ_{0l}* - λ_{0k} Σ'_{kl}.

Solving for Σ'_{kl} yields

Σ'_{kl} = -(C_0')_{kl} / (λ_{0k} + λ_{0l}*).   (78)

Now, computing (75), (76), (78), solving for the state covariance matrix Σ from (77), and finally inserting the Σ so obtained into (74) yields the needed noise covariance matrix C of the AR(1) process (62) corresponding to the Ornstein-Uhlenbeck process (60). As the desired length of the integration and the sampling step size h increase, this approach of simulating a sample from an Ornstein-Uhlenbeck process becomes more and more efficient compared with integrating the Langevin equation directly.
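The simulation procedure just described can be sketched compactly. In the code below, the stationary covariance Σ is obtained from SciPy's continuous Lyapunov solver in place of the spectral solution (75)-(78); the function name simulate_ou is an illustrative assumption, the process is assumed stable, and this is a sketch rather than a reference implementation:

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

def simulate_ou(A0, C0, h, N, rng=None):
    """Simulate v_0, ..., v_N of the Ornstein-Uhlenbeck process (60) at step h
    via the corresponding AR(1) process (62)."""
    rng = np.random.default_rng() if rng is None else rng
    A = expm(A0 * h)                                      # eq. (63)
    Sigma = solve_continuous_lyapunov(A0, -C0)            # A0 Sigma + Sigma A0^T = -C0, eq. (70)
    C = Sigma - A @ Sigma @ A.T                           # eq. (74)
    m = A0.shape[0]
    v = np.empty((N + 1, m))
    v[0] = rng.multivariate_normal(np.zeros(m), Sigma)    # start in the stationary distribution
    for nu in range(N):
        v[nu + 1] = A @ v[nu] + rng.multivariate_normal(np.zeros(m), C)
    return v
```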

7. NUMERICAL EXAMPLES

Consideration of asymptotic properties forms the theoretical basis for the least squares method and other estimation procedures such as the Yule-Walker scheme and Burg's algorithm. The practitioner, however, usually has only a limited data set available. As analytical results for finite samples are difficult to obtain, we shall now study the small-sample performance of the procedures described in the previous sections by resorting to Monte-Carlo experiments with time series of various lengths. In the following subsection, a bivariate process serves to illustrate both the consistency of the least squares estimates and the small-sample properties of the estimates for the parameters and derived spectral information. The subsequent Section 7.2 contains a performance comparison of the least squares procedure with other techniques in estimating a univariate AR(3) process. Finally, in Section 7.3, the performance of various order selection criteria is compared on a variety of simulated autoregressive processes. To ease the presentation of the results, we restricted ourselves in all these examples to low-dimensional data sets. However, we emphasize that the algorithms described above are efficient enough to handle large amounts of high-dimensional data.

7.1 A bivariate example

We generated time series data by simulation of the bivariate AR(2) process

v_ν = A_1 v_{ν-1} + A_2 v_{ν-2} + ε_ν,   ε_ν = WN(0, C),   ν = 1, ..., N,   (79)

with parameters

A_1 = [0.4, 1.2; 0.3, 0.7],   A_2 = [0.35, -0.3; -0.4, -0.5],   C = [1.00, 0.50; 0.50, 1.50].   (80)

Again, only processes with zero mean are considered, so that the intercept vector w in (1) is identically zero; it follows that the coefficient matrix A and the predictors u_ν in (3) and (4) must be replaced by

A = (A_1 ... A_p)

and

u_ν = (v_{ν-1}; ...; v_{ν-p}) ∈ R^n,   n = mp,

respectively. We used the QR factorization technique from Section 2.2 (as implemented in ar) to estimate the parameters θ = {A_1, A_2, C} from realizations of the AR(2) process (79) for various values of N. For all parameters θ_j, the approximate 68.3% confidence intervals |θ_j - θ̂_j| ≤ δ(θ_j) were computed using the formulas (18) and


More information

Linear Algebra: Linear Systems and Matrices - Quadratic Forms and Deniteness - Eigenvalues and Markov Chains

Linear Algebra: Linear Systems and Matrices - Quadratic Forms and Deniteness - Eigenvalues and Markov Chains Linear Algebra: Linear Systems and Matrices - Quadratic Forms and Deniteness - Eigenvalues and Markov Chains Joshua Wilde, revised by Isabel Tecu, Takeshi Suzuki and María José Boccardi August 3, 3 Systems

More information

Time Series Models and Inference. James L. Powell Department of Economics University of California, Berkeley

Time Series Models and Inference. James L. Powell Department of Economics University of California, Berkeley Time Series Models and Inference James L. Powell Department of Economics University of California, Berkeley Overview In contrast to the classical linear regression model, in which the components of the

More information

Chapter 30 Minimality and Stability of Interconnected Systems 30.1 Introduction: Relating I/O and State-Space Properties We have already seen in Chapt

Chapter 30 Minimality and Stability of Interconnected Systems 30.1 Introduction: Relating I/O and State-Space Properties We have already seen in Chapt Lectures on Dynamic Systems and Control Mohammed Dahleh Munther A. Dahleh George Verghese Department of Electrical Engineering and Computer Science Massachuasetts Institute of Technology 1 1 c Chapter

More information

Performance Comparison of Two Implementations of the Leaky. LMS Adaptive Filter. Scott C. Douglas. University of Utah. Salt Lake City, Utah 84112

Performance Comparison of Two Implementations of the Leaky. LMS Adaptive Filter. Scott C. Douglas. University of Utah. Salt Lake City, Utah 84112 Performance Comparison of Two Implementations of the Leaky LMS Adaptive Filter Scott C. Douglas Department of Electrical Engineering University of Utah Salt Lake City, Utah 8411 Abstract{ The leaky LMS

More information

G METHOD IN ACTION: FROM EXACT SAMPLING TO APPROXIMATE ONE

G METHOD IN ACTION: FROM EXACT SAMPLING TO APPROXIMATE ONE G METHOD IN ACTION: FROM EXACT SAMPLING TO APPROXIMATE ONE UDREA PÄUN Communicated by Marius Iosifescu The main contribution of this work is the unication, by G method using Markov chains, therefore, a

More information

Learning with Ensembles: How. over-tting can be useful. Anders Krogh Copenhagen, Denmark. Abstract

Learning with Ensembles: How. over-tting can be useful. Anders Krogh Copenhagen, Denmark. Abstract Published in: Advances in Neural Information Processing Systems 8, D S Touretzky, M C Mozer, and M E Hasselmo (eds.), MIT Press, Cambridge, MA, pages 190-196, 1996. Learning with Ensembles: How over-tting

More information

Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) 1.1 The Formal Denition of a Vector Space

Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) 1.1 The Formal Denition of a Vector Space Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) Contents 1 Vector Spaces 1 1.1 The Formal Denition of a Vector Space.................................. 1 1.2 Subspaces...................................................

More information

Chapter 3 Least Squares Solution of y = A x 3.1 Introduction We turn to a problem that is dual to the overconstrained estimation problems considered s

Chapter 3 Least Squares Solution of y = A x 3.1 Introduction We turn to a problem that is dual to the overconstrained estimation problems considered s Lectures on Dynamic Systems and Control Mohammed Dahleh Munther A. Dahleh George Verghese Department of Electrical Engineering and Computer Science Massachuasetts Institute of Technology 1 1 c Chapter

More information

Next is material on matrix rank. Please see the handout

Next is material on matrix rank. Please see the handout B90.330 / C.005 NOTES for Wednesday 0.APR.7 Suppose that the model is β + ε, but ε does not have the desired variance matrix. Say that ε is normal, but Var(ε) σ W. The form of W is W w 0 0 0 0 0 0 w 0

More information

SYSTEM RECONSTRUCTION FROM SELECTED HOS REGIONS. Haralambos Pozidis and Athina P. Petropulu. Drexel University, Philadelphia, PA 19104

SYSTEM RECONSTRUCTION FROM SELECTED HOS REGIONS. Haralambos Pozidis and Athina P. Petropulu. Drexel University, Philadelphia, PA 19104 SYSTEM RECOSTRUCTIO FROM SELECTED HOS REGIOS Haralambos Pozidis and Athina P. Petropulu Electrical and Computer Engineering Department Drexel University, Philadelphia, PA 94 Tel. (25) 895-2358 Fax. (25)

More information

Inversion Base Height. Daggot Pressure Gradient Visibility (miles)

Inversion Base Height. Daggot Pressure Gradient Visibility (miles) Stanford University June 2, 1998 Bayesian Backtting: 1 Bayesian Backtting Trevor Hastie Stanford University Rob Tibshirani University of Toronto Email: trevor@stat.stanford.edu Ftp: stat.stanford.edu:

More information

R. Schaback. numerical method is proposed which rst minimizes each f j separately. and then applies a penalty strategy to gradually force the

R. Schaback. numerical method is proposed which rst minimizes each f j separately. and then applies a penalty strategy to gradually force the A Multi{Parameter Method for Nonlinear Least{Squares Approximation R Schaback Abstract P For discrete nonlinear least-squares approximation problems f 2 (x)! min for m smooth functions f : IR n! IR a m

More information

NORTHERN ILLINOIS UNIVERSITY

NORTHERN ILLINOIS UNIVERSITY ABSTRACT Name: Santosh Kumar Mohanty Department: Mathematical Sciences Title: Ecient Algorithms for Eigenspace Decompositions of Toeplitz Matrices Major: Mathematical Sciences Degree: Doctor of Philosophy

More information

Matrices and Vectors. Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A =

Matrices and Vectors. Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A = 30 MATHEMATICS REVIEW G A.1.1 Matrices and Vectors Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A = a 11 a 12... a 1N a 21 a 22... a 2N...... a M1 a M2... a MN A matrix can

More information

Outline Introduction: Problem Description Diculties Algebraic Structure: Algebraic Varieties Rank Decient Toeplitz Matrices Constructing Lower Rank St

Outline Introduction: Problem Description Diculties Algebraic Structure: Algebraic Varieties Rank Decient Toeplitz Matrices Constructing Lower Rank St Structured Lower Rank Approximation by Moody T. Chu (NCSU) joint with Robert E. Funderlic (NCSU) and Robert J. Plemmons (Wake Forest) March 5, 1998 Outline Introduction: Problem Description Diculties Algebraic

More information

Package mar. R topics documented: February 20, Title Multivariate AutoRegressive analysis Version Author S. M. Barbosa

Package mar. R topics documented: February 20, Title Multivariate AutoRegressive analysis Version Author S. M. Barbosa Title Multivariate AutoRegressive analysis Version 1.1-2 Author S. M. Barbosa Package mar February 20, 2015 R functions for multivariate autoregressive analysis Depends MASS Maintainer S. M. Barbosa

More information

Diagonalization by a unitary similarity transformation

Diagonalization by a unitary similarity transformation Physics 116A Winter 2011 Diagonalization by a unitary similarity transformation In these notes, we will always assume that the vector space V is a complex n-dimensional space 1 Introduction A semi-simple

More information

The Bias-Variance dilemma of the Monte Carlo. method. Technion - Israel Institute of Technology, Technion City, Haifa 32000, Israel

The Bias-Variance dilemma of the Monte Carlo. method. Technion - Israel Institute of Technology, Technion City, Haifa 32000, Israel The Bias-Variance dilemma of the Monte Carlo method Zlochin Mark 1 and Yoram Baram 1 Technion - Israel Institute of Technology, Technion City, Haifa 32000, Israel fzmark,baramg@cs.technion.ac.il Abstract.

More information

Math Camp Notes: Linear Algebra I

Math Camp Notes: Linear Algebra I Math Camp Notes: Linear Algebra I Basic Matrix Operations and Properties Consider two n m matrices: a a m A = a n a nm Then the basic matrix operations are as follows: a + b a m + b m A + B = a n + b n

More information

Peter Deuhard. for Symmetric Indenite Linear Systems

Peter Deuhard. for Symmetric Indenite Linear Systems Peter Deuhard A Study of Lanczos{Type Iterations for Symmetric Indenite Linear Systems Preprint SC 93{6 (March 993) Contents 0. Introduction. Basic Recursive Structure 2. Algorithm Design Principles 7

More information

11.0 Introduction. An N N matrix A is said to have an eigenvector x and corresponding eigenvalue λ if. A x = λx (11.0.1)

11.0 Introduction. An N N matrix A is said to have an eigenvector x and corresponding eigenvalue λ if. A x = λx (11.0.1) Chapter 11. 11.0 Introduction Eigensystems An N N matrix A is said to have an eigenvector x and corresponding eigenvalue λ if A x = λx (11.0.1) Obviously any multiple of an eigenvector x will also be an

More information

1 Matrices and Systems of Linear Equations

1 Matrices and Systems of Linear Equations Linear Algebra (part ) : Matrices and Systems of Linear Equations (by Evan Dummit, 207, v 260) Contents Matrices and Systems of Linear Equations Systems of Linear Equations Elimination, Matrix Formulation

More information

Canonical lossless state-space systems: staircase forms and the Schur algorithm

Canonical lossless state-space systems: staircase forms and the Schur algorithm Canonical lossless state-space systems: staircase forms and the Schur algorithm Ralf L.M. Peeters Bernard Hanzon Martine Olivi Dept. Mathematics School of Mathematical Sciences Projet APICS Universiteit

More information

Linear Algebra: Characteristic Value Problem

Linear Algebra: Characteristic Value Problem Linear Algebra: Characteristic Value Problem . The Characteristic Value Problem Let < be the set of real numbers and { be the set of complex numbers. Given an n n real matrix A; does there exist a number

More information

1 Outline Part I: Linear Programming (LP) Interior-Point Approach 1. Simplex Approach Comparison Part II: Semidenite Programming (SDP) Concludin

1 Outline Part I: Linear Programming (LP) Interior-Point Approach 1. Simplex Approach Comparison Part II: Semidenite Programming (SDP) Concludin Sensitivity Analysis in LP and SDP Using Interior-Point Methods E. Alper Yldrm School of Operations Research and Industrial Engineering Cornell University Ithaca, NY joint with Michael J. Todd INFORMS

More information

Page 52. Lecture 3: Inner Product Spaces Dual Spaces, Dirac Notation, and Adjoints Date Revised: 2008/10/03 Date Given: 2008/10/03

Page 52. Lecture 3: Inner Product Spaces Dual Spaces, Dirac Notation, and Adjoints Date Revised: 2008/10/03 Date Given: 2008/10/03 Page 5 Lecture : Inner Product Spaces Dual Spaces, Dirac Notation, and Adjoints Date Revised: 008/10/0 Date Given: 008/10/0 Inner Product Spaces: Definitions Section. Mathematical Preliminaries: Inner

More information

Lecture 2: Univariate Time Series

Lecture 2: Univariate Time Series Lecture 2: Univariate Time Series Analysis: Conditional and Unconditional Densities, Stationarity, ARMA Processes Prof. Massimo Guidolin 20192 Financial Econometrics Spring/Winter 2017 Overview Motivation:

More information

Numerical Analysis Lecture Notes

Numerical Analysis Lecture Notes Numerical Analysis Lecture Notes Peter J Olver 8 Numerical Computation of Eigenvalues In this part, we discuss some practical methods for computing eigenvalues and eigenvectors of matrices Needless to

More information

Math 471 (Numerical methods) Chapter 3 (second half). System of equations

Math 471 (Numerical methods) Chapter 3 (second half). System of equations Math 47 (Numerical methods) Chapter 3 (second half). System of equations Overlap 3.5 3.8 of Bradie 3.5 LU factorization w/o pivoting. Motivation: ( ) A I Gaussian Elimination (U L ) where U is upper triangular

More information

1 Data Arrays and Decompositions

1 Data Arrays and Decompositions 1 Data Arrays and Decompositions 1.1 Variance Matrices and Eigenstructure Consider a p p positive definite and symmetric matrix V - a model parameter or a sample variance matrix. The eigenstructure is

More information

Foundations of Matrix Analysis

Foundations of Matrix Analysis 1 Foundations of Matrix Analysis In this chapter we recall the basic elements of linear algebra which will be employed in the remainder of the text For most of the proofs as well as for the details, the

More information

Economics 472. Lecture 10. where we will refer to y t as a m-vector of endogenous variables, x t as a q-vector of exogenous variables,

Economics 472. Lecture 10. where we will refer to y t as a m-vector of endogenous variables, x t as a q-vector of exogenous variables, University of Illinois Fall 998 Department of Economics Roger Koenker Economics 472 Lecture Introduction to Dynamic Simultaneous Equation Models In this lecture we will introduce some simple dynamic simultaneous

More information

UNIVERSITY OF CALIFORNIA, SAN DIEGO DEPARTMENT OF ECONOMICS

UNIVERSITY OF CALIFORNIA, SAN DIEGO DEPARTMENT OF ECONOMICS 2-7 UNIVERSITY OF LIFORNI, SN DIEGO DEPRTMENT OF EONOMIS THE JOHNSEN-GRNGER REPRESENTTION THEOREM: N EXPLIIT EXPRESSION FOR I() PROESSES Y PETER REINHRD HNSEN DISUSSION PPER 2-7 JULY 2 The Johansen-Granger

More information

Math 423/533: The Main Theoretical Topics

Math 423/533: The Main Theoretical Topics Math 423/533: The Main Theoretical Topics Notation sample size n, data index i number of predictors, p (p = 2 for simple linear regression) y i : response for individual i x i = (x i1,..., x ip ) (1 p)

More information

Contents. 6 Systems of First-Order Linear Dierential Equations. 6.1 General Theory of (First-Order) Linear Systems

Contents. 6 Systems of First-Order Linear Dierential Equations. 6.1 General Theory of (First-Order) Linear Systems Dierential Equations (part 3): Systems of First-Order Dierential Equations (by Evan Dummit, 26, v 2) Contents 6 Systems of First-Order Linear Dierential Equations 6 General Theory of (First-Order) Linear

More information

Institute for Advanced Computer Studies. Department of Computer Science. On the Perturbation of. LU and Cholesky Factors. G. W.

Institute for Advanced Computer Studies. Department of Computer Science. On the Perturbation of. LU and Cholesky Factors. G. W. University of Maryland Institute for Advanced Computer Studies Department of Computer Science College Park TR{95{93 TR{3535 On the Perturbation of LU and Cholesky Factors G. W. Stewart y October, 1995

More information

Inferences on a Normal Covariance Matrix and Generalized Variance with Monotone Missing Data

Inferences on a Normal Covariance Matrix and Generalized Variance with Monotone Missing Data Journal of Multivariate Analysis 78, 6282 (2001) doi:10.1006jmva.2000.1939, available online at http:www.idealibrary.com on Inferences on a Normal Covariance Matrix and Generalized Variance with Monotone

More information

A6523 Modeling, Inference, and Mining Jim Cordes, Cornell University

A6523 Modeling, Inference, and Mining Jim Cordes, Cornell University A6523 Modeling, Inference, and Mining Jim Cordes, Cornell University Lecture 19 Modeling Topics plan: Modeling (linear/non- linear least squares) Bayesian inference Bayesian approaches to spectral esbmabon;

More information

Complexity of the Havas, Majewski, Matthews LLL. Mathematisch Instituut, Universiteit Utrecht. P.O. Box

Complexity of the Havas, Majewski, Matthews LLL. Mathematisch Instituut, Universiteit Utrecht. P.O. Box J. Symbolic Computation (2000) 11, 1{000 Complexity of the Havas, Majewski, Matthews LLL Hermite Normal Form algorithm WILBERD VAN DER KALLEN Mathematisch Instituut, Universiteit Utrecht P.O. Box 80.010

More information

Difference equations. Definitions: A difference equation takes the general form. x t f x t 1,,x t m.

Difference equations. Definitions: A difference equation takes the general form. x t f x t 1,,x t m. Difference equations Definitions: A difference equation takes the general form x t fx t 1,x t 2, defining the current value of a variable x as a function of previously generated values. A finite order

More information

New Introduction to Multiple Time Series Analysis

New Introduction to Multiple Time Series Analysis Helmut Lütkepohl New Introduction to Multiple Time Series Analysis With 49 Figures and 36 Tables Springer Contents 1 Introduction 1 1.1 Objectives of Analyzing Multiple Time Series 1 1.2 Some Basics 2

More information

Structure in Data. A major objective in data analysis is to identify interesting features or structure in the data.

Structure in Data. A major objective in data analysis is to identify interesting features or structure in the data. Structure in Data A major objective in data analysis is to identify interesting features or structure in the data. The graphical methods are very useful in discovering structure. There are basically two

More information

1 Vectors. Notes for Bindel, Spring 2017 Numerical Analysis (CS 4220)

1 Vectors. Notes for Bindel, Spring 2017 Numerical Analysis (CS 4220) Notes for 2017-01-30 Most of mathematics is best learned by doing. Linear algebra is no exception. You have had a previous class in which you learned the basics of linear algebra, and you will have plenty

More information

Multivariate Time Series: VAR(p) Processes and Models

Multivariate Time Series: VAR(p) Processes and Models Multivariate Time Series: VAR(p) Processes and Models A VAR(p) model, for p > 0 is X t = φ 0 + Φ 1 X t 1 + + Φ p X t p + A t, where X t, φ 0, and X t i are k-vectors, Φ 1,..., Φ p are k k matrices, with

More information

ROYAL INSTITUTE OF TECHNOLOGY KUNGL TEKNISKA HÖGSKOLAN. Department of Signals, Sensors & Systems

ROYAL INSTITUTE OF TECHNOLOGY KUNGL TEKNISKA HÖGSKOLAN. Department of Signals, Sensors & Systems The Evil of Supereciency P. Stoica B. Ottersten To appear as a Fast Communication in Signal Processing IR-S3-SB-9633 ROYAL INSTITUTE OF TECHNOLOGY Department of Signals, Sensors & Systems Signal Processing

More information

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra. DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

More information

Class notes: Approximation

Class notes: Approximation Class notes: Approximation Introduction Vector spaces, linear independence, subspace The goal of Numerical Analysis is to compute approximations We want to approximate eg numbers in R or C vectors in R

More information

AM 205: lecture 6. Last time: finished the data fitting topic Today s lecture: numerical linear algebra, LU factorization

AM 205: lecture 6. Last time: finished the data fitting topic Today s lecture: numerical linear algebra, LU factorization AM 205: lecture 6 Last time: finished the data fitting topic Today s lecture: numerical linear algebra, LU factorization Unit II: Numerical Linear Algebra Motivation Almost everything in Scientific Computing

More information

σ(a) = a N (x; 0, 1 2 ) dx. σ(a) = Φ(a) =

σ(a) = a N (x; 0, 1 2 ) dx. σ(a) = Φ(a) = Until now we have always worked with likelihoods and prior distributions that were conjugate to each other, allowing the computation of the posterior distribution to be done in closed form. Unfortunately,

More information

Order Results for Mono-Implicit Runge-Kutta Methods. K. Burrage, F. H. Chipman, and P. H. Muir

Order Results for Mono-Implicit Runge-Kutta Methods. K. Burrage, F. H. Chipman, and P. H. Muir Order Results for Mono-Implicit Runge-Kutta Methods K urrage, F H hipman, and P H Muir bstract The mono-implicit Runge-Kutta methods are a subclass of the wellknown family of implicit Runge-Kutta methods

More information

EIGENVALUES AND EIGENVECTORS 3

EIGENVALUES AND EIGENVECTORS 3 EIGENVALUES AND EIGENVECTORS 3 1. Motivation 1.1. Diagonal matrices. Perhaps the simplest type of linear transformations are those whose matrix is diagonal (in some basis). Consider for example the matrices

More information

Lecture 7 MIMO Communica2ons

Lecture 7 MIMO Communica2ons Wireless Communications Lecture 7 MIMO Communica2ons Prof. Chun-Hung Liu Dept. of Electrical and Computer Engineering National Chiao Tung University Fall 2014 1 Outline MIMO Communications (Chapter 10

More information

Computational Methods CMSC/AMSC/MAPL 460. Eigenvalues and Eigenvectors. Ramani Duraiswami, Dept. of Computer Science

Computational Methods CMSC/AMSC/MAPL 460. Eigenvalues and Eigenvectors. Ramani Duraiswami, Dept. of Computer Science Computational Methods CMSC/AMSC/MAPL 460 Eigenvalues and Eigenvectors Ramani Duraiswami, Dept. of Computer Science Eigen Values of a Matrix Recap: A N N matrix A has an eigenvector x (non-zero) with corresponding

More information

Managing Uncertainty

Managing Uncertainty Managing Uncertainty Bayesian Linear Regression and Kalman Filter December 4, 2017 Objectives The goal of this lab is multiple: 1. First it is a reminder of some central elementary notions of Bayesian

More information

14 Singular Value Decomposition

14 Singular Value Decomposition 14 Singular Value Decomposition For any high-dimensional data analysis, one s first thought should often be: can I use an SVD? The singular value decomposition is an invaluable analysis tool for dealing

More information

User Guide for Hermir version 0.9: Toolbox for Synthesis of Multivariate Stationary Gaussian and non-gaussian Series

User Guide for Hermir version 0.9: Toolbox for Synthesis of Multivariate Stationary Gaussian and non-gaussian Series User Guide for Hermir version 0.9: Toolbox for Synthesis of Multivariate Stationary Gaussian and non-gaussian Series Hannes Helgason, Vladas Pipiras, and Patrice Abry June 2, 2011 Contents 1 Organization

More information

Elementary linear algebra

Elementary linear algebra Chapter 1 Elementary linear algebra 1.1 Vector spaces Vector spaces owe their importance to the fact that so many models arising in the solutions of specific problems turn out to be vector spaces. The

More information

Elements of Multivariate Time Series Analysis

Elements of Multivariate Time Series Analysis Gregory C. Reinsel Elements of Multivariate Time Series Analysis Second Edition With 14 Figures Springer Contents Preface to the Second Edition Preface to the First Edition vii ix 1. Vector Time Series

More information

1. Introduction Let f(x), x 2 R d, be a real function of d variables, and let the values f(x i ), i = 1; 2; : : : ; n, be given, where the points x i,

1. Introduction Let f(x), x 2 R d, be a real function of d variables, and let the values f(x i ), i = 1; 2; : : : ; n, be given, where the points x i, DAMTP 2001/NA11 Radial basis function methods for interpolation to functions of many variables 1 M.J.D. Powell Abstract: A review of interpolation to values of a function f(x), x 2 R d, by radial basis

More information

Stat 5101 Lecture Notes

Stat 5101 Lecture Notes Stat 5101 Lecture Notes Charles J. Geyer Copyright 1998, 1999, 2000, 2001 by Charles J. Geyer May 7, 2001 ii Stat 5101 (Geyer) Course Notes Contents 1 Random Variables and Change of Variables 1 1.1 Random

More information

Reduced rank regression in cointegrated models

Reduced rank regression in cointegrated models Journal of Econometrics 06 (2002) 203 26 www.elsevier.com/locate/econbase Reduced rank regression in cointegrated models.w. Anderson Department of Statistics, Stanford University, Stanford, CA 94305-4065,

More information

R = µ + Bf Arbitrage Pricing Model, APM

R = µ + Bf Arbitrage Pricing Model, APM 4.2 Arbitrage Pricing Model, APM Empirical evidence indicates that the CAPM beta does not completely explain the cross section of expected asset returns. This suggests that additional factors may be required.

More information

Institute for Advanced Computer Studies. Department of Computer Science. Two Algorithms for the The Ecient Computation of

Institute for Advanced Computer Studies. Department of Computer Science. Two Algorithms for the The Ecient Computation of University of Maryland Institute for Advanced Computer Studies Department of Computer Science College Park TR{98{12 TR{3875 Two Algorithms for the The Ecient Computation of Truncated Pivoted QR Approximations

More information

Chapter 1: A Brief Review of Maximum Likelihood, GMM, and Numerical Tools. Joan Llull. Microeconometrics IDEA PhD Program

Chapter 1: A Brief Review of Maximum Likelihood, GMM, and Numerical Tools. Joan Llull. Microeconometrics IDEA PhD Program Chapter 1: A Brief Review of Maximum Likelihood, GMM, and Numerical Tools Joan Llull Microeconometrics IDEA PhD Program Maximum Likelihood Chapter 1. A Brief Review of Maximum Likelihood, GMM, and Numerical

More information

Multivariate autoregressive models as tools for UT1-UTC predictions

Multivariate autoregressive models as tools for UT1-UTC predictions Multivariate autoregressive models as tools for UT1-UTC predictions Tomasz Niedzielski 1,2, Wiesław Kosek 1 1 Space Research Centre, Polish Academy of Sciences, Poland 2 Oceanlab, University of Aberdeen,

More information

INRIA Rh^one-Alpes. Abstract. Friedman (1989) has proposed a regularization technique (RDA) of discriminant analysis

INRIA Rh^one-Alpes. Abstract. Friedman (1989) has proposed a regularization technique (RDA) of discriminant analysis Regularized Gaussian Discriminant Analysis through Eigenvalue Decomposition Halima Bensmail Universite Paris 6 Gilles Celeux INRIA Rh^one-Alpes Abstract Friedman (1989) has proposed a regularization technique

More information

OPTIMAL PERTURBATION OF UNCERTAIN SYSTEMS

OPTIMAL PERTURBATION OF UNCERTAIN SYSTEMS Stochastics and Dynamics, Vol. 2, No. 3 (22 395 42 c World Scientific Publishing Company OPTIMAL PERTURBATION OF UNCERTAIN SYSTEMS Stoch. Dyn. 22.2:395-42. Downloaded from www.worldscientific.com by HARVARD

More information

Review Questions REVIEW QUESTIONS 71

Review Questions REVIEW QUESTIONS 71 REVIEW QUESTIONS 71 MATLAB, is [42]. For a comprehensive treatment of error analysis and perturbation theory for linear systems and many other problems in linear algebra, see [126, 241]. An overview of

More information

THE PROBLEMS OF ROBUST LPC PARAMETRIZATION FOR. Petr Pollak & Pavel Sovka. Czech Technical University of Prague

THE PROBLEMS OF ROBUST LPC PARAMETRIZATION FOR. Petr Pollak & Pavel Sovka. Czech Technical University of Prague THE PROBLEMS OF ROBUST LPC PARAMETRIZATION FOR SPEECH CODING Petr Polla & Pavel Sova Czech Technical University of Prague CVUT FEL K, 66 7 Praha 6, Czech Republic E-mail: polla@noel.feld.cvut.cz Abstract

More information

Numerical Methods - Numerical Linear Algebra

Numerical Methods - Numerical Linear Algebra Numerical Methods - Numerical Linear Algebra Y. K. Goh Universiti Tunku Abdul Rahman 2013 Y. K. Goh (UTAR) Numerical Methods - Numerical Linear Algebra I 2013 1 / 62 Outline 1 Motivation 2 Solving Linear

More information

Analysis of Incomplete Climate Data: Estimation of Mean Values and Covariance Matrices and Imputation of Missing Values

Analysis of Incomplete Climate Data: Estimation of Mean Values and Covariance Matrices and Imputation of Missing Values 1MARCH 001 SCHNEIDER 853 Analysis of Incomplete Climate Data: Estimation of Mean Values and Covariance Matrices and Imputation of Missing Values APIO SCHNEIDER Atmospheric and Oceanic Sciences Program,

More information

DYNAMIC AND COMPROMISE FACTOR ANALYSIS

DYNAMIC AND COMPROMISE FACTOR ANALYSIS DYNAMIC AND COMPROMISE FACTOR ANALYSIS Marianna Bolla Budapest University of Technology and Economics marib@math.bme.hu Many parts are joint work with Gy. Michaletzky, Loránd Eötvös University and G. Tusnády,

More information

Chapter 3 Transformations

Chapter 3 Transformations Chapter 3 Transformations An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Linear Transformations A function is called a linear transformation if 1. for every and 2. for every If we fix the bases

More information

Latin Hypercube Sampling with Multidimensional Uniformity

Latin Hypercube Sampling with Multidimensional Uniformity Latin Hypercube Sampling with Multidimensional Uniformity Jared L. Deutsch and Clayton V. Deutsch Complex geostatistical models can only be realized a limited number of times due to large computational

More information