HIGHER ORDER CUMULANTS OF RANDOM VECTORS AND APPLICATIONS TO STATISTICAL INFERENCE AND TIME SERIES

S. RAO JAMMALAMADAKA, T. SUBBA RAO, AND GYÖRGY TERDIK

Abstract. This paper provides a unified and comprehensive approach that is useful in deriving expressions for higher order cumulants of random vectors. The use of this methodology is then illustrated in three diverse and novel contexts, namely: (i) in obtaining a lower bound (Bhattacharya bound) for the variance-covariance matrix of a vector of unbiased estimators where the density depends on several parameters, (ii) in studying the asymptotic theory of multivariate statistics when the population is not necessarily Gaussian, and finally (iii) in the study of multivariate nonlinear time series models and in obtaining higher order cumulant spectra. The approach depends on expanding the characteristic functions and cumulant generating functions in terms of Kronecker products of differential operators. Our objective here is to derive such expressions using only elementary calculus of several variables and also to highlight some important applications in statistics.

1. Introduction and Review

It is well known that cumulants of order greater than two are zero for random variables which are Gaussian. In view of this, higher order cumulants are often used in testing for Gaussianity and multivariate Gaussianity, as well as to prove classical limit theorems. They are also used in the asymptotic theory of statistics, for instance in Edgeworth series expansions.

Consider a scalar random variable $X$ and assume that all its moments $\mu_j = E(X^j)$, $j \ge 1$, exist. Let the characteristic function of $X$ be denoted by $\varphi_X(\lambda)$; it has the series expansion

(1.1)  $\varphi_X(\lambda) = E(e^{i\lambda X}) = 1 + \sum_{j=1}^{\infty} \frac{(i\lambda)^j}{j!}\,\mu_j, \qquad \lambda \in \mathbb{R}.$

From (1.1) we observe that $(-i)^j\, d^j \varphi_X(\lambda)/d\lambda^j \big|_{\lambda=0} = \mu_j$. In other words, the $j$th derivative of the Taylor series expansion of $\varphi_X(\lambda)$ evaluated at $\lambda = 0$ gives the $j$th moment. The cumulant generating function $\psi_X(\lambda)$ is defined as (see e.g. Leonov and Shiryaev (1959))

(1.2)  $\psi_X(\lambda) = \ln \varphi_X(\lambda) = \sum_{j=1}^{\infty} \frac{(i\lambda)^j}{j!}\,\kappa_j,$

where $\kappa_j$ is called the $j$th cumulant of the random variable $X$. As before, we see that $\kappa_j = (-i)^j\, d^j \psi_X(\lambda)/d\lambda^j \big|_{\lambda=0}$. Comparing (1.1) and (1.2), one can write the cumulants in terms of moments and vice versa; for example, $\kappa_1 = \mu_1$, $\kappa_2 = \mu_2 - \mu_1^2$, etc. Now suppose the random variable $X$ is normal with mean $\mu$ and variance $\sigma^2$; then $\varphi_X(\lambda) = \exp(i\mu\lambda - \sigma^2\lambda^2/2)$, which implies $\kappa_j = 0$ for all $j \ge 3$.

We now consider generalizing the above results to the case when $X$ is a $d$-dimensional random vector. The definition of the joint moments and cumulants of the random vector $X$ requires a Taylor series expansion of a

1991 Mathematics Subject Classification. Primary 6E7, 6E; Secondary 6H, 6E5.
Key words and phrases. Cumulants for multivariate variables, cumulants for likelihood functions, Bhattacharya lower bound, Taylor series expansion, multivariate time series.
This research of Gy. Terdik was partially supported by the Hungarian NSF OTKA No. T4767 and the NATO fellowship 38/. This paper is in final form and no version of it will be submitted for publication elsewhere.
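To make the moment-to-cumulant conversion concrete, the following minimal sketch (Python with NumPy; the helper functions are ours, not part of the paper) estimates the first four raw moments from a Gaussian sample and converts them to cumulants via the standard relations, reproducing $\kappa_1 = \mu$, $\kappa_2 = \sigma^2$ and $\kappa_3 \approx \kappa_4 \approx 0$:

```python
import numpy as np

rng = np.random.default_rng(0)

def raw_moments(x, jmax=4):
    """Sample raw moments mu_j = E(X^j), j = 1..jmax."""
    return [np.mean(x**j) for j in range(1, jmax + 1)]

def cumulants_from_moments(mu):
    """First four cumulants via the standard moment-cumulant relations."""
    m1, m2, m3, m4 = mu
    k1 = m1
    k2 = m2 - m1**2
    k3 = m3 - 3*m2*m1 + 2*m1**3
    k4 = m4 - 4*m3*m1 - 3*m2**2 + 12*m2*m1**2 - 6*m1**4
    return k1, k2, k3, k4

x = rng.normal(loc=1.0, scale=2.0, size=10**6)
print(cumulants_from_moments(raw_moments(x)))  # approx. (1, 4, 0, 0)
```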

function in several variables and also its partial derivatives in these variables, and these are similar to (1.1) and (1.2). Though the expansions may be considered straightforward generalizations, the methodology and the mathematical notation get quite cumbersome when dealing with the derivatives of characteristic functions and cumulant generating functions of random vectors. However, we need such expansions in studying the asymptotic theory in classical multivariate analysis as well as in multivariate nonlinear time series (see Subba Rao and Wong (1999)). A unified and streamlined methodology for obtaining such expressions is desirable, and that is what we attempt to do here.

As an example, consider a random sample $(X_1, X_2, \ldots, X_n)$ from a multivariate normal distribution with mean vector $\mu$ and variance-covariance matrix $\Sigma$. We know that the sample mean vector $\bar{X}$ has a multivariate normal distribution, the sample variance-covariance matrix has a Wishart distribution, and they are independent. However, when the random sample is not from a multivariate normal distribution, one approach to obtaining such distributions is through the multivariate Edgeworth expansion, whose evaluation requires expressions for higher order cumulants of random vectors. Further applications of these, in the time series context, can be found in the books of Brillinger (2001) and Terdik (1999) and the recent papers of Subba Rao and Wong (1999) and Wong (1997). Similar results can be found in the works of McCullagh (1987) and Speed (1990), but the techniques we use here are quite different and require only a knowledge of calculus of several variables instead of tensor calculus. Also, we believe this to be a more transparent and streamlined approach. Finally, we derive several new results of interest in statistical inference and time series using these methods. We derive Yule-Walker type difference equations in terms of higher order cumulants for stationary multivariate linear processes. Also derived are expressions for higher order cumulant spectra of such processes, which turn out to be useful in constructing statistical tests for linearity and Gaussianity of multivariate time series.

The information inequality, or the Cramér-Rao lower bound, for the variance of an unbiased estimator is well known for both the single parameter and multiple parameter cases. A more accurate series of bounds for the single parameter case was given by Bhattacharya (1946); these depend on all higher order derivatives of the log-likelihood function. Here we give a generalization of this bound for the multiparameter case, based on partial derivatives of various orders. We illustrate this with an example where we find a lower bound for the variance of an unbiased estimator of a nonlinear function of the parameters.

In Section 2 we define the cumulants of several random vectors. In Section 3 we consider applications of the above methods to statistical inference. We define multivariate measures of skewness and kurtosis and consider multivariate time series. We also obtain properties of the cumulants of the partial derivatives of the log-likelihood function of a random sample $(X_1, X_2, \ldots, X_n)$ drawn from a distribution $F_\theta(x)$. In this section we use the expressions derived for the partial derivatives to obtain a Bhattacharya-type bound, and illustrate it with an example. In the Appendix we derive the properties of differential operators which are used in obtaining expressions for the partial derivatives of functions of several vectors.
Lastly, we express Taylor series of such functions in terms of differential operators.

2. Moments and Cumulants of Random Vectors

2.1. Characteristic function and moments of random vectors. Let $X$ be a $d$-dimensional random vector, partitioned as $X' = (X_1', X_2')$, where $X_1$ is of dimension $d_1$ and $X_2$ is of dimension $d_2$, so that $d = d_1 + d_2$. Let $\lambda' = (\lambda_1', \lambda_2')$. The characteristic function of $X$ is

$\varphi(\lambda_1, \lambda_2) = E \exp\{ i(\lambda_1' X_1 + \lambda_2' X_2) \} = \sum_{k,l=0}^{\infty} \frac{i^{k+l}}{k!\,l!}\, E (\lambda_1' X_1)^k (\lambda_2' X_2)^l = \sum_{k,l=0}^{\infty} \frac{i^{k+l}}{k!\,l!}\, E\big( X_1^{\otimes k} \otimes X_2^{\otimes l} \big)' \big( \lambda_1^{\otimes k} \otimes \lambda_2^{\otimes l} \big),$

where $\otimes$ denotes the Kronecker product and $X^{\otimes k} = X \otimes \cdots \otimes X$ ($k$ factors).
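The algebraic identity behind this expansion, $(\lambda_1'X_1)^k(\lambda_2'X_2)^l = (X_1^{\otimes k} \otimes X_2^{\otimes l})'(\lambda_1^{\otimes k} \otimes \lambda_2^{\otimes l})$, is easy to verify numerically; a small sketch (NumPy assumed, names ours):

```python
import numpy as np
from functools import reduce

rng = np.random.default_rng(1)

def kron_power(v, k):
    """Kronecker power v^{(x)k}."""
    return reduce(np.kron, [v] * k)

x, lam = rng.normal(size=3), rng.normal(size=3)
y, mu  = rng.normal(size=2), rng.normal(size=2)
k, l = 2, 3

lhs = (lam @ x)**k * (mu @ y)**l
rhs = np.kron(kron_power(x, k), kron_power(y, l)) @ \
      np.kron(kron_power(lam, k), kron_power(mu, l))
print(np.allclose(lhs, rhs))  # True
```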

Here the coefficients of $\lambda_1^{\otimes k} \otimes \lambda_2^{\otimes l}$ can be obtained by using the K-derivative (defined in the Appendix) and formula (4.6). Consider the second K-derivative of $\varphi$:

$D^2_{\lambda_1 \otimes \lambda_2}\, \varphi(\lambda_1, \lambda_2) = D_{\lambda_2} D_{\lambda_1} \varphi(\lambda_1, \lambda_2) = \sum_{k,l=1}^{\infty} \frac{i^{k+l}}{(k-1)!\,(l-1)!}\, E\big[ (\lambda_1' X_1)^{k-1} (\lambda_2' X_2)^{l-1} (X_1 \otimes X_2) \big].$

Evaluating this derivative at $\lambda = 0$ we obtain $-E(X_1 \otimes X_2)$; similarly, other moments can be obtained from higher order derivatives. Therefore the Taylor series expansion of $\varphi(\lambda_1, \lambda_2)$ can be written in terms of K-derivatives as

$\varphi(\lambda_1, \lambda_2) = \sum_{k,l=0}^{\infty} \frac{1}{k!\,l!} \Big[ D^{(k,l)}_{\lambda_1 \otimes \lambda_2}\, \varphi \big|_{\lambda=0} \Big]' \big( \lambda_1^{\otimes k} \otimes \lambda_2^{\otimes l} \big).$

We note that in general

$D^{(k,l)}_{\lambda_1 \otimes \lambda_2}\, \varphi \big|_{\lambda=0} = i^{k+l} E\big( X_1^{\otimes k} \otimes X_2^{\otimes l} \big) \ne i^{k+l} E\big( X_2^{\otimes l} \otimes X_1^{\otimes k} \big) = D^{(l,k)}_{\lambda_2 \otimes \lambda_1}\, \varphi \big|_{\lambda=0},$

which shows that the partial derivatives in this case are not symmetric.

Consider a set of vectors $\lambda_{(n)} = (\lambda_1, \ldots, \lambda_n)$ with dimensions $(d_1, d_2, \ldots, d_n)$. We can define the operator $D_\lambda$ given in the Appendix for the partitioned set of vectors $\lambda_{(n)}$; this is achieved recursively. Recall that the K-derivative with respect to $\lambda_j$ is $D_{\lambda_j} \varphi = \mathrm{Vec}\big( \partial \varphi' / \partial \lambda_j \big)$.

Definition 1. The $n$th K-derivative $D^n_{\otimes\lambda_{(n)}}$ is defined recursively by

$D^n_{\otimes\lambda_{(n)}}\, \varphi = D_{\lambda_n} \big( D^{n-1}_{\otimes\lambda_{(n-1)}}\, \varphi \big),$

where $D^n_{\otimes\lambda_{(n)}} \varphi$ is a column vector of partial differential operators of order $n$: the first order K-derivative of a function which is already an $(n-1)$th order partial derivative. The dimension of $D^n_{\otimes\lambda_{(n)}}$ is $d_{(n)}^{[n]} = \prod_{j=1}^{n} d_j$, where $[n]$ denotes a row vector having all ones as its entries, i.e. $[n] = [1, 1, \ldots, 1]$ of dimension $n$. The order of the vectors in $\lambda_{(n)}$ is important.

The following definition generalizes to the multivariate case a well-known result for scalar valued random variables. Here we assume the partial derivatives exist.

Definition 2. Suppose $X_{(n)} = (X_1, X_2, \ldots, X_n)$ is a collection of random (column) vectors with dimensions $(d_1, d_2, \ldots, d_n)$. The Kronecker moment is defined by the K-derivative

$E(X_1 \otimes X_2 \otimes \cdots \otimes X_n) = E \prod_{j=1}^{n}{}^{\otimes} X_j = (-i)^n\, D^n_{\otimes\lambda_{(n)}}\, \varphi_{X_1 \cdots X_n}(\lambda_1, \ldots, \lambda_n) \Big|_{\lambda_{(n)} = 0}.$

We note that the order of the product in the expectations and the derivatives is important, since the Kronecker moment is not symmetric when the variables $X_1, X_2, \ldots, X_n$ are different.
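A Monte Carlo sketch of Definition 2 (NumPy assumed; names ours): the Kronecker moment $E(X_1 \otimes X_2)$ is the average of the row-wise Kronecker products, and comparing $E(X_1 \otimes X_2)$ with $E(X_2 \otimes X_1)$ illustrates the lack of symmetry; the two vectors contain the same numbers in different order (related by a commutation matrix, Section 4.1).

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
X1 = rng.normal(size=(n, 2))
X2 = X1 @ rng.normal(size=(2, 3)) + rng.normal(size=(n, 3))  # X2 correlated with X1

# E(X1 (x) X2): np.kron(a, b) equals np.outer(a, b).ravel(), so averaging the
# outer products and raveling gives the Kronecker moment.
m12 = np.einsum('ni,nj->ij', X1, X2).ravel() / n   # E(X1 (x) X2)
m21 = np.einsum('ni,nj->ij', X2, X1).ravel() / n   # E(X2 (x) X1)
print(m12)
print(m21)   # same entries, permuted order
```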

2.2. Cumulant function and cumulants of random vectors. We obtain the cumulant $\mathrm{Cum}_n(X)$ as the K-derivative of the logarithm of the characteristic function $\varphi_X(\lambda)$ of $X' = (X_1, X_2, \ldots, X_n)$, evaluated at zero:

$\mathrm{Cum}_n(X) = \mathrm{Cum}(X_1, X_2, \ldots, X_n) = (-i)^n\, D^n_{\otimes\lambda_{(n)}} \ln \varphi_X(\lambda) \big|_{\lambda_{(n)} = 0};$

see Terdik (1999) for details. Now consider the collection of random vectors $X_{(n)} = (X_1, X_2, \ldots, X_n)$, where each $X_j$ is of dimension $d_j$. The characteristic function of $\mathrm{Vec}\, X_{(n)}$ is

$\varphi_{X_{(n)}}(\lambda_{(n)}) = \varphi_{\mathrm{Vec} X_{(n)}}\big( \mathrm{Vec}\,\lambda_{(n)} \big) = E \exp\big\{ i \big( \mathrm{Vec}\,\lambda_{(n)} \big)' \mathrm{Vec}\, X_{(n)} \big\},$

where $\lambda_{(n)} = (\lambda_1, \ldots, \lambda_n)$ and $d_{(n)} = (d_1, d_2, \ldots, d_n)$. We call the logarithm of the characteristic function the cumulant function and write

$\psi_{X_{(n)}}(\lambda_{(n)}) = \ln \varphi_{\mathrm{Vec} X_{(n)}}\big( \mathrm{Vec}\,\lambda_{(n)} \big).$

Now we apply the operator $D^n_{\otimes\lambda_{(n)}} = D_{\lambda_n} D^{n-1}_{\otimes\lambda_{(n-1)}}$ recursively; the result is a column vector of partial differentials of order $n$ which is first order in each variable $\lambda_j$, with dimension $d_{(n)}^{[n]} = \prod_{j=1}^{n} d_j$. We define the $n$th order cumulant of the vectors $X_{(n)}$ as follows.

Definition 3.

(2.1)  $\mathrm{Cum}_n\big( X_{(n)} \big) = (-i)^n\, D^n_{\otimes\lambda_{(n)}}\, \psi_{X_{(n)}}(\lambda_{(n)}) \big|_{\lambda_{(n)} = 0}.$

Therefore $\mathrm{Cum}_n(X_{(n)})$ is a vector of dimension $d_{(n)}^{[n]}$ containing all possible cumulants of the elements formed from the vectors $X_1, X_2, \ldots, X_n$, the order being determined by the Kronecker products defined earlier (see also Terdik (2002)). This definition also covers the case where the random vectors $X_1, X_2, \ldots, X_n$ are not all distinct; the characteristic function then depends on the sum of the corresponding variables of $\lambda_{(n)}$, and we still use definition (2.1) to obtain the cumulant. For example, when $n = 1$ we have $\mathrm{Cum}_1(X) = EX$, and when $n = 2$,

(2.2)  $\mathrm{Cum}_2(X_1, X_2) = E\big[ (X_1 - EX_1) \otimes (X_2 - EX_2) \big] = \mathrm{Vec}\, \mathrm{Cov}(X_2, X_1),$

where $\mathrm{Cov}(X_2, X_1) = E(X_2 - EX_2)(X_1 - EX_1)'$ denotes the covariance matrix of the vectors $X_2$ and $X_1$. To illustrate the above formulae, let us consider an example.

Example 1. Let $X_{(2)}' = (X_1', X_2')$, and assume $X_{(2)}$ has a joint normal distribution with mean vector $\mu' = (\mu_1', \mu_2')$ and variance-covariance matrix

$C(X_1, X_2) = \begin{pmatrix} C_{11} & C_{12} \\ C_{21} & C_{22} \end{pmatrix}.$
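Equation (2.2) can be checked numerically as follows (a sketch assuming NumPy; note that the column-stacking $\mathrm{Vec}$ corresponds to ravel(order='F')):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500_000
X1 = rng.normal(size=(n, 2))
X2 = X1 @ rng.normal(size=(2, 3)) + 0.5 * rng.normal(size=(n, 3))

X1c, X2c = X1 - X1.mean(0), X2 - X2.mean(0)

# Cum2(X1, X2) = E[(X1 - EX1) (x) (X2 - EX2)]: mean of row-wise Kronecker products.
cum2 = np.einsum('ni,nj->ij', X1c, X2c).ravel() / n

# Vec Cov(X2, X1): column-stack of the 3x2 matrix E[(X2 - EX2)(X1 - EX1)'].
cov21 = (X2c.T @ X1c) / n
print(np.abs(cum2 - cov21.ravel(order='F')).max())  # ~ 0 (Monte Carlo error)
```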

Then the characteristic function of $X_{(2)}$ is

$\varphi_{X_{(2)}}(\lambda_1, \lambda_2) = \exp\Big\{ i(\lambda_1'\mu_1 + \lambda_2'\mu_2) - \tfrac{1}{2}\big( \lambda_1' C_{11}\lambda_1 + \lambda_1' C_{12}\lambda_2 + \lambda_2' C_{21}\lambda_1 + \lambda_2' C_{22}\lambda_2 \big) \Big\},$

and the cumulant function of $X_{(2)}$ is

$\psi_{X_{(2)}}(\lambda_1, \lambda_2) = \ln \varphi_{X_{(2)}}(\lambda_1, \lambda_2) = i(\lambda_1'\mu_1 + \lambda_2'\mu_2) - \tfrac{1}{2}\big( \lambda_1' C_{11}\lambda_1 + \lambda_1' C_{12}\lambda_2 + \lambda_2' C_{21}\lambda_1 + \lambda_2' C_{22}\lambda_2 \big).$

The first order cumulant is $\mathrm{Cum}_1(X_j) = -i\, D_{\lambda_j}\, \psi_{X_{(2)}}(\lambda_1, \lambda_2) \big|_{\lambda=0} = \mu_j$, and it is clear that any cumulant of order higher than two is zero. One can easily show that the second order cumulants are the vectorized covariance matrices, i.e. $\mathrm{Cum}_2(X_j, X_k) = \mathrm{Vec}\, C_{kj}$. For instance, if $j = 1$ and $k = 2$, applying $D_{\lambda_1}$ and then $D_{\lambda_2}$ repeatedly to $\psi$ and evaluating at zero, we obtain $\mathrm{Cum}_2(X_1, X_2) = -D^2_{\lambda_1\otimes\lambda_2}\, \psi \big|_{\lambda=0} = \mathrm{Vec}\, C_{21}$.

2.3. Basic properties of the cumulants. For convenience of notation we now set the dimensions of $X_1, X_2, \ldots, X_n$ all equal to $d$. Cumulants are symmetric in the scalar valued case but not in the vector case; for example, $\mathrm{Cum}_2(X_1, X_2) \ne \mathrm{Cum}_2(X_2, X_1)$ in general. Here we have to use permutation (commutation) matrices, as shown below (see the Appendix for details).

Proposition 1. Let $p$ be a permutation of the integers $(1, \ldots, n)$ and let the function $f: \mathbb{R}^{d_1 + \cdots + d_n} \to \mathbb{R}$ be continuously differentiable $n$ times in all its arguments; then

$D^n_{\otimes\lambda_{p(n)}}\, f = K_{p(n)}(d_{(n)})\, D^n_{\otimes\lambda_{(n)}}\, f.$

(1) Symmetry. If $d > 1$ the cumulants are not symmetric, but satisfy the relation

$\mathrm{Cum}_n(X_{(n)}) = K_{p(n)}\big( d^{[n]} \big)\, \mathrm{Cum}_n(X_{p(n)}),$

where $p(1, \ldots, n) = (p(1), p(2), \ldots, p(n))$ belongs to the set $\mathcal{P}_n$ of all possible permutations of the numbers $(1, \ldots, n)$, $d^{[n]} = (d, d, \ldots, d)$ ($n$ entries), and $K_{p(n)}(d^{[n]})$ is the permutation matrix of the Appendix, equation (4.1).

(2) Multilinearity. For constant matrices $A$ and $B$ and random vectors $Y_1$, $Y_2$,

$\mathrm{Cum}_{n+1}(AY_1 + BY_2,\, X_{(n)}) = (A \otimes I_{d^n})\, \mathrm{Cum}_{n+1}(Y_1, X_{(n)}) + (B \otimes I_{d^n})\, \mathrm{Cum}_{n+1}(Y_2, X_{(n)});$

also

$\mathrm{Cum}_{n+2}(AY_1,\, BY_2,\, X_{(n)}) = (A \otimes B \otimes I_{d^n})\, \mathrm{Cum}_{n+2}(Y_1, Y_2, X_{(n)}),$

assuming that the appropriate matrix operations are valid.

For any constant vectors $a$ and $b$,

$\mathrm{Cum}_{n+1}(a'Y_1 + b'Y_2,\, X_{(n)}) = (a' \otimes I_{d^n})\, \mathrm{Cum}_{n+1}(Y_1, X_{(n)}) + (b' \otimes I_{d^n})\, \mathrm{Cum}_{n+1}(Y_2, X_{(n)}).$

(3) Independence. If $X_{(n)}$ is independent of $Y_{(m)}$, where $n, m \ge 1$, then $\mathrm{Cum}_{n+m}(X_{(n)}, Y_{(m)}) = 0$. In particular, if the dimensions are the same, then

$\mathrm{Cum}_n(X_{(n)} + Y_{(n)}) = \mathrm{Cum}_n(X_{(n)}) + \mathrm{Cum}_n(Y_{(n)}).$

(4) Gaussianity. The random vector $X_{(n)}$ is Gaussian if and only if for all subsets $k(m)$ of $(1, \ldots, n)$ with $m > 2$ the cumulant is zero, i.e.

$\mathrm{Cum}_m(X_{k(m)}) = 0, \qquad m > 2.$

For further properties of the cumulants we need the following lemma, which makes it easier to understand the relations between moments and cumulants (see Barndorff-Nielsen and Cox (1989)).

Remark 1. Write $\mathcal{P}_n$ for the set of all partitions $\mathcal{K}$ of the integers $(1, \ldots, n)$. If $\mathcal{K} = \{b_1, b_2, \ldots, b_m\}$, where each block $b_j \subseteq (1, \ldots, n)$, then $|\mathcal{K}| = m$ denotes the size of $\mathcal{K}$. We introduce an ordering among the blocks $b_j \in \mathcal{K}$: we write $b_j \preceq b_k$ if

(2.3)  $\sum_{l \in b_j} 2^l \le \sum_{l \in b_k} 2^l,$

and equality in (2.3) is possible if and only if $j = k$. The partition $\mathcal{K}$ will be considered ordered if both the elements of each block are ordered inside the block and the blocks are ordered by the relation $\preceq$. We suppose that all partitions $\mathcal{K} \in \mathcal{P}_n$ are ordered. Denote $\lambda = \lambda_{(M)} = (\lambda_1, \ldots, \lambda_M) \in \mathbb{R}^N$, where $\lambda_j \in \mathbb{R}^{d_j}$ and $N = d_1 + \cdots + d_M$. In this case the differential operator $D^{|b|}_{\otimes\lambda_b}$ is well defined, because $\lambda_b = (\lambda_j,\ j \in b)$ denotes an ordered subset of the vectors $(\lambda_1, \ldots, \lambda_M)$ corresponding to the order in $b$. The permutation $p(\mathcal{K})$ of the numbers $(1, \ldots, n)$ corresponds to the ordered partition $\mathcal{K}$. (See Andrews (1976) for more details on partitions.)

We can rewrite the formula of Faà di Bruno for implicit functions (see Lukács (1955)) as follows.

Lemma 1. Consider the composite function $f(g(\lambda))$, $\lambda \in \mathbb{R}^d$, where $f$ and $g$ are scalar valued functions, differentiable $M$ times. Suppose $\lambda = \lambda_{(M)} = (\lambda_1, \ldots, \lambda_M)$ with dimensions $(d_1, \ldots, d_M)$. Then for $n \le M$,

(2.4)  $D^n_{\otimes\lambda_{(n)}}\, f(g(\lambda)) = \sum_{r=1}^{n} f^{(r)}(g(\lambda)) \sum_{\substack{\mathcal{K} \in \mathcal{P}_n \\ |\mathcal{K}| = r}} K_{p(\mathcal{K})}(d_{(n)}) \prod_{b \in \mathcal{K}}{}^{\otimes} D^{|b|}_{\otimes\lambda_b}\, g(\lambda),$

where $p(\mathcal{K})$ is the permutation of $(1, \ldots, n)$ defined by the partition $\mathcal{K}$; see Remark 1.

We consider particular cases of equation (2.4), which are useful for proving some properties of cumulants.

2.4. Cumulants in terms of moments and vice versa.

2.4.1. Cumulants in terms of moments. The results obtained here are generalizations of well known results given for scalar random variables by Leonov and Shiryaev (1959) (see also Brillinger and Rosenblatt (1967), Terdik (1999)). To obtain the cumulants in terms of moments, consider the function $f(x) = \ln x$ and $g(\lambda) = \varphi_{X_{(n)}}(\lambda_{(n)})$. The $r$th derivative of $f(x) = \ln x$ is $f^{(r)}(x) = (-1)^{r-1}(r-1)!\, x^{-r}$.

So the left hand side of equation (2.4) evaluated at zero is, up to the factor $(-i)^n$, the cumulant of $X_{(n)}$. Hence we obtain

(2.5)  $\mathrm{Cum}_n(X_{(n)}) = \sum_{m=1}^{n} (-1)^{m-1}(m-1)! \sum_{\substack{\mathcal{L} \in \mathcal{P}_n \\ |\mathcal{L}| = m}} K_{p(\mathcal{L})}(d_{(n)}) \prod_{j=1}^{m}{}^{\otimes}\, E \prod_{k \in b_j}{}^{\otimes} X_k,$

where the second summation is over all possible ordered partitions $\mathcal{L} = \{b_1, \ldots, b_m\} \in \mathcal{P}_n$ with $|\mathcal{L}| = m$; see Remark 1 for details. The expectation operator $E^{\otimes}$ is defined blockwise, so that $E^{\otimes}(X_1 \otimes X_2) = EX_1 \otimes EX_2$.

2.4.2. Moments in terms of cumulants. Let $f(x) = \exp x$ and $g(\lambda) = \psi_{X_{(n)}}(\lambda_{(n)})$. All derivatives of $f(x) = \exp x$ equal $\exp x$, and therefore

(2.6)  $D^n_{\otimes\lambda_{(n)}} \exp(g(\lambda)) = \exp(g(\lambda)) \sum_{\mathcal{K} \in \mathcal{P}_n} K_{p(\mathcal{K})}(d_{(n)}) \prod_{b \in \mathcal{K}}{}^{\otimes} D^{|b|}_{\otimes\lambda_b}\, g(\lambda).$

The resulting expression for the moment $E \prod^{\otimes}_{j} X_j$ is quite general. For example, the moment $E\big( Y_1^{\otimes k_1} \otimes \cdots \otimes Y_m^{\otimes k_m} \big)$ is obtained by setting $X_{(n)} = (\underbrace{Y_1, \ldots, Y_1}_{k_1}, \ldots, \underbrace{Y_m, \ldots, Y_m}_{k_m})$; that is, the factors in the product are treated as if they were distinct. We have

(2.7)  $E \prod_{k=1}^{n}{}^{\otimes} X_k = \sum_{\mathcal{L} \in \mathcal{P}_n} K_{p(\mathcal{L})}(d_{(n)}) \prod_{b \in \mathcal{L}}{}^{\otimes} \mathrm{Cum}_{|b|}(X_b),$

where the summation is over all ordered partitions $\mathcal{L} = \{b_1, b_2, \ldots, b_k\}$ of $(1, \ldots, n)$.

2.4.3. Cumulants of products via products of cumulants. Let $X_{\mathcal{K}}$ denote the vector whose entries are obtained from the partition $\mathcal{K}$: if $\mathcal{K} = \{b_1, b_2, \ldots, b_m\}$, then $X_{\mathcal{K}} = \big( \prod^{\otimes}_{j \in b_1} X_j,\ \prod^{\otimes}_{j \in b_2} X_j,\ \ldots,\ \prod^{\otimes}_{j \in b_m} X_j \big)$. The order of the elements of the subsets $b \in \mathcal{K}$ and the order of the subsets in $\mathcal{K}$ are fixed. The cumulant of the products can be expressed through the cumulants of the individual sets of variables $X_b = (X_j,\ j \in b)$, $b \in \mathcal{L}$, where $\mathcal{L}$ runs over partitions such that $\mathcal{K} \cup \mathcal{L} = \mathcal{O}$, with $\mathcal{O}$ denoting the coarsest partition, having the single block $(1, \ldots, n)$. Such pairs of partitions $\mathcal{K}$, $\mathcal{L}$ are called indecomposable (see Brillinger (2001), Terdik (1999)):

(2.8)  $\mathrm{Cum}_m\Big( \prod_{j \in b_1}{}^{\otimes} X_j,\ \ldots,\ \prod_{j \in b_m}{}^{\otimes} X_j \Big) = \sum_{\mathcal{K} \cup \mathcal{L} = \mathcal{O}} K_{p(\mathcal{L})}(d_{(n)}) \prod_{b \in \mathcal{L}}{}^{\otimes} \mathrm{Cum}_{|b|}(X_b).$

Example 2. Let $X$ be a Gaussian random vector with $EX = 0$ and $\mathrm{Cov}(X, X) = \Sigma$, and let $A$ and $B$ be symmetric matrices of appropriate dimensions. Then

(2.9)  $\mathrm{Cum}(X'AX,\, X'BX) = 2\, \mathrm{Tr}(A \Sigma B \Sigma)$

(see Taniguchi (1991)). We can use (2.8) to obtain (2.9) as follows. Writing $X'AX = (\mathrm{Vec}\, A')'(X \otimes X)$,

$\mathrm{Cum}(X'AX, X'BX) = \mathrm{Cum}\big( (\mathrm{Vec}\, A')'(X \otimes X),\ (\mathrm{Vec}\, B')'(X \otimes X) \big) = \big( (\mathrm{Vec}\, A')' \otimes (\mathrm{Vec}\, B')' \big)\, \mathrm{Cum}_2(X \otimes X,\ X \otimes X).$

By (2.8) and the Gaussianity of $X$ only the pair partitions survive, giving

$\mathrm{Cum}(X'AX, X'BX) = \big( (\mathrm{Vec}\, A')' \otimes (\mathrm{Vec}\, B')' \big) \big[ K_{p_{2\leftrightarrow 3}}\big( d^{[4]} \big) + K_{p_{2\leftrightarrow 4}}\big( d^{[4]} \big) \big] \big[ \mathrm{Vec}\,\Sigma \otimes \mathrm{Vec}\,\Sigma \big] = 2\, \mathrm{Tr}(A \Sigma B \Sigma).$
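A Monte Carlo check of (2.9) (a sketch assuming NumPy; symmetric $A$ and $B$ as above):

```python
import numpy as np

rng = np.random.default_rng(4)
d, n = 3, 1_000_000

A = rng.normal(size=(d, d)); A = (A + A.T) / 2   # symmetric A
B = rng.normal(size=(d, d)); B = (B + B.T) / 2   # symmetric B
S = rng.normal(size=(d, d)); Sigma = S @ S.T     # a covariance matrix

X = rng.multivariate_normal(np.zeros(d), Sigma, size=n)
qA = np.einsum('ni,ij,nj->n', X, A, X)           # X'AX per sample
qB = np.einsum('ni,ij,nj->n', X, B, X)           # X'BX per sample

mc = np.cov(qA, qB)[0, 1]                        # sample Cum(X'AX, X'BX)
exact = 2 * np.trace(A @ Sigma @ B @ Sigma)
print(mc, exact)                                 # close for large n
```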

3. Applications to Statistical Inference

3.1. Cumulants of the log-likelihood function. The above results can be used to obtain the cumulants of the partial derivatives of the log-likelihood function (see Skovgaard (1986)); these expressions are useful in the study of the asymptotic theory of statistics. Consider a random sample $(X_1, X_2, \ldots, X_N) = X \in \mathbb{R}^N$ with likelihood function $L(\theta \mid X)$, and let $l(\theta) = \ln L(\theta \mid X)$, $\theta \in \mathbb{R}^d$, denote the log-likelihood function. It is well known that under the regularity conditions

(3.1)  $E\Big( \frac{\partial l(\theta)}{\partial\theta} \Big)^2 = -E\, \frac{\partial^2 l(\theta)}{\partial\theta^2}.$

The result (3.1) can be extended to products of several partial derivatives (see McCullagh and Cox (1986), who use such expressions in the evaluation of Bartlett's correction). We can arrive at (3.1) from (2.4) by observing $L(\theta \mid X) = e^{l(\theta)}$. Suppose $d = 1$; differentiating twice,

$\frac{\partial^2}{\partial\theta^2}\, e^{l(\theta)} = e^{l(\theta)} \Big[ \frac{\partial^2 l(\theta)}{\partial\theta^2} + \Big( \frac{\partial l(\theta)}{\partial\theta} \Big)^2 \Big],$

and equating the two expressions for the second derivative of $L$,

$\frac{1}{L(\theta \mid X)}\, \frac{\partial^2 L(\theta \mid X)}{\partial\theta^2} = \frac{\partial^2 l(\theta)}{\partial\theta^2} + \Big( \frac{\partial l(\theta)}{\partial\theta} \Big)^2.$

The expected value of the left hand side of this expression is zero, as we are allowed to interchange the order of the derivative and the integral, which gives the result (3.1). The same argument leads, more generally, to

(3.2)  $\sum_{r=1}^{n} \sum_{\substack{\mathcal{K} \in \mathcal{P}_n \\ |\mathcal{K}| = r}} E \prod_{b \in \mathcal{K}} \frac{\partial^{|b|} l(\theta)}{\partial\theta^{|b|}} = 0,$

which is a consequence of (2.6). Proceeding in a similar fashion, assuming the regularity conditions to hold at higher orders and using (2.7), we obtain the cumulant analogue of the above:

(3.3)  $\sum_{r=1}^{n} \sum_{\substack{\mathcal{K} \in \mathcal{P}_n \\ |\mathcal{K}| = r}} \mathrm{Cum}_r\Big( \frac{\partial^{|b_1|} l(\theta)}{\partial\theta^{|b_1|}},\ \ldots,\ \frac{\partial^{|b_r|} l(\theta)}{\partial\theta^{|b_r|}} \Big) = 0.$

Equation (3.2) is in terms of the expected values of the derivatives of the log-likelihood function, whereas (3.3) is in terms of the cumulants. For example, suppose we have a single parameter, and denote

$\nu(m_1, m_2, m_3, m_4) = E\Big[ \Big( \frac{\partial l(\theta)}{\partial\theta} \Big)^{m_1} \Big( \frac{\partial^2 l(\theta)}{\partial\theta^2} \Big)^{m_2} \Big( \frac{\partial^3 l(\theta)}{\partial\theta^3} \Big)^{m_3} \Big( \frac{\partial^4 l(\theta)}{\partial\theta^4} \Big)^{m_4} \Big];$

then from formula (3.2) with $n = 4$ we obtain

(3.4)  $\nu(0,0,0,1) + 4\,\nu(1,0,1,0) + 3\,\nu(0,2,0,0) + 6\,\nu(2,1,0,0) + \nu(4,0,0,0) = 0.$

To obtain (3.4) we proceed as follows. Consider the partitions $\mathcal{K} \in \mathcal{P}_4$: if $|\mathcal{K}| = 1$, we have only the one partition $(1\,2\,3\,4)$; if $|\mathcal{K}| = 2$, we have 4 partitions of type $\{(1\,2\,3), (4)\}$ and 3 of type $\{(1\,2), (3\,4)\}$; if $|\mathcal{K}| = 3$, we have 6 partitions of type $\{(1), (2), (3\,4)\}$; and if $|\mathcal{K}| = 4$, the single partition into singletons. The indices $(m_1, m_2, m_3, m_4)$ count the blocks of each size in a partition; for instance, $\nu(2,1,0,0)$ corresponds to partitions of type $\{(1), (2), (3\,4)\}$, and so on. Hence the result (3.4). McCullagh and Cox (1986) obtained a similar result for cumulants:

(3.5)  $\mathrm{Cum}_1\Big( \frac{\partial^4 l}{\partial\theta^4} \Big) + 4\,\mathrm{Cum}_2\Big( \frac{\partial^3 l}{\partial\theta^3}, \frac{\partial l}{\partial\theta} \Big) + 3\,\mathrm{Cum}_2\Big( \frac{\partial^2 l}{\partial\theta^2}, \frac{\partial^2 l}{\partial\theta^2} \Big) + 6\,\mathrm{Cum}_3\Big( \frac{\partial^2 l}{\partial\theta^2}, \frac{\partial l}{\partial\theta}, \frac{\partial l}{\partial\theta} \Big) + \mathrm{Cum}_4\Big( \frac{\partial l}{\partial\theta}, \frac{\partial l}{\partial\theta}, \frac{\partial l}{\partial\theta}, \frac{\partial l}{\partial\theta} \Big) = 0,$

which is a special case of (3.3).
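Identity (3.4) can be verified by simulation. For a sample of size $n$ from $N(\theta, 1)$ we have $\partial l/\partial\theta = \sum_i (x_i - \theta)$, $\partial^2 l/\partial\theta^2 = -n$, and all higher derivatives zero, so only three of the five terms survive; a sketch (NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(5)
theta, n, reps = 0.7, 5, 400_000

x = rng.normal(theta, 1.0, size=(reps, n))
s = (x - theta).sum(axis=1)        # l' = sum(x_i - theta); l'' = -n; l''' = l'''' = 0

total = (0.0                        # nu(0,0,0,1): E l'''' = 0
         + 4 * 0.0                  # 4 nu(1,0,1,0): E l' l''' = 0
         + 3 * (-n)**2              # 3 nu(0,2,0,0) = 3 n^2
         + 6 * np.mean(s**2 * -n)   # 6 nu(2,1,0,0) -> -6 n^2
         + np.mean(s**4))           # nu(4,0,0,0) -> 3 n^2
print(total)                        # approx. 0
```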

3.1.1. Cumulants of the log-likelihood function: the multiple parameter case. The multivariate extension of formula (3.2), in which the elements of the parameter vector are themselves vectors, can easily be obtained using Lemma 1. If we partition the parameter vector into $n$ subvectors, $\theta = \theta_{(n)} = (\theta_1, \ldots, \theta_n)$ with dimensions $(d_1, d_2, \ldots, d_n)$ respectively, then it follows that

(3.6)  $\sum_{r=1}^{n} \sum_{\substack{\mathcal{K} \in \mathcal{P}_n \\ |\mathcal{K}| = r}} K_{p(\mathcal{K})}(d_{(n)})\, E \prod_{b \in \mathcal{K}}{}^{\otimes} D^{|b|}_{\otimes\theta_b}\, l(\theta) = 0,$

where $\theta_b$ denotes the subset of vectors $(\theta_j,\ j \in b)$. In particular, if $n = 2$ and $\theta_1 = \theta_2 = \theta$, then (3.6) gives the well known result

$\mathrm{Cov}\big( D_\theta\, l(\theta),\ D_\theta\, l(\theta) \big) = -E\, D_\theta D_\theta\, l(\theta),$

or, in vectorized form,

$E\big[ D_\theta\, l(\theta) \otimes D_\theta\, l(\theta) \big] = -E\, D^2_{\theta\otimes\theta}\, l(\theta).$

In the case $n = 4$, say, with $\theta_1 = \theta_2 = \theta_3 = \theta_4 = \theta$, we have, up to the commutation matrices appearing in (3.6),

$\nu(0,0,0,1) + 4\,\nu(1,0,1,0) + 3\,\nu(0,2,0,0) + 6\,\nu(2,1,0,0) + \nu(4,0,0,0) = 0,$

where now

$\nu(m_1, m_2, m_3, m_4) = E\Big[ \big( D_\theta\, l(\theta) \big)^{\otimes m_1} \otimes \big( D^2_{\theta\otimes\theta}\, l(\theta) \big)^{\otimes m_2} \otimes \big( D^3_{\otimes\theta}\, l(\theta) \big)^{\otimes m_3} \otimes \big( D^4_{\otimes\theta}\, l(\theta) \big)^{\otimes m_4} \Big].$

We can obtain a similar expression for the cumulants, and it is given by

$\sum_{r=1}^{n} \sum_{\substack{\mathcal{K} \in \mathcal{P}_n \\ |\mathcal{K}| = r}} K_{p(\mathcal{K})}(d_{(n)})\, \mathrm{Cum}_r\big( D^{|b_1|}_{\otimes\theta_{b_1}}\, l(\theta),\ \ldots,\ D^{|b_r|}_{\otimes\theta_{b_r}}\, l(\theta) \big) = 0.$

3.2. Multivariate measures of skewness and kurtosis for random vectors. In this section we define what we consider natural measures of multivariate skewness and kurtosis and show their relation to the measures defined by Mardia (1970). Let $X$ be a $d$-dimensional random vector whose first four moments exist, and let $\Sigma$ denote its positive definite variance-covariance matrix. The skewness vector of $X$ is defined by

$\kappa_3(X) = \mathrm{Cum}_3\big( \Sigma^{-1/2}X,\ \Sigma^{-1/2}X,\ \Sigma^{-1/2}X \big) = \big( \Sigma^{-1/2} \big)^{\otimes 3}\, \mathrm{Cum}_3(X, X, X),$

and the total skewness is

$\gamma_{1,X} = \kappa_3(X)'\, \kappa_3(X).$
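The $n = 2$ identity $E[D_\theta l \otimes D_\theta l] = -E\, D^2_{\theta\otimes\theta} l$ can be checked for a single observation from $N(\mu, \sigma^2)$ with $\theta = (\mu, \sigma^2)$; the score and Hessian below are computed analytically and the two sides compared by Monte Carlo (a sketch, NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(6)
mu, s2, n = 1.0, 2.0, 1_000_000
x = rng.normal(mu, np.sqrt(s2), size=n)

# Score D_theta l for one observation of N(mu, s2), theta = (mu, s2).
score = np.stack([(x - mu) / s2,
                  -0.5 / s2 + (x - mu)**2 / (2 * s2**2)])
# Hessian entries, stacked in the same Kronecker order (11, 12, 21, 22).
hess = np.stack([np.full(n, -1 / s2),
                 -(x - mu) / s2**2,
                 -(x - mu) / s2**2,
                 0.5 / s2**2 - (x - mu)**2 / s2**3])

lhs = np.einsum('in,jn->ij', score, score).ravel() / n  # E[Dl (x) Dl]
rhs = -hess.mean(axis=1)                                # -E D^2 l, vectorized
print(np.allclose(lhs, rhs, atol=1e-2))                 # True
```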

The kurtosis vector of $X$ is defined by

$\kappa_4(X) = \mathrm{Cum}_4\big( \Sigma^{-1/2}X,\ \Sigma^{-1/2}X,\ \Sigma^{-1/2}X,\ \Sigma^{-1/2}X \big) = \big( \Sigma^{-1/2} \big)^{\otimes 4}\, \mathrm{Cum}_4(X, X, X, X),$

and the total kurtosis is

$\gamma_{2,X} = \mathrm{Tr}\, \mathrm{Vec}^{-1} \kappa_4(X),$

where $\mathrm{Vec}^{-1}\kappa_4(X)$ is the matrix $M$ such that $\mathrm{Vec}\, M = \kappa_4(X)$. The skewness and kurtosis of a multivariate Gaussian vector $X$ are zero; the skewness vector is also zero for any distribution which is symmetric.

The skewness and kurtosis can be expressed in terms of the moments. Suppose $EX = 0$; then

(3.7)  $\kappa_3(X) = \big( \Sigma^{-1/2} \big)^{\otimes 3}\, E X^{\otimes 3}.$

The total skewness $\gamma_{1,X}$, which is just the squared norm of the skewness vector $\kappa_3(X)$, coincides with the measure of skewness $\beta_{1,d}$ defined by Mardia (1970). For any set of zero-mean random vectors we have

(3.8)  $\mathrm{Cum}_4(X_1, X_2, X_3, X_4) = E \prod_{j=1}^{4}{}^{\otimes} X_j - \mathrm{Cum}_2(X_1, X_2) \otimes \mathrm{Cum}_2(X_3, X_4) - K_{p_{2\leftrightarrow 3}}\big( d^{[4]} \big) \big[ \mathrm{Cum}_2(X_1, X_3) \otimes \mathrm{Cum}_2(X_2, X_4) \big] - K_{p_{2\leftrightarrow 4}}\big( d^{[4]} \big) \big[ \mathrm{Cum}_2(X_1, X_4) \otimes \mathrm{Cum}_2(X_2, X_3) \big],$

and therefore the kurtosis vector of $X$ can be expressed in terms of the fourth order moments. Setting $X_1 = X_2 = X_3 = X_4 = X$ above and noting $\mathrm{Cum}_2(\Sigma^{-1/2}X, \Sigma^{-1/2}X) = \mathrm{Vec}\, I_d$,

(3.9)  $\kappa_4(X) = \big( \Sigma^{-1/2} \big)^{\otimes 4}\, E X^{\otimes 4} - \Big( I + K_{p_{2\leftrightarrow 3}}\big( d^{[4]} \big) + K_{p_{2\leftrightarrow 4}}\big( d^{[4]} \big) \Big) \big[ \mathrm{Vec}\, I_d \otimes \mathrm{Vec}\, I_d \big].$

Mardia (1970) defined the measure of kurtosis as $\beta_{2,d} = E\big( X'\Sigma^{-1}X \big)^2$, and this is related to our total kurtosis measure $\gamma_{2,X}$ as follows:

$\beta_{2,d} = \gamma_{2,X} + d(d+2) = \mathrm{Tr}\, \mathrm{Vec}^{-1} \kappa_4(X) + d(d+2).$

Indeed,

$\mathrm{Tr}\, \mathrm{Vec}^{-1}\Big[ \big( \Sigma^{-1/2} \big)^{\otimes 4} E X^{\otimes 4} \Big] = E\, \mathrm{Tr}\, \mathrm{Vec}^{-1}\Big[ \big( \Sigma^{-1/2} X \big)^{\otimes 4} \Big] = E\Big[ \big( \Sigma^{-1/2}X \big)' \big( \Sigma^{-1/2}X \big) \Big]^2 = E\big( X'\Sigma^{-1}X \big)^2,$

while the trace of the $\mathrm{Vec}^{-1}$ of the second term of (3.9) equals $d(d+2)$. We note that if $X$ is Gaussian, then $\gamma_{2,X} = 0$ and hence $\beta_{2,d} = d(d+2)$.
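A quick numerical sketch of Mardia's kurtosis relation (NumPy assumed): for Gaussian data $\beta_{2,d} = E(X'\Sigma^{-1}X)^2$ should be close to $d(d+2)$, i.e. $\gamma_{2,X} \approx 0$:

```python
import numpy as np

rng = np.random.default_rng(7)
d, n = 2, 400_000
X = rng.multivariate_normal(np.zeros(d), np.eye(d), size=n)

Xc = X - X.mean(0)
Sigma_inv = np.linalg.inv(np.cov(X.T))

# Mardia's beta_{2,d} = E (X' Sigma^{-1} X)^2; the Gaussian value is d(d+2).
beta2 = np.mean(np.einsum('ni,ij,nj->n', Xc, Sigma_inv, Xc)**2)
print(beta2, d * (d + 2))        # approx. equal for Gaussian X
print(beta2 - d * (d + 2))       # total kurtosis gamma_2: approx. 0 here
```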

3.3. Multiple linear time series. Let $X_t$ be a $d$-dimensional discrete time stationary time series satisfying the linear representation (see Hannan (1970))

(3.10)  $X_t = \sum_{k=0}^{\infty} H(k)\, e_{t-k},$

where $H(0)$ is the identity, $\sum_k \| H(k) \| < \infty$, and the $e_t$ are independent and identically distributed random vectors with $E e_t = 0$ and $E e_t e_t' = \Sigma$. Let $\gamma_{m+1}(e) = \mathrm{Cum}_{m+1}(e_t, e_t, \ldots, e_t)$ be the cumulant vector of dimension $d^{m+1}$; we note $\gamma_2(e) = \mathrm{Vec}\,\Sigma$. The cumulant of $X_t$ is

(3.11)  $\mathrm{Cum}_{m+1}\big( X_t, X_{t+s_1}, X_{t+s_2}, \ldots, X_{t+s_m} \big) = \sum_{k=0}^{\infty} \big[ H(k) \otimes H(k+s_1) \otimes \cdots \otimes H(k+s_m) \big]\, \gamma_{m+1}(e) = C_{m+1}(s_1, \ldots, s_m).$

Let $X_t$ satisfy the autoregressive model of order $p$ given by

$X_t + A_1 X_{t-1} + A_2 X_{t-2} + \cdots + A_p X_{t-p} = e_t,$

which can be written as

$\big( I + A_1 B + A_2 B^2 + \cdots + A_p B^p \big) X_t = e_t,$

where $B$ is the backshift operator. We assume the coefficients $\{A_j\}$ satisfy the usual stationarity condition (see Hannan (1970)), and proceed:

(3.12)  $X_t = \big( I + A_1 B + A_2 B^2 + \cdots + A_p B^p \big)^{-1} e_t = \sum_{k=0}^{\infty} H(k)\, B^k e_t.$

From (3.10) and (3.12) we have

(3.13)  $\big( I + A_1 B + A_2 B^2 + \cdots + A_p B^p \big) \Big( \sum_{k=0}^{\infty} H(k) B^k \Big) = I,$

from which we obtain

$H(0) + \big( H(1) + A_1 H(0) \big) B + \big( H(2) + A_1 H(1) + A_2 H(0) \big) B^2 + \cdots + \big( H(p) + A_1 H(p-1) + \cdots + A_p H(0) \big) B^p + \big( H(p+1) + A_1 H(p) + \cdots \big) B^{p+1} + \cdots = I.$

Equating powers of $B^j$, $j \ge 1$, we get

(3.14)  $H(j) + A_1 H(j-1) + A_2 H(j-2) + \cdots + A_p H(j-p) = 0, \qquad j \ge 1$

(here we use the convention $H(j) = 0$ if $j < 0$). Let $s_1 \ge 1$; substituting for $H(k+s_1)$ from (3.14) into (3.11),

$C_{m+1}(s_1, \ldots, s_m) = -\sum_{k=0}^{\infty} H(k) \otimes \big[ A_1 H(k+s_1-1) + A_2 H(k+s_1-2) + \cdots + A_p H(k+s_1-p) \big] \otimes H(k+s_2) \otimes \cdots \otimes H(k+s_m)\, \gamma_{m+1}(e)$
$= -\sum_{j=1}^{p} \sum_{k=0}^{\infty} \big[ I_d \otimes A_j \otimes I_{d^{m-1}} \big] \big[ H(k) \otimes H(k+s_1-j) \otimes H(k+s_2) \otimes \cdots \otimes H(k+s_m) \big] \gamma_{m+1}(e)$
$= -\sum_{j=1}^{p} \big( I_d \otimes A_j \otimes I_{d^{m-1}} \big)\, C_{m+1}(s_1-j, s_2, \ldots, s_m).$

Thus we obtain

(3.15)  $C_{m+1}(s_1, s_2, \ldots, s_m) = -\sum_{j=1}^{p} \big( I_d \otimes A_j \otimes I_{d^{m-1}} \big)\, C_{m+1}(s_1-j, s_2, \ldots, s_m).$

If we put $m = 1$ in (3.15) we get

$C_2(s) = -\sum_{j=1}^{p} (I_d \otimes A_j)\, C_2(s-j),$

which can be written in matrix form, with $C(s) = \mathrm{Cov}(X_{t+s}, X_t)$ so that $C_2(s) = \mathrm{Vec}\, C(s)$, as

$C(s) = -\sum_{j=1}^{p} A_j\, C(s-j),$

the well known Yule-Walker equation in terms of second order covariances. Therefore we can consider (3.15) as an extension of the Yule-Walker equations, in terms of higher order cumulants, for multivariate autoregressive models.

The definition of higher order cumulant spectra for stationary time series now comes in a natural way. Consider the time series $X_t$ with $(m+1)$th order cumulant function

$\mathrm{Cum}_{m+1}\big( X_t, X_{t+s_1}, X_{t+s_2}, \ldots, X_{t+s_m} \big) = C_{m+1}(s_1, \ldots, s_m),$

and define the $m$th order cumulant spectrum as the Fourier transform of the cumulants,

$S_m(\omega_1, \omega_2, \ldots, \omega_m) = \sum_{s_1, \ldots, s_m = -\infty}^{\infty} C_{m+1}(s_1, \ldots, s_m) \exp\Big( -i \sum_{j=1}^{m} s_j \omega_j \Big),$

provided the infinite sum converges. We note that the connection with the usual matrix notation for the second order spectrum $S(\omega)$ is

$S_1(\omega) = \mathrm{Vec}\, [S(\omega)];$

compare (2.2).
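The second order case of (3.15) is easy to confirm by simulation: for the model $X_t + A_1 X_{t-1} = e_t$ the covariances must satisfy $C(1) = -A_1 C(0)$. A sketch (NumPy assumed; the sign convention follows the model above):

```python
import numpy as np

rng = np.random.default_rng(8)
d, n = 2, 200_000
A1 = np.array([[0.5, 0.1],
               [0.0, 0.3]])          # model: X_t + A1 X_{t-1} = e_t

e = rng.normal(size=(n, d))
X = np.zeros((n, d))
for t in range(1, n):
    X[t] = -A1 @ X[t - 1] + e[t]

Xc = X - X.mean(0)
C0 = Xc.T @ Xc / n                   # C(0) = Cov(X_t, X_t)
C1 = Xc[1:].T @ Xc[:-1] / (n - 1)    # C(1) = Cov(X_{t+1}, X_t)

# Yule-Walker: C(1) = -A1 C(0).
print(np.abs(C1 + A1 @ C0).max())    # ~ 0 up to Monte Carlo error
```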

3.4. Bhattacharya-type lower bound for the multiparameter case. In this section we obtain a lower bound for the variance-covariance matrix of an unbiased vector of statistics, based on a linear function of the first $k$ partial derivatives of the likelihood. This corresponds to the well known Bhattacharya bound (see Bhattacharya (1946), Linnik (1970)) for the multiparameter case, which does not seem to have been considered anywhere in the literature.

Consider a random sample $(X_1, X_2, \ldots, X_n) = X \in \mathbb{R}^{nd}$ with likelihood function $L(\theta \mid X)$, $\theta \in \mathbb{R}^d$. Suppose we have a vector of unbiased estimators, say $\hat{g}(X)$, of $g(\theta)$. Define the random vectors

$\Delta f = \Big( \frac{1}{L(\theta\mid X)} D_\theta L(\theta\mid X),\ \frac{1}{L(\theta\mid X)} D^2_{\theta\otimes\theta} L(\theta\mid X),\ \ldots,\ \frac{1}{L(\theta\mid X)} D^k_{\otimes\theta} L(\theta\mid X) \Big)'$

and

$\Delta = \big( \hat{g}(X) - g(\theta),\ \Delta f \big)',$

where the dimension of $\Delta f$ is $d + d^2 + \cdots + d^k$. The second order cumulant between $\hat{g}(X)$ and the derivatives $\frac{1}{L} D^j_{\otimes\theta} L(\theta\mid X)$, $j = 1, \ldots, k$, is

$\mathrm{Cum}_2\Big( \hat{g}(X),\ \frac{1}{L(\theta\mid X)} D^j_{\otimes\theta} L(\theta\mid X) \Big) = \int \hat{g}(x) \otimes D^j_{\otimes\theta} L(\theta\mid x)\, dx;$

interchanging differentiation and integration and using the unbiasedness $\int \hat{g}(x) L(\theta\mid x)\, dx = g(\theta)$, the covariance matrix between $\hat{g}(X)$ and $\frac{1}{L} D^j_{\otimes\theta} L(\theta\mid X)$ is $D^j_{\otimes\theta}\, g(\theta)$, calculated in vectorized form using (2.2).

The variance matrix $\mathrm{Var}(\Delta f)$ is singular, because the elements of the derivatives $D^j_{\otimes\theta} L(\theta\mid X)$ are not all distinct. Therefore we reduce the vector of derivatives, keeping distinct elements only. To make this precise, first consider the second order derivatives. Let $v(V_d)$ denote the vector of the lower triangular elements of a symmetric $d \times d$ matrix $V_d$; its dimension is $d(d+1)/2$. The duplication matrix $D_d$, of dimension $d^2 \times d(d+1)/2$, is defined by the relation

$D_d\, v(V_d) = \mathrm{Vec}\, V_d.$

It is easy to see that $D_d' D_d$ is nonsingular (the columns of $D_d$ are linearly independent, and each row has exactly one nonzero element); therefore the Moore-Penrose inverse $D_d^{+}$ of $D_d$ satisfies

$D_d^{+} = \big( D_d' D_d \big)^{-1} D_d', \qquad v(V_d) = D_d^{+}\, \mathrm{Vec}\, V_d$

(see Magnus and Neudecker (1999), Ch. 3, Sec. 8, for details). The operator $D^2_{\theta\otimes\theta}$, which is actually $\mathrm{Vec}$ of the symmetric matrix of second order partial derivatives, can therefore be reduced by $D_d^{+}$ to its distinct elements. We extend this procedure to higher order derivatives by defining reduction matrices $D^{+}_{jd}$ which select the distinct elements of $D^j_{\otimes\theta}$,
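A small sketch of the duplication matrix and its Moore-Penrose inverse (NumPy assumed; the constructor below is ours, written to match the definition $D_d v(V_d) = \mathrm{Vec}\, V_d$ with column-major Vec):

```python
import numpy as np

def duplication_matrix(d):
    """D_d such that D_d v(V) = vec(V) for symmetric V, where v(V) stacks
    the lower-triangular elements column by column (vech)."""
    D = np.zeros((d * d, d * (d + 1) // 2))
    col = 0
    for j in range(d):            # column of V
        for i in range(j, d):     # lower triangle, i >= j
            D[i + d * j, col] = 1.0   # entry (i,j) in column-major vec
            D[j + d * i, col] = 1.0   # symmetric counterpart (j,i)
            col += 1
    return D

d = 3
Dd = duplication_matrix(d)
Dplus = np.linalg.inv(Dd.T @ Dd) @ Dd.T            # Moore-Penrose inverse

rng = np.random.default_rng(9)
V = rng.normal(size=(d, d)); V = V + V.T           # a symmetric matrix
vech = V.T[np.triu_indices(d)]                     # v(V) in vech order
print(np.allclose(Dd @ vech, V.ravel(order='F')))        # D_d v(V) = vec V
print(np.allclose(Dplus @ V.ravel(order='F'), vech))     # v(V) = D_d^+ vec V
```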

so that $D^{+}_{jd} D^j_{\otimes\theta}$ is the vector of the distinct elements of $D^j_{\otimes\theta}$, listed in their original order. Now let

$C_{gj} = \mathrm{Cov}\Big( \hat{g}(X),\ \frac{1}{L(\theta\mid X)}\, D^{+}_{jd} D^j_{\otimes\theta} L(\theta\mid X) \Big),$

where the entries of $C_{gj}$ are those of the cumulant matrix $\mathrm{Cum}_2\big( \hat{g}(X),\ \frac{1}{L} D^{+}_{jd} D^j_{\otimes\theta} L \big)$. Now, considering the vector of all distinct and nonzero derivatives,

$\Delta f = \Big( \frac{1}{L} D_\theta L,\ \frac{1}{L} D_d^{+} D^2_{\theta\otimes\theta} L,\ \ldots,\ \frac{1}{L} D^{+}_{kd} D^k_{\otimes\theta} L \Big)', \qquad \Delta = \big( \hat{g}(X) - g(\theta),\ \Delta f \big)',$

we obtain the generalized Bhattacharya lower bound in the case of multiple parameters. It is obtained from the fact that the variance matrix of $\Delta$ is positive semidefinite, which implies

(3.16)  $\mathrm{Var}\big( \hat{g}(X) \big) \ge C_{g\Delta f}\, \big[ \mathrm{Var}\, \Delta f \big]^{-1} C_{g\Delta f}',$

where $C_{g\Delta f} = [C_{g1},\ C_{g2},\ \ldots,\ C_{gk}]$. The Cramér-Rao inequality is obtained by setting $k = 1$, i.e. by considering only the first derivative vector.

Let us now consider an example to illustrate the Bhattacharya bound given by (3.16).

Example 3. Let $(X_1, X_2, \ldots, X_n) = X \in \mathbb{R}^{nd}$ be a sequence of independent Gaussian random vectors with mean vector $\theta \in \mathbb{R}^d$ and variance matrix $I_d$. Suppose we want to estimate the function $g(\theta) = \|\theta\|^2 \in \mathbb{R}$. An unbiased estimator for $g(\theta)$ is

$\hat{g}(X) = \sum_{k=1}^{d} \Big( \bar{X}_k^2 - \frac{1}{n} \Big),$

where $\bar{X}_k$ is the sample mean computed from the $n$ observations on the $k$th component of the random vector $X$. The variance of the estimator $\hat{g}(X)$ is

(3.17)  $\mathrm{Var}\, \hat{g}(X) = \sum_{k=1}^{d} \Big( \frac{4\theta_k^2}{n} + \frac{2}{n^2} \Big) = \frac{4}{n} \|\theta\|^2 + \frac{2d}{n^2}.$

The Cramér-Rao bound for this estimator is $\frac{4}{n}\|\theta\|^2$, which is strictly less than the actual variance. The derivatives $D^j_{\otimes\theta}\, g(\theta)$ for $j > 2$ are zero, so we take $k = 2$. For $j \le 2$ we have

$\frac{1}{L(\theta\mid X)}\, D_\theta L(\theta\mid X) = n(\bar{X} - \theta), \qquad \frac{1}{L(\theta\mid X)}\, D^2_{\theta\otimes\theta} L(\theta\mid X) = n^2 (\bar{X} - \theta)^{\otimes 2} - n\, \mathrm{Vec}\, I_d;$

therefore (using all the elements of the second partial derivative matrix)

$\widetilde{\Delta f} = \Big( \big( n(\bar{X} - \theta) \big)',\ \big( n^2 (\bar{X} - \theta)^{\otimes 2} - n\, \mathrm{Vec}\, I_d \big)' \Big)'.$

Note that if we consider only the vector of first derivatives, then the second component above is not included in the lower bound, leaving only the smaller Cramér-Rao bound. Using the reduced number of elements for $\widetilde{\Delta f}$, we have

$\Delta f = \Big( \big( n(\bar{X} - \theta) \big)',\ \big( D_d^{+} \big[ n^2 (\bar{X} - \theta)^{\otimes 2} - n\, \mathrm{Vec}\, I_d \big] \big)' \Big)',$

and the variance matrix of $\Delta f$ has diagonal blocks $\mathrm{Var}\big( n(\bar{X} - \theta) \big) = n I_d$ and

$C_{22} = n^4\, D_d^{+}\, \mathrm{Var}\big( (\bar{X} - \theta)^{\otimes 2} \big)\, D_d^{+\prime}.$

Denote $N_d = \tfrac{1}{2}\big( I_{d^2} + K_{p_{1\leftrightarrow 2}}(d, d) \big)$; then the matrices satisfy

$N_d = N_d' = N_d^2 = D_d D_d^{+}, \qquad N_d D_d = D_d, \qquad D_d^{+} N_d = D_d^{+}$

(see Magnus and Neudecker (1999), Ch. 3, Secs. 7-8). Since $\mathrm{Var}\big( (\bar{X} - \theta)^{\otimes 2} \big) = \frac{2}{n^2} N_d$, we obtain

$C_{22} = n^4\, D_d^{+}\, \frac{2}{n^2} N_d\, D_d^{+\prime} = 2n^2\, D_d^{+} D_d^{+\prime} = 2n^2 \big( D_d' D_d \big)^{-1},$

which is invertible. The inverse of the variance matrix of $\Delta f$ is therefore

$\big[ \mathrm{Var}\, \Delta f \big]^{-1} = \begin{pmatrix} \frac{1}{n} I_d & 0 \\ 0 & \frac{1}{2n^2} D_d' D_d \end{pmatrix}.$

Now, to obtain the matrix $C_{g\Delta f} = [C_{g1},\ C_{g2}]$ we need

$C_{g1} = \mathrm{Cum}_2\Big( \hat{g}(X),\ \frac{1}{L} D_\theta L \Big) = D_\theta\, g(\theta) = 2\theta$

and

$C_{g2} = \mathrm{Cum}_2\Big( \hat{g}(X),\ \frac{1}{L} D_d^{+} D^2_{\theta\otimes\theta} L \Big) = D_d^{+}\, D^2_{\theta\otimes\theta}\, g(\theta) = 2\, D_d^{+}\, \mathrm{Vec}\, I_d.$

Finally we obtain

$C_{g\Delta f}\, \big[ \mathrm{Var}\, \Delta f \big]^{-1} C_{g\Delta f}' = \frac{4}{n} \|\theta\|^2 + \frac{2}{n^2} \big( \mathrm{Vec}\, I_d \big)' D_d^{+\prime}\, D_d' D_d\, D_d^{+}\, \mathrm{Vec}\, I_d = \frac{4}{n} \|\theta\|^2 + \frac{2}{n^2} \big( \mathrm{Vec}\, I_d \big)' N_d\, \mathrm{Vec}\, I_d = \frac{4}{n} \|\theta\|^2 + \frac{2d}{n^2},$

which is the Bhattacharya bound and is the same as the variance of the statistic $\hat{g}(X)$ given by (3.17); the bound is attained.
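A Monte Carlo sketch of Example 3 (NumPy assumed): since $\bar{X} \sim N(\theta, I_d/n)$, the sample means can be drawn directly; the simulated variance of $\hat{g}(X)$ matches the Bhattacharya bound and exceeds the Cramér-Rao bound:

```python
import numpy as np

rng = np.random.default_rng(10)
d, n, reps = 3, 20, 200_000
theta = np.array([1.0, -0.5, 2.0])

Xbar = rng.normal(theta, 1.0 / np.sqrt(n), size=(reps, d))  # Xbar ~ N(theta, I/n)
g_hat = (Xbar**2).sum(axis=1) - d / n                        # unbiased for ||theta||^2

var_mc = g_hat.var()
bhattacharya = 4 * (theta @ theta) / n + 2 * d / n**2
cramer_rao = 4 * (theta @ theta) / n
print(var_mc, bhattacharya, cramer_rao)  # variance matches the Bhattacharya bound
```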

4. Appendix

4.1. Commutation matrices. Kronecker products have the advantage that we can commute the factors of a product using linear operators called commutation matrices (see Magnus and Neudecker (1999), Ch. 3, Sec. 7, for details). We use these operators here in the case of vectors. Let $A$ be a matrix of order $m \times n$; the vector $\mathrm{Vec}\, A'$ is a permutation of the vector $\mathrm{Vec}\, A$, so there exists a permutation matrix $K_{mn}$ of order $mn \times mn$, called the commutation matrix, defined by the relation

$K_{mn}\, \mathrm{Vec}\, A = \mathrm{Vec}\, A'.$

Now, if $a$ is $m \times 1$ and $b$ is $n \times 1$, then

$K_{mn}(b \otimes a) = K_{mn}\, \mathrm{Vec}\,(a b') = \mathrm{Vec}\,(b a') = a \otimes b.$

From now on we shall use the more convenient notation $K_{mn} = K_{1\leftrightarrow 2}(n, m)$, which means that we are changing the order in a Kronecker product $b \otimes a$ of vectors $b \in \mathbb{R}^n$ and $a \in \mathbb{R}^m$.

Now consider a set of vectors $(a_1, a_2, \ldots, a_n)$ with dimensions $d_{(n)} = (d_1, d_2, \ldots, d_n)$ respectively. Define the matrix

$K_{j\leftrightarrow j+1}\big( d_{(n)} \big) = \Big( \prod_{i=1}^{j-1}{}^{\otimes} I_{d_i} \Big) \otimes K(d_{j+1}, d_j) \otimes \Big( \prod_{i=j+2}^{n}{}^{\otimes} I_{d_i} \Big),$

where $\prod^{\otimes}$ stands for the Kronecker product of the matrices indexed. Clearly

$K_{j\leftrightarrow j+1}\big( d_{(n)} \big) \prod_{i=1}^{n}{}^{\otimes} a_i = a_1 \otimes \cdots \otimes a_{j-1} \otimes a_{j+1} \otimes a_j \otimes a_{j+2} \otimes \cdots \otimes a_n.$

Therefore one is able to transpose (interchange) the elements $a_j$ and $a_{j+1}$ in a Kronecker product of vectors with the help of the matrix $K_{j\leftrightarrow j+1}(d_{(n)})$. Note that $K_{j+1\leftrightarrow j} \ne K_{j\leftrightarrow j+1}$ in general, because the dimensions $d_{j+1}$ and $d_j$ are not necessarily equal; if they are equal, the two matrices coincide.

It follows that for each permutation $p(1, \ldots, n) = (p(1), p(2), \ldots, p(n))$, $p \in \mathcal{P}_n$, there exists a matrix $K_{p(n)}(d_{(n)})$ such that

(4.1)  $K_{p(n)}\big( d_{(n)} \big) \prod_{i=1}^{n}{}^{\otimes} a_i = \prod_{i=1}^{n}{}^{\otimes} a_{p(i)},$

simply because any permutation $p(1, \ldots, n)$ can be obtained as a product of transpositions of neighboring elements. Since the permutation $p(1, \ldots, n)$ has an inverse, there exists an inverse $K^{-1}_{p(n)}(d_{(n)})$ for $K_{p(n)}(d_{(n)})$ as well. Note that the entries of $d_{(n)}$ are not necessarily equal; they are the fixed dimensions of the vectors $a_i$, $i = 1, \ldots, n$, so $K_{p(n)}(d_{(n)})$ is uniquely defined by the permutation $p(1, \ldots, n)$ and the set $d_{(n)}$. For example, the permutation $p_{2\to 4}$ is the product of the two interchanges $p_{2\to 3}$ and $p_{3\to 4}$, i.e.

$K_{p_{2\to 4}}\big( d_{(4)} \big) = K_{p_{3\to 4}}(d_1, d_3, d_2, d_4)\, K_{p_{2\to 3}}\big( d_{(4)} \big) = \big( I_{d_1} \otimes I_{d_3} \otimes K_{d_4 d_2} \big) \big( I_{d_1} \otimes K_{d_3 d_2} \otimes I_{d_4} \big).$

This process can be followed for any permutation $p(1, \ldots, n)$ and for any set $d_{(n)}$ of dimensions. In particular, transposing only the two elements $j$ and $k$ in the product will be denoted by $K_{p_{j\leftrightarrow k}}(d_{(n)})$. It will not be confusing to use both notations $K_{j\leftrightarrow k}$ and $K_{p_{j\leftrightarrow k}}$ (also $K_{j\to k}$ and $K_{p_{j\to k}}$) for the same operators. It can be seen that

(4.2)  $K'_{j\leftrightarrow k} = K^{-1}_{j\leftrightarrow k} = K_{k\leftrightarrow j}.$

Let $A$ be $m \times n$ and $B$ be $p \times q$ matrices; it is well known that

$K_{1\leftrightarrow 2}(m, p)\, (A \otimes B)\, K_{1\leftrightarrow 2}(q, n) = B \otimes A.$
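The defining relations $K_{mn}\, \mathrm{Vec}\, A = \mathrm{Vec}\, A'$ and $K_{mn}(b \otimes a) = a \otimes b$ can be verified with an explicit construction (a sketch assuming NumPy; np.kron on 1-D arrays matches the Kronecker product of vectors used here):

```python
import numpy as np

def commutation_matrix(m, n):
    """K_{mn} with K_{mn} vec(A) = vec(A') for m x n matrices A."""
    K = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            # vec(A) puts A[i,j] at position i + m*j; vec(A') puts it at j + n*i.
            K[j + n * i, i + m * j] = 1.0
    return K

m, n = 2, 3
K = commutation_matrix(m, n)
A = np.arange(6.0).reshape(m, n)
print(np.allclose(K @ A.ravel(order='F'), A.T.ravel(order='F')))  # True

a, b = np.arange(1.0, 3.0), np.arange(3.0, 6.0)   # a in R^m, b in R^n
print(np.allclose(K @ np.kron(b, a), np.kron(a, b)))  # K_{mn}(b (x) a) = a (x) b
```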

The same argument applied to Kronecker products of vectors leads to the technique of permuting matrices in a Kronecker product with the help of the commutation matrices $K_p$. Using the above notation we can write

(4.3)  $\mathrm{Vec}\,(A \otimes B) = \big( I_n \otimes K(m, q) \otimes I_p \big) \big( \mathrm{Vec}\, A \otimes \mathrm{Vec}\, B \big) = K_{2\leftrightarrow 3}(n, m, q, p) \big( \mathrm{Vec}\, A \otimes \mathrm{Vec}\, B \big).$

4.2. Differential operators. First we introduce the Jacobian matrix and higher order derivatives. Let $\lambda = (\lambda_1, \ldots, \lambda_d)' \in \mathbb{R}^d$ and let $\phi(\lambda) = [\phi_1(\lambda), \phi_2(\lambda), \ldots, \phi_m(\lambda)]'$ be a vector valued function which is differentiable in all its arguments (here and elsewhere $'$ denotes the transpose). The Jacobian matrix of $\phi$ is

$\frac{\partial \phi(\lambda)}{\partial\lambda'} = \begin{bmatrix} \partial\phi_1/\partial\lambda_1 & \cdots & \partial\phi_1/\partial\lambda_d \\ \vdots & & \vdots \\ \partial\phi_m/\partial\lambda_1 & \cdots & \partial\phi_m/\partial\lambda_d \end{bmatrix};$

here, and later on, the differential operator $\partial/\partial\lambda_j$ acts from right to left, keeping the matrix calculus valid. We can write this in a vector form as follows.

Definition 4. The operator $D_\lambda$, which produces a column vector of order $md$, is defined as

$D_\lambda\, \phi = \mathrm{Vec}\Big( \frac{\partial \phi(\lambda)'}{\partial\lambda} \Big).$

We refer to $D_\lambda$ as the K-derivative, and we can also write it as a Kronecker product,

$D_\lambda\, \phi = \phi(\lambda) \otimes \frac{\partial}{\partial\lambda},$

with the differential operator acting from the right. If we repeat the differentiation twice, we obtain

$D^2_\lambda\, \phi = D_\lambda\big( D_\lambda\, \phi \big) = \phi(\lambda) \otimes \frac{\partial}{\partial\lambda} \otimes \frac{\partial}{\partial\lambda},$

and in general (assuming differentiability $k$ times) the $k$th K-derivative is given by

$D^k_\lambda\, \phi = D_\lambda\big( D^{k-1}_\lambda\, \phi \big),$

which is a column vector of order $m d^k$, containing all possible $k$th order partial derivatives of the entries of $\phi$, ordered according to the Kronecker product $\frac{\partial}{\partial\lambda} \otimes \cdots \otimes \frac{\partial}{\partial\lambda}$.

In the following we give some additional properties of the operator $D_\lambda$ when applied to products of several functions. Let $K_{3\leftrightarrow 2}(m_1, m_2, d)$ denote the commutation matrix of size $m_1 m_2 d \times m_1 m_2 d$, changing the order in a Kronecker product of three vectors of dimensions $(m_1, m_2, d)$ so that the second and third places are interchanged: if $a_1, a_2, a_3$ are vectors of dimensions $m_1, m_2, d$ respectively, then

$K_{3\leftrightarrow 2}(m_1, m_2, d)\, (a_1 \otimes a_2 \otimes a_3) = a_1 \otimes a_3 \otimes a_2.$

Property 1 (Chain rule). If $\phi_1: \mathbb{R}^d \to \mathbb{R}^{m_1}$ and $\phi_2: \mathbb{R}^d \to \mathbb{R}^{m_2}$, then

(4.4)  $D_\lambda\big( \phi_1 \otimes \phi_2 \big) = K_{3\leftrightarrow 2}(m_1, m_2, d)\, \big[ \big( D_\lambda \phi_1 \big) \otimes \phi_2 \big] + \phi_1 \otimes \big( D_\lambda \phi_2 \big).$

This rule (4.4) can be extended to products of several functions: if $\phi_k: \mathbb{R}^d \to \mathbb{R}^{m_k}$, $k = 1, \ldots, M$, then

$D_\lambda \prod_{k=1}^{M}{}^{\otimes} \phi_k = \sum_{j=1}^{M} K_{p_{M+1\to j}}\big( m_{(M)}, d \big) \Big[ \Big( \prod_{k=1}^{j-1}{}^{\otimes} \phi_k \Big) \otimes \big[ D_\lambda\, \phi_j(\lambda) \big] \otimes \Big( \prod_{k=j+1}^{M}{}^{\otimes} \phi_k \Big) \Big],$

where the commutation matrix $K_{p_{M+1\to j}}(m_{(M)}, d)$ permutes the factors of dimensions $(m_1, \ldots, m_M, d)$ in the Kronecker product according to the permutation $p_{M+1\to j}$ of the integers $(1, \ldots, M+1)$.

Consider the special case $\phi_j(\lambda) = \lambda$. Differentiating according to the definition gives

(4.5)  $D_\lambda\, \lambda^{\otimes k} = \Big( \sum_{j=1}^{k} K_{p_{k\to j}}\big( d^{[k+1]} \big) \Big) \big( \lambda^{\otimes(k-1)} \otimes \mathrm{Vec}\, I_d \big),$

where $d^{[k]} = (d, d, \ldots, d)$ ($k$ entries).

Now suppose $\tau(\lambda) = (x'\lambda)^k$, where $x$ is a vector of constants; then $\tau$ is a scalar valued function. By using Property 1 and differentiating $r$ times we obtain

(4.6)  $D^r_\lambda\, (x'\lambda)^k = k(k-1)\cdots(k-r+1)\, (x'\lambda)^{k-r}\, x^{\otimes r}.$

The reason for (4.6) is that the Kronecker product $x^{\otimes k}$ is invariant under any permutation of its component vectors $x$, i.e.

$K_{p_{j\to l}}\big( d^{[l]} \big)\, x^{\otimes l} = x^{\otimes l}$

for any $l$ and $j$, so that

$\Big( \sum_{j=1}^{k} K_{p_{k\to j}}\big( d^{[k]} \big) \Big)\, x^{\otimes k} = k\, x^{\otimes k},$

and thus we obtain (4.6). In particular, for $r = k$,

$D^k_\lambda\, (x'\lambda)^k = k!\, x^{\otimes k}.$
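Formula (4.6) is easy to check numerically for $r = 1$, where it reduces to the gradient $k(x'\lambda)^{k-1} x$; a finite-difference sketch (NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(11)
d, k = 3, 4
x = rng.normal(size=d)
lam = rng.normal(size=d)

# Gradient (r = 1 case of (4.6)) of (x'lam)^k by central finite differences.
eps = 1e-6
grad_fd = np.array([
    ((x @ (lam + eps * e))**k - (x @ (lam - eps * e))**k) / (2 * eps)
    for e in np.eye(d)
])
grad_exact = k * (x @ lam)**(k - 1) * x
print(np.allclose(grad_fd, grad_exact, atol=1e-4))  # True
```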

4.3. Taylor series expansion of functions of several variables. Let $\phi(\lambda) = \phi(\lambda_1, \ldots, \lambda_d)$ and assume $\phi$ is differentiable several times in each variable. Our object here is to expand $\phi(\lambda)$ in a Taylor series, with the expression given in terms of the differential operators above; we use this expansion to define the characteristic function and the cumulant function in terms of differential operators. Let $\lambda = \lambda_{(d)} = (\lambda_1, \ldots, \lambda_d)' \in \mathbb{R}^d$. It is well known that the Taylor series of $\phi(\lambda)$ is

(4.7)  $\phi(\lambda) = \sum_{k_1, k_2, \ldots, k_d = 0}^{\infty} \frac{1}{k!}\, c(k)\, \lambda^k,$

where the coefficients are

$c(k) = \frac{\partial^{k_1 + \cdots + k_d} \phi(\lambda)}{\partial\lambda_1^{k_1} \cdots \partial\lambda_d^{k_d}} \bigg|_{\lambda = 0};$

here we used the multi-index notation

$k = (k_1, k_2, \ldots, k_d), \qquad k! = k_1!\, k_2! \cdots k_d!, \qquad \lambda^k = \prod_{j=1}^{d} \lambda_j^{k_j} = \lambda_1^{k_1} \lambda_2^{k_2} \cdots \lambda_d^{k_d}.$

The Taylor series (4.7) can be written in a form more informative for our purposes, namely

$\phi(\lambda) = \sum_{m=0}^{\infty} \frac{1}{m!}\, c(m; d)'\, \lambda^{\otimes m},$

where $c(m; d)$ is a column vector, the K-derivative of the function $\phi$:

$c(m; d) = D^m_\lambda\, \phi(\lambda) \big|_{\lambda = 0}.$

To see this, group the terms of (4.7) of total order $m$:

$\phi(\lambda) = \sum_{m=0}^{\infty} \frac{1}{m!} \sum_{\substack{k_1, \ldots, k_d \ge 0 \\ \sum_j k_j = m}} \frac{m!}{k!}\, c(k)\, \lambda^k = \sum_{m=0}^{\infty} \frac{1}{m!}\, c(m; d)'\, \lambda^{\otimes m},$

where $c(m; d)$, with appropriate entries from the set $\{c(k):\ \sum_j k_j = m\}$, has the same dimension $d^m$ as $\lambda^{\otimes m}$. To obtain the above expansion we proceed as follows. Let $x \in \mathbb{R}^d$ be a real vector and consider

$(x'\lambda)^m = \Big( \sum_{j=1}^{d} x_j \lambda_j \Big)^m = \sum_{\substack{k_1, \ldots, k_d \ge 0 \\ \sum_j k_j = m}} \frac{m!}{k!}\, x^k \lambda^k,$

and we can also write

$(x'\lambda)^m = \big( x^{\otimes m} \big)'\, \lambda^{\otimes m}.$

The entries of the vector $c(m; d)$ correspond to the operators $\partial^m/\partial\lambda^k$, which have the same symmetry as $x^k$; therefore, if $x^{\otimes m}$ is invariant under some permutation of its factors, then $c(m; d)$ is invariant as well. From equation (4.6) we obtain that indeed $c(m; d) = D^m_\lambda\, \phi(\lambda) \big|_{\lambda = 0}$.

References

[1] G. E. Andrews (1976). The Theory of Partitions, Encyclopedia of Mathematics and its Applications, Vol. 2, Addison-Wesley, Reading, Mass.-London-Amsterdam.
[2] O. E. Barndorff-Nielsen and D. R. Cox (1989). Asymptotic Techniques for Use in Statistics, Monographs on Statistics and Applied Probability, Chapman & Hall, London.
[3] A. Bhattacharya (1946). On some analogues to the amount of information and their uses in statistical estimation. Sankhyā 8, 1-14.
[4] D. R. Brillinger (2001). Time Series: Data Analysis and Theory, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA. Reprint of the 1981 edition.
[5] D. R. Brillinger and M. Rosenblatt (1967). Asymptotic theory of k-th order spectra. In Spectral Analysis of Time Series (B. Harris, ed.), 153-188, Wiley, New York.
[6] E. J. Hannan (1970). Multiple Time Series, John Wiley and Sons, New York-London-Sydney.
[7] V. P. Leonov and A. N. Shiryaev (1959). On a method of calculation of semi-invariants. Theor. Prob. Appl. 4, 319-329.
[8] Yu. V. Linnik (1970). A note on Rao-Cramér and Bhattacharya inequalities. Sankhyā Ser. A 32.
[9] E. Lukács (1955). Applications of Faà di Bruno's formula in statistics. Amer. Math. Monthly 62.
[10] K. V. Mardia (1970). Measures of multivariate skewness and kurtosis with applications. Biometrika 57, 519-530.
[11] P. McCullagh (1987). Tensor Methods in Statistics, Monographs on Statistics and Applied Probability, Chapman & Hall, London.
[12] P. McCullagh and D. R. Cox (1986). Invariants and likelihood ratio statistics. Ann. Statist. 14, no. 4, 1419-1430.
[13] J. R. Magnus and H. Neudecker (1999). Matrix Differential Calculus with Applications in Statistics and Econometrics, John Wiley & Sons, Chichester. Revised reprint of the 1988 original.
[14] I. Skovgaard (1986). A note on the differentiation of cumulants of log likelihood derivatives. Internat. Statist. Rev. 54, 29-32.
[15] T. P. Speed (1990). Invariant moments and cumulants. In Coding Theory and Design Theory, Part II, IMA Vol. Math. Appl., vol. 21, Springer-Verlag, New York.
[16] T. Subba Rao and W. K. Wong (1999). Some contributions to multivariate nonlinear time series and to bilinear models. In Asymptotics, Nonparametrics and Time Series (S. Ghosh, ed.), Marcel Dekker, New York.
[17] M. Taniguchi (1991). Higher Order Asymptotic Theory for Time Series Analysis, Lecture Notes in Statistics, vol. 68, Springer-Verlag, New York.
[18] Gy. Terdik (1999). Bilinear Stochastic Models and Related Problems of Nonlinear Time Series Analysis: A Frequency Domain Approach, Lecture Notes in Statistics, vol. 142, Springer-Verlag, New York.
[19] Gy. Terdik (2002). Higher order statistics and multivariate vector Hermite polynomials for nonlinear analysis of multidimensional time series. Teor. Imovirnost. ta Matem. Statyst. (Theory Probab. Math. Statist.) 66.
[20] W. K. Wong (1997). Frequency domain tests of multivariate Gaussianity and linearity. J. Time Ser. Anal. 18(2), 181-194.

Department of Statistics and Applied Probability, University of California, Santa Barbara, CA 93106-3110, USA
E-mail address: rao@pstat.ucsb.edu

School of Mathematics, University of Manchester, PO Box 88, Manchester M60 1QD, UK
E-mail address: Tata.SubbaRao@manchester.ac.uk

Faculty of Informatics, University of Debrecen, 4010 Debrecen, Pf. 12, Hungary
E-mail address: terdik@delfin.unideb.hu
URL: http://dragon.unideb.hu/~terdik/


More information

Linear estimation in models based on a graph

Linear estimation in models based on a graph Linear Algebra and its Applications 302±303 (1999) 223±230 www.elsevier.com/locate/laa Linear estimation in models based on a graph R.B. Bapat * Indian Statistical Institute, New Delhi 110 016, India Received

More information

component risk analysis

component risk analysis 273: Urban Systems Modeling Lec. 3 component risk analysis instructor: Matteo Pozzi 273: Urban Systems Modeling Lec. 3 component reliability outline risk analysis for components uncertain demand and uncertain

More information

Markov Chains, Stochastic Processes, and Matrix Decompositions

Markov Chains, Stochastic Processes, and Matrix Decompositions Markov Chains, Stochastic Processes, and Matrix Decompositions 5 May 2014 Outline 1 Markov Chains Outline 1 Markov Chains 2 Introduction Perron-Frobenius Matrix Decompositions and Markov Chains Spectral

More information

Stochastic Design Criteria in Linear Models

Stochastic Design Criteria in Linear Models AUSTRIAN JOURNAL OF STATISTICS Volume 34 (2005), Number 2, 211 223 Stochastic Design Criteria in Linear Models Alexander Zaigraev N. Copernicus University, Toruń, Poland Abstract: Within the framework

More information

a 11 x 1 + a 12 x a 1n x n = b 1 a 21 x 1 + a 22 x a 2n x n = b 2.

a 11 x 1 + a 12 x a 1n x n = b 1 a 21 x 1 + a 22 x a 2n x n = b 2. Chapter 1 LINEAR EQUATIONS 11 Introduction to linear equations A linear equation in n unknowns x 1, x,, x n is an equation of the form a 1 x 1 + a x + + a n x n = b, where a 1, a,, a n, b are given real

More information

On Hadamard and Kronecker Products Over Matrix of Matrices

On Hadamard and Kronecker Products Over Matrix of Matrices General Letters in Mathematics Vol 4, No 1, Feb 2018, pp13-22 e-issn 2519-9277, p-issn 2519-9269 Available online at http:// wwwrefaadcom On Hadamard and Kronecker Products Over Matrix of Matrices Z Kishka1,

More information

MA 8101 Stokastiske metoder i systemteori

MA 8101 Stokastiske metoder i systemteori MA 811 Stokastiske metoder i systemteori AUTUMN TRM 3 Suggested solution with some extra comments The exam had a list of useful formulae attached. This list has been added here as well. 1 Problem In this

More information

Advanced Digital Signal Processing -Introduction

Advanced Digital Signal Processing -Introduction Advanced Digital Signal Processing -Introduction LECTURE-2 1 AP9211- ADVANCED DIGITAL SIGNAL PROCESSING UNIT I DISCRETE RANDOM SIGNAL PROCESSING Discrete Random Processes- Ensemble Averages, Stationary

More information

YURI LEVIN, MIKHAIL NEDIAK, AND ADI BEN-ISRAEL

YURI LEVIN, MIKHAIL NEDIAK, AND ADI BEN-ISRAEL Journal of Comput. & Applied Mathematics 139(2001), 197 213 DIRECT APPROACH TO CALCULUS OF VARIATIONS VIA NEWTON-RAPHSON METHOD YURI LEVIN, MIKHAIL NEDIAK, AND ADI BEN-ISRAEL Abstract. Consider m functions

More information

Lecture 6: Contraction mapping, inverse and implicit function theorems

Lecture 6: Contraction mapping, inverse and implicit function theorems Lecture 6: Contraction mapping, inverse and implicit function theorems 1 The contraction mapping theorem De nition 11 Let X be a metric space, with metric d If f : X! X and if there is a number 2 (0; 1)

More information

1 Matrices and Systems of Linear Equations. a 1n a 2n

1 Matrices and Systems of Linear Equations. a 1n a 2n March 31, 2013 16-1 16. Systems of Linear Equations 1 Matrices and Systems of Linear Equations An m n matrix is an array A = (a ij ) of the form a 11 a 21 a m1 a 1n a 2n... a mn where each a ij is a real

More information

Mathematical Institute, University of Utrecht. The problem of estimating the mean of an observed Gaussian innite-dimensional vector

Mathematical Institute, University of Utrecht. The problem of estimating the mean of an observed Gaussian innite-dimensional vector On Minimax Filtering over Ellipsoids Eduard N. Belitser and Boris Y. Levit Mathematical Institute, University of Utrecht Budapestlaan 6, 3584 CD Utrecht, The Netherlands The problem of estimating the mean

More information

Hamilton Cycles in Digraphs of Unitary Matrices

Hamilton Cycles in Digraphs of Unitary Matrices Hamilton Cycles in Digraphs of Unitary Matrices G. Gutin A. Rafiey S. Severini A. Yeo Abstract A set S V is called an q + -set (q -set, respectively) if S has at least two vertices and, for every u S,

More information

EVALUATION OF A FAMILY OF BINOMIAL DETERMINANTS

EVALUATION OF A FAMILY OF BINOMIAL DETERMINANTS EVALUATION OF A FAMILY OF BINOMIAL DETERMINANTS CHARLES HELOU AND JAMES A SELLERS Abstract Motivated by a recent work about finite sequences where the n-th term is bounded by n, we evaluate some classes

More information

Linearized methods for ordinary di erential equations

Linearized methods for ordinary di erential equations Applied Mathematics and Computation 104 (1999) 109±19 www.elsevier.nl/locate/amc Linearized methods for ordinary di erential equations J.I. Ramos 1 Departamento de Lenguajes y Ciencias de la Computacion,

More information

Stochastic Processes

Stochastic Processes Introduction and Techniques Lecture 4 in Financial Mathematics UiO-STK4510 Autumn 2015 Teacher: S. Ortiz-Latorre Stochastic Processes 1 Stochastic Processes De nition 1 Let (E; E) be a measurable space

More information

Diagonal and Monomial Solutions of the Matrix Equation AXB = C

Diagonal and Monomial Solutions of the Matrix Equation AXB = C Iranian Journal of Mathematical Sciences and Informatics Vol. 9, No. 1 (2014), pp 31-42 Diagonal and Monomial Solutions of the Matrix Equation AXB = C Massoud Aman Department of Mathematics, Faculty of

More information

Gaussian vectors and central limit theorem

Gaussian vectors and central limit theorem Gaussian vectors and central limit theorem Samy Tindel Purdue University Probability Theory 2 - MA 539 Samy T. Gaussian vectors & CLT Probability Theory 1 / 86 Outline 1 Real Gaussian random variables

More information

Elementary Row Operations on Matrices

Elementary Row Operations on Matrices King Saud University September 17, 018 Table of contents 1 Definition A real matrix is a rectangular array whose entries are real numbers. These numbers are organized on rows and columns. An m n matrix

More information

Testing for Regime Switching: A Comment

Testing for Regime Switching: A Comment Testing for Regime Switching: A Comment Andrew V. Carter Department of Statistics University of California, Santa Barbara Douglas G. Steigerwald Department of Economics University of California Santa Barbara

More information

Chapter III. Quantum Computation. Mathematical preliminaries. A.1 Complex numbers. A.2 Linear algebra review

Chapter III. Quantum Computation. Mathematical preliminaries. A.1 Complex numbers. A.2 Linear algebra review Chapter III Quantum Computation These lecture notes are exclusively for the use of students in Prof. MacLennan s Unconventional Computation course. c 2017, B. J. MacLennan, EECS, University of Tennessee,

More information

arxiv: v1 [math.pr] 22 May 2008

arxiv: v1 [math.pr] 22 May 2008 THE LEAST SINGULAR VALUE OF A RANDOM SQUARE MATRIX IS O(n 1/2 ) arxiv:0805.3407v1 [math.pr] 22 May 2008 MARK RUDELSON AND ROMAN VERSHYNIN Abstract. Let A be a matrix whose entries are real i.i.d. centered

More information

DESCRIPTION OF SIMPLE MODULES FOR SCHUR SUPERALGEBRA S(2j2)

DESCRIPTION OF SIMPLE MODULES FOR SCHUR SUPERALGEBRA S(2j2) DESCRIPTION OF SIMPLE MODULES FOR SCHUR SUPERALGEBRA S(22) A.N. GRISHKOV AND F. MARKO Abstract. The goal of this paper is to describe explicitly simple modules for Schur superalgebra S(22) over an algebraically

More information

Stable Process. 2. Multivariate Stable Distributions. July, 2006

Stable Process. 2. Multivariate Stable Distributions. July, 2006 Stable Process 2. Multivariate Stable Distributions July, 2006 1. Stable random vectors. 2. Characteristic functions. 3. Strictly stable and symmetric stable random vectors. 4. Sub-Gaussian random vectors.

More information

Linear Algebra and Matrix Inversion

Linear Algebra and Matrix Inversion Jim Lambers MAT 46/56 Spring Semester 29- Lecture 2 Notes These notes correspond to Section 63 in the text Linear Algebra and Matrix Inversion Vector Spaces and Linear Transformations Matrices are much

More information

ECE 275A Homework 6 Solutions

ECE 275A Homework 6 Solutions ECE 275A Homework 6 Solutions. The notation used in the solutions for the concentration (hyper) ellipsoid problems is defined in the lecture supplement on concentration ellipsoids. Note that θ T Σ θ =

More information

ON VARIANCE COVARIANCE COMPONENTS ESTIMATION IN LINEAR MODELS WITH AR(1) DISTURBANCES. 1. Introduction

ON VARIANCE COVARIANCE COMPONENTS ESTIMATION IN LINEAR MODELS WITH AR(1) DISTURBANCES. 1. Introduction Acta Math. Univ. Comenianae Vol. LXV, 1(1996), pp. 129 139 129 ON VARIANCE COVARIANCE COMPONENTS ESTIMATION IN LINEAR MODELS WITH AR(1) DISTURBANCES V. WITKOVSKÝ Abstract. Estimation of the autoregressive

More information

Elements of Asymptotic Theory. James L. Powell Department of Economics University of California, Berkeley

Elements of Asymptotic Theory. James L. Powell Department of Economics University of California, Berkeley Elements of Asymtotic Theory James L. Powell Deartment of Economics University of California, Berkeley Objectives of Asymtotic Theory While exact results are available for, say, the distribution of the

More information

More Powerful Tests for Homogeneity of Multivariate Normal Mean Vectors under an Order Restriction

More Powerful Tests for Homogeneity of Multivariate Normal Mean Vectors under an Order Restriction Sankhyā : The Indian Journal of Statistics 2007, Volume 69, Part 4, pp. 700-716 c 2007, Indian Statistical Institute More Powerful Tests for Homogeneity of Multivariate Normal Mean Vectors under an Order

More information

ELEMENTARY LINEAR ALGEBRA

ELEMENTARY LINEAR ALGEBRA ELEMENTARY LINEAR ALGEBRA K R MATTHEWS DEPARTMENT OF MATHEMATICS UNIVERSITY OF QUEENSLAND First Printing, 99 Chapter LINEAR EQUATIONS Introduction to linear equations A linear equation in n unknowns x,

More information

MATH 304 Linear Algebra Lecture 10: Linear independence. Wronskian.

MATH 304 Linear Algebra Lecture 10: Linear independence. Wronskian. MATH 304 Linear Algebra Lecture 10: Linear independence. Wronskian. Spanning set Let S be a subset of a vector space V. Definition. The span of the set S is the smallest subspace W V that contains S. If

More information

2 Formulation. = arg = 2 (1)

2 Formulation. = arg = 2 (1) Acoustic di raction by an impedance wedge Aladin H. Kamel (alaahassan.kamel@yahoo.com) PO Box 433 Heliopolis Center 11757, Cairo, Egypt Abstract. We consider the boundary-value problem for the Helmholtz

More information

290 J.M. Carnicer, J.M. Pe~na basis (u 1 ; : : : ; u n ) consisting of minimally supported elements, yet also has a basis (v 1 ; : : : ; v n ) which f

290 J.M. Carnicer, J.M. Pe~na basis (u 1 ; : : : ; u n ) consisting of minimally supported elements, yet also has a basis (v 1 ; : : : ; v n ) which f Numer. Math. 67: 289{301 (1994) Numerische Mathematik c Springer-Verlag 1994 Electronic Edition Least supported bases and local linear independence J.M. Carnicer, J.M. Pe~na? Departamento de Matematica

More information

Math 313 Chapter 1 Review

Math 313 Chapter 1 Review Math 313 Chapter 1 Review Howard Anton, 9th Edition May 2010 Do NOT write on me! Contents 1 1.1 Introduction to Systems of Linear Equations 2 2 1.2 Gaussian Elimination 3 3 1.3 Matrices and Matrix Operations

More information

Polynomial functions over nite commutative rings

Polynomial functions over nite commutative rings Polynomial functions over nite commutative rings Balázs Bulyovszky a, Gábor Horváth a, a Institute of Mathematics, University of Debrecen, Pf. 400, Debrecen, 4002, Hungary Abstract We prove a necessary

More information

Math 443/543 Graph Theory Notes 5: Graphs as matrices, spectral graph theory, and PageRank

Math 443/543 Graph Theory Notes 5: Graphs as matrices, spectral graph theory, and PageRank Math 443/543 Graph Theory Notes 5: Graphs as matrices, spectral graph theory, and PageRank David Glickenstein November 3, 4 Representing graphs as matrices It will sometimes be useful to represent graphs

More information

CHAPTER 0 PRELIMINARY MATERIAL. Paul Vojta. University of California, Berkeley. 18 February 1998

CHAPTER 0 PRELIMINARY MATERIAL. Paul Vojta. University of California, Berkeley. 18 February 1998 CHAPTER 0 PRELIMINARY MATERIAL Paul Vojta University of California, Berkeley 18 February 1998 This chapter gives some preliminary material on number theory and algebraic geometry. Section 1 gives basic

More information

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.

More information

Simple Estimators for Monotone Index Models

Simple Estimators for Monotone Index Models Simple Estimators for Monotone Index Models Hyungtaik Ahn Dongguk University, Hidehiko Ichimura University College London, James L. Powell University of California, Berkeley (powell@econ.berkeley.edu)

More information

SIMILAR-ON-THE-BOUNDARY TESTS FOR MOMENT INEQUALITIES EXIST, BUT HAVE POOR POWER. Donald W. K. Andrews. August 2011

SIMILAR-ON-THE-BOUNDARY TESTS FOR MOMENT INEQUALITIES EXIST, BUT HAVE POOR POWER. Donald W. K. Andrews. August 2011 SIMILAR-ON-THE-BOUNDARY TESTS FOR MOMENT INEQUALITIES EXIST, BUT HAVE POOR POWER By Donald W. K. Andrews August 2011 COWLES FOUNDATION DISCUSSION PAPER NO. 1815 COWLES FOUNDATION FOR RESEARCH IN ECONOMICS

More information

Linear Algebra, part 2 Eigenvalues, eigenvectors and least squares solutions

Linear Algebra, part 2 Eigenvalues, eigenvectors and least squares solutions Linear Algebra, part 2 Eigenvalues, eigenvectors and least squares solutions Anna-Karin Tornberg Mathematical Models, Analysis and Simulation Fall semester, 2013 Main problem of linear algebra 2: Given

More information

BUMPLESS SWITCHING CONTROLLERS. William A. Wolovich and Alan B. Arehart 1. December 27, Abstract

BUMPLESS SWITCHING CONTROLLERS. William A. Wolovich and Alan B. Arehart 1. December 27, Abstract BUMPLESS SWITCHING CONTROLLERS William A. Wolovich and Alan B. Arehart 1 December 7, 1995 Abstract This paper outlines the design of bumpless switching controllers that can be used to stabilize MIMO plants

More information

Stochastic integral. Introduction. Ito integral. References. Appendices Stochastic Calculus I. Geneviève Gauthier.

Stochastic integral. Introduction. Ito integral. References. Appendices Stochastic Calculus I. Geneviève Gauthier. Ito 8-646-8 Calculus I Geneviève Gauthier HEC Montréal Riemann Ito The Ito The theories of stochastic and stochastic di erential equations have initially been developed by Kiyosi Ito around 194 (one of

More information

Renormalization of the fermion self energy

Renormalization of the fermion self energy Part I Renormalization of the fermion self energy Electron self energy in general gauge The self energy in n = 4 Z i 0 = ( ie 0 ) d n k () n (! dimensions is i k )[g ( a 0 ) k k k ] i /p + /k m 0 use Z

More information

Appendix A. Math Reviews 03Jan2007. A.1 From Simple to Complex. Objectives. 1. Review tools that are needed for studying models for CLDVs.

Appendix A. Math Reviews 03Jan2007. A.1 From Simple to Complex. Objectives. 1. Review tools that are needed for studying models for CLDVs. Appendix A Math Reviews 03Jan007 Objectives. Review tools that are needed for studying models for CLDVs.. Get you used to the notation that will be used. Readings. Read this appendix before class.. Pay

More information

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88 Math Camp 2010 Lecture 4: Linear Algebra Xiao Yu Wang MIT Aug 2010 Xiao Yu Wang (MIT) Math Camp 2010 08/10 1 / 88 Linear Algebra Game Plan Vector Spaces Linear Transformations and Matrices Determinant

More information

Boolean Inner-Product Spaces and Boolean Matrices

Boolean Inner-Product Spaces and Boolean Matrices Boolean Inner-Product Spaces and Boolean Matrices Stan Gudder Department of Mathematics, University of Denver, Denver CO 80208 Frédéric Latrémolière Department of Mathematics, University of Denver, Denver

More information

Hands-on Matrix Algebra Using R

Hands-on Matrix Algebra Using R Preface vii 1. R Preliminaries 1 1.1 Matrix Defined, Deeper Understanding Using Software.. 1 1.2 Introduction, Why R?.................... 2 1.3 Obtaining R.......................... 4 1.4 Reference Manuals

More information