Complex random vectors and Hermitian quadratic forms
1 Complex random vectors and Hermitian quadratic forms
Gilles Ducharme*, Pierre Lafaye de Micheaux** and Bastien Marchina*
*Université Montpellier II, I3M - EPS
**Université de Montréal, DMS
26 March 2013
Bastien Marchina (UM2), Complex random vectors, 26 March 2013, 49 slides
2 Outline:
1 Introduction
2 Complex random vectors
3 Hermitian quadratic forms in complex random vectors
4 Statistics of complex random vectors
5 Applications to goodness-of-fit tests
3 Introduction
4 Introduction: Complex random data
Complex data appear in various situations. For instance, periodic signals can be represented by (a collection of) complex numbers, and radar and fMRI data are usually aggregated as complex-valued data. Thus arises the need for statistical modeling using complex random elements. Many results have already been established by the signal processing community, but they are not well known in the statistical community.
5 Introduction: Example of complex random data
Fig.: fMRI data representation
6 Introduction: Motivation, fMRI activation with complex data
D.B. Rowe and B.R. Logan (2004) use normality assumptions to build an fMRI activation model using the full complex signal.
Fig.: fMRI data representation
Thus arises the need for a proper test of complex normality.
7 Introduction: Complex random variables in mathematical statistics
The probability distribution P of X can be characterised by its characteristic function
$\varphi_X(t) = E(e^{i t^\top X})$.
The empirical characteristic function
$\varphi_n(t) = \frac{1}{n} \sum_{k=1}^{n} e^{i t^\top X_k}$
of independent copies $X_1, \ldots, X_n$ of $X \sim P$ is an estimator of the characteristic function of P, and it is complex valued.
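As a small illustration (not from the slides), the empirical characteristic function is a one-liner; this sketch assumes, purely for the example, that the sample is standard normal, so the true c.f. is $e^{-t^2/2}$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw independent copies X_1, ..., X_n; X ~ N(0, 1) is assumed
# only for this illustration, so the true c.f. is exp(-t^2 / 2).
n = 5000
x = rng.standard_normal(n)

def ecf(t, sample):
    """Empirical characteristic function (1/n) * sum_k exp(i t X_k)."""
    return np.mean(np.exp(1j * t * sample))

# The e.c.f. is complex valued and converges to the true c.f.
t = 1.0
err = abs(ecf(t, x) - np.exp(-t**2 / 2))
print(err)  # small, of order 1/sqrt(n)
```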
8 Complex random vectors
9 Complex random vectors
If X and Y are random vectors in $\mathbb{R}^d$, $Z = X + iY$ is a random vector in $\mathbb{C}^d$. $\tilde{Z} = (Z^\top, Z^H)^\top$ is the augmented vector associated with Z, and
$\tilde{X} = M \tilde{Z}, \quad \tilde{Z} = 2 M^H \tilde{X}, \quad 2 M M^H = I_{2d}$, (1)
with $\tilde{X} = (X^\top, Y^\top)^\top$ and
$M = \frac{1}{2} \begin{pmatrix} I_d & I_d \\ -i I_d & i I_d \end{pmatrix}, \quad M^{-1} = \begin{pmatrix} I_d & i I_d \\ I_d & -i I_d \end{pmatrix}$. (2)
The characteristic function of Z is
$\varphi_Z(\nu) = E\left( e^{i \operatorname{Re}(\nu^H Z)} \right)$. (3)
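A quick numerical check of the augmented-vector relations above (a sketch assuming the normalization of M shown on this slide):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3

# Real and imaginary parts of a complex random vector Z = X + iY.
X = rng.standard_normal(d)
Y = rng.standard_normal(d)
Z = X + 1j * Y

I = np.eye(d)
# M maps the augmented vector (Z, conj(Z)) to the real composite (X, Y).
M = 0.5 * np.block([[I, I], [-1j * I, 1j * I]])

Z_aug = np.concatenate([Z, Z.conj()])
X_comp = np.concatenate([X, Y])

assert np.allclose(M @ Z_aug, X_comp)                  # X~ = M Z~
assert np.allclose(2 * M.conj().T @ X_comp, Z_aug)     # Z~ = 2 M^H X~
assert np.allclose(2 * M @ M.conj().T, np.eye(2 * d))  # 2 M M^H = I_2d
```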
10 Complex random vectors: First and second order parameters
Expectation: $\mu = E(Z) = \mu_X + i \mu_Y$,
Positive semi-definite Hermitian covariance matrix:
$\Gamma = E\left[ (Z-\mu)(Z-\mu)^H \right]$, (4)
Complex symmetric relation matrix:
$P = E\left[ (Z-\mu)(Z-\mu)^\top \right]$, (5)
Positive semi-definite Hermitian covariance-relation matrix:
$\Gamma_P = E\left[ (\tilde{Z}-\tilde{\mu})(\tilde{Z}-\tilde{\mu})^H \right] = \begin{pmatrix} \Gamma & P \\ P^H & \bar{\Gamma} \end{pmatrix}$. (6)
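These moments can be estimated by their sample analogues; a minimal sketch with simulated data (the correlation between X and Y is an arbitrary choice, made only so that Z is improper):

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 20000, 2

# Improper complex data: correlated real and imaginary parts.
X = rng.standard_normal((n, d))
Y = 0.5 * X + np.sqrt(0.75) * rng.standard_normal((n, d))
Z = X + 1j * Y

Zc = Z - Z.mean(axis=0)
Gamma = Zc.conj().T @ Zc / n       # covariance  E[(Z-mu)(Z-mu)^H]
P = Zc.T @ Zc / n                  # relation    E[(Z-mu)(Z-mu)^T]

# Augmented covariance-relation matrix.
Gamma_P = np.block([[Gamma, P], [P.conj().T, Gamma.conj()]])

assert np.allclose(Gamma, Gamma.conj().T)  # Gamma is Hermitian
assert np.allclose(P, P.T)                 # P is (complex) symmetric
print(np.abs(P[0, 0]))                     # clearly nonzero: Z is improper
```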
11 Complex random vectors: Complex normal distribution
Definition (van den Bos (1995), or Picinbono (1996))
$Z = X + iY \in \mathbb{C}^d$ follows the complex normal distribution iff
$\begin{pmatrix} X \\ Y \end{pmatrix} \sim N_{2d}\left( \begin{pmatrix} \mu_X \\ \mu_Y \end{pmatrix}, \begin{pmatrix} \Sigma_{XX} & \Sigma_{XY} \\ \Sigma_{YX} & \Sigma_{YY} \end{pmatrix} \right)$. (7)
We write $Z \sim CN_d(\mu, \Gamma, P)$, or alternatively $\tilde{Z} \sim CN_{2d}(\tilde{\mu}, \Gamma_P)$, with
$\Sigma_X = M \Gamma_P M^H, \quad \Gamma_P = M^{-1} \Sigma_X (M^H)^{-1}$. (8)
12 Complex random vectors: Proper normal distribution
Consider $Z \sim CN_d(\mu, \Gamma, 0)$, i.e., Z is a complex normal random vector with P = 0. Then
$f_Z(z) = \frac{1}{\pi^d |\Gamma|} \exp\left\{ -(z-\mu)^H \Gamma^{-1} (z-\mu) \right\}$. (9)
This is the density of the two-parameter complex normal distribution introduced by Wooding (1956). It is usually referred to as the proper or circular complex normal distribution.
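One way to simulate from the proper law (a sketch; the particular Γ below is an arbitrary example) is to draw independent real and imaginary parts, each with covariance Γ/2, and check that the relation matrix is indeed zero:

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 50000, 2

# Proper complex normal CN_d(0, Gamma, 0): independent real and imaginary
# parts, each N(0, Gamma / 2).
Gamma = np.array([[2.0, 0.5], [0.5, 1.0]])
L = np.linalg.cholesky(Gamma / 2)
Z = (rng.standard_normal((n, d)) + 1j * rng.standard_normal((n, d))) @ L.T

Gamma_hat = Z.T @ Z.conj() / n     # estimates E[Z Z^H] = Gamma
P_hat = Z.T @ Z / n                # estimates E[Z Z^T] = 0 (properness)

print(np.max(np.abs(P_hat)))       # close to 0
```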
13 Complex random vectors: Proper and improper complex normal data
Fig.: Scatter plots (real part vs. imaginary part) of complex normal samples for kn = 0, 0.3, 0.6 and 0.9.
14 Complex random vectors: Characteristic function and density
The characteristic function of Z is
$\varphi_Z(l) = \exp\left\{ i \operatorname{Re}(l^H \mu) - \tfrac{1}{4}\left( l^H \Gamma l + \operatorname{Re}(l^H P \bar{l}) \right) \right\}$. (10)
If $\Gamma_P$ is invertible,
$f_Z(z) = \frac{1}{\pi^d |\Gamma_P|^{1/2}} \exp\left\{ -\tfrac{1}{2} (\tilde{z}-\tilde{\mu})^H \Gamma_P^{-1} (\tilde{z}-\tilde{\mu}) \right\}$. (11)
15 Complex random vectors: Complex normal distribution, main results
Conservation by linear transforms
Let $Z \sim CN_d(\mu, \Gamma, P)$ and $A \in \mathbb{C}^{m \times d}$; then
$AZ \sim CN_m(A\mu, A \Gamma A^H, A P A^\top)$. (12)
Conservation by augmented linear transforms
Let $\tilde{Z} \sim CN_{2d}(\tilde{\mu}, \Gamma_P)$ and $A_B \in \mathbb{C}^{2m \times 2d}$ such that
$A_B = \begin{pmatrix} A & B \\ \bar{B} & \bar{A} \end{pmatrix}$; (13)
then $A_B \tilde{Z} \sim CN_{2m}(A_B \tilde{\mu}, A_B \Gamma_P A_B^H)$.
16 Complex random vectors: Complex normal distribution, independence
Theorem: Independence between complex Gaussian variables
Let $Z = (Z_1, Z_2)^\top \sim CN_2(\mu, \Gamma, P)$. $Z_1$ and $Z_2$ are independent if and only if
$\Gamma = \begin{pmatrix} \gamma_1 & 0 \\ 0 & \gamma_2 \end{pmatrix}$ and $P = \begin{pmatrix} p_1 & 0 \\ 0 & p_2 \end{pmatrix}$.
17 Complex random vectors: Complex normal distribution, independence
Corollary: Independence between complex Gaussian vectors
Let $Z \sim CN_{d_1+d_2}(\mu, \Gamma, P)$. Partition Z as $(Z_1, Z_2)$ where $Z_1$ is $d_1 \times 1$ and $Z_2$ is $d_2 \times 1$; likewise partition $\mu$ into $(\mu_1, \mu_2)$, $\Gamma$ into $\begin{pmatrix} \Gamma_1 & \Gamma_{12} \\ \Gamma_{12}^H & \Gamma_2 \end{pmatrix}$, and similarly for P. $Z_1$ and $Z_2$ are independent if and only if $\Gamma_{12} = P_{12} = 0$.
Corollary: Independence between components of a complex Gaussian vector
Let $Z = (Z_1, \ldots, Z_d)^\top \sim CN_d(\mu, \Gamma, P)$. The components $Z_1, \ldots, Z_d$ are independent if and only if $\Gamma$ and $P$ are diagonal.
18 Hermitian quadratic forms
19 Hermitian quadratic forms
Let $Z \sim CN_d(\mu, \Gamma, P)$. We study positive quadratic forms of the form $\tilde{Z}^H R \tilde{Z}$. First, notice that
$\tilde{Z}^H R \tilde{Z} = 2 \tilde{X}^\top S \tilde{X}$, (14)
with $S = M R M^{-1}$. It leads to
$R = M^{-1} S M = \begin{pmatrix} S_{11}+S_{22}+i(S_{12}-S_{21}) & S_{11}-S_{22}+i(S_{12}+S_{21}) \\ S_{11}-S_{22}-i(S_{12}+S_{21}) & S_{11}+S_{22}-i(S_{12}-S_{21}) \end{pmatrix}$,
and finally, because S is a symmetric matrix,
$R = \begin{pmatrix} G & K \\ K^H & \bar{G} \end{pmatrix}$. (15)
20 Hermitian quadratic forms
Theorem
Let $Z \sim CN_d(\mu, \Gamma, P)$ and
$R = \begin{pmatrix} G & K \\ K^H & \bar{G} \end{pmatrix}$. (16)
Then
$\tilde{Z}^H R \tilde{Z} \sim \sum_{k=1}^{2d} \alpha_k \chi^2_1(\delta_k^2)$, (17)
where the $\chi^2_1(\delta_k^2)$ are independent, the $\alpha_k$ are the eigenvalues of $R \Gamma_P$ and the $\delta_k$ are functions of $\mu$, $\Gamma$, P and R.
21 Hermitian quadratic forms
Corollary
Let $Z \sim CN_d(0, \Gamma, P)$ and
$R = \begin{pmatrix} G & K \\ K^H & \bar{G} \end{pmatrix}$. (18)
Then
$\tilde{Z}^H R \tilde{Z} \sim \sum_{k=1}^{2d} \alpha_k \chi^2_1$, (19)
where the $\chi^2_1$ are independent and the $\alpha_k$ are the eigenvalues of $R \Gamma_P$.
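The weighted chi-square representation can be checked by simulation. This sketch takes Γ = I and P = 0, so that Γ_P = I_{2d} and the α_k are simply the eigenvalues of R; the blocks G (Hermitian) and K (symmetric) are arbitrary examples:

```python
import numpy as np

rng = np.random.default_rng(4)
n, d = 100000, 2

# Z ~ CN_d(0, I, 0): real and imaginary parts i.i.d. N(0, 1/2),
# so Gamma = E[Z Z^H] = I and the augmented covariance Gamma_P = I_{2d}.
Z = (rng.standard_normal((n, d)) + 1j * rng.standard_normal((n, d))) / np.sqrt(2)
Z_aug = np.hstack([Z, Z.conj()])   # rows are augmented vectors (Z, conj Z)

G = np.array([[2.0, 0.5], [0.5, 1.0]])   # Hermitian block (arbitrary)
K = np.array([[0.4, 0.1], [0.1, 0.2]])   # symmetric block (arbitrary)
R = np.block([[G, K], [K.conj().T, G.conj()]])

# Quadratic form Z~^H R Z~ for every sample.
q = np.einsum('ij,jk,ik->i', Z_aug.conj(), R, Z_aug).real

# Weighted chi-square mixture: mean = sum(alpha), variance = 2 * sum(alpha^2).
alpha = np.linalg.eigvalsh(R)            # eigenvalues of R Gamma_P = R here
print(q.mean(), alpha.sum())             # close
print(q.var(), 2 * (alpha**2).sum())     # close
```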
22 Hermitian quadratic forms: Moore-Penrose inverse
The Moore-Penrose inverse of A is the unique matrix $A^+$ such that
$A A^+ A = A$, $\quad A^+ A A^+ = A^+$, $\quad (A A^+)^H = A A^+$, $\quad (A^+ A)^H = A^+ A$.
A few properties of the Moore-Penrose inverse are:
1 Let $\alpha \neq 0$. Then $(\alpha A)^+ = \alpha^{-1} A^+$,
2 $(\bar{A})^+ = \overline{A^+}$, $(A^\top)^+ = (A^+)^\top$ and $(A^H)^+ = (A^+)^H$,
3 Let A be $m \times n$ complex and B be $n \times p$ complex. If $A^H A = I_n$ or $B B^H = I_n$, then $(AB)^+ = B^+ A^+$,
4 If A is invertible, then $A^+ = A^{-1}$.
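The defining conditions and some of the properties are easy to verify numerically with `numpy.linalg.pinv` (the matrices below are arbitrary examples, with A deliberately rank-deficient):

```python
import numpy as np

rng = np.random.default_rng(5)

# A rank-deficient complex matrix (third column = sum of the first two).
A = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))
A[:, 2] = A[:, 0] + A[:, 1]

Ap = np.linalg.pinv(A)

# The four Moore-Penrose conditions.
assert np.allclose(A @ Ap @ A, A)
assert np.allclose(Ap @ A @ Ap, Ap)
assert np.allclose((A @ Ap).conj().T, A @ Ap)
assert np.allclose((Ap @ A).conj().T, Ap @ A)

# (A^H)^+ = (A^+)^H, and for an invertible matrix A^+ = A^{-1}.
assert np.allclose(np.linalg.pinv(A.conj().T), Ap.conj().T)
B = 3 * np.eye(3) + 0.1 * (rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3)))
assert np.allclose(np.linalg.pinv(B), np.linalg.inv(B))
```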
23 Hermitian quadratic forms: More results on Hermitian quadratic forms
Theorem
Let $Z \sim CN_d(0, \Gamma, P)$ and let $\Gamma_P^+$ be the Moore-Penrose inverse of the covariance-relation matrix $\Gamma_P$ of Z. Then
$\xi = \tilde{Z}^H \Gamma_P^+ \tilde{Z} \sim \chi^2_q$, (20)
where $q \leq 2d$ is the rank of $\Gamma_P$.
Corollary
Let $Z \sim CN_d(0, \Gamma, P)$, such that $\Gamma_P$ is invertible. Then
$\xi = \tilde{Z}^H \Gamma_P^{-1} \tilde{Z} \sim \chi^2_{2d}$. (21)
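A simulation sketch of the corollary in the invertible case, building Z from an arbitrary full-rank real covariance for (X, Y); the resulting ξ should behave like a χ² with 2d degrees of freedom (mean 2d, variance 4d):

```python
import numpy as np

rng = np.random.default_rng(6)
n, d = 100000, 2

# Improper complex normal: (X, Y) real Gaussian with full-rank covariance.
C = rng.standard_normal((2 * d, 2 * d))
Sigma = C @ C.T + 0.5 * np.eye(2 * d)      # covariance of (X, Y)
L = np.linalg.cholesky(Sigma)
XY = rng.standard_normal((n, 2 * d)) @ L.T
Z = XY[:, :d] + 1j * XY[:, d:]
Z_aug = np.hstack([Z, Z.conj()])

# Population augmented covariance-relation matrix Gamma_P = T Sigma T^H,
# where T maps (X, Y) to (Z, conj Z).
I = np.eye(d)
T = np.block([[I, 1j * I], [I, -1j * I]])
Gamma_P = T @ Sigma @ T.conj().T

xi = np.einsum('ij,jk,ik->i', Z_aug.conj(), np.linalg.inv(Gamma_P), Z_aug).real

# xi ~ chi-square(2d): mean 2d = 4, variance 4d = 8 for d = 2.
print(xi.mean(), xi.var())
```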
24 Hermitian quadratic forms: More results on Hermitian quadratic forms
Theorem
Let $Z \sim CN_d(\mu, \Gamma, P)$, $\tilde{Z} = (Z^\top, Z^H)^\top$, $m = (\mu^\top, \mu^H)^\top$, and A and B two $2d \times 2d$ matrices that share the block structure of matrix R in (16). Then $\tilde{Z}^H A \tilde{Z}$ and $\tilde{Z}^H B \tilde{Z}$ are independent if and only if
(i) $\Gamma_P A \Gamma_P B \Gamma_P = 0$,
(ii) $\Gamma_P A \Gamma_P B m = \Gamma_P B \Gamma_P A m = 0$,
(iii) $m^H A \Gamma_P B m = 0$.
25 Hermitian quadratic forms: More results on Hermitian quadratic forms
Theorem
Let $Y \sim CN_d(0, \Gamma_Y, P_Y)$ and $Z \sim CN_d(0, \Gamma_Z, P_Z)$, such that Y and Z are independent. Let $\Gamma_{P,Y}$ and $\Gamma_{P,Z}$ be the covariance-relation matrices of Y and Z respectively. Then
$\frac{b}{a} \cdot \frac{\tilde{Y}^H \Gamma_{P,Y}^+ \tilde{Y}}{\tilde{Z}^H \Gamma_{P,Z}^+ \tilde{Z}} \sim F(a, b)$,
where $a \leq 2d$ is the rank of $\Gamma_{P,Y}$, $b \leq 2d$ is the rank of $\Gamma_{P,Z}$ and F(a, b) denotes the Fisher distribution with a and b degrees of freedom.
26 Hermitian quadratic forms: More results on Hermitian quadratic forms
Corollary
Let $Z = (Z_1, Z_2)^\top \sim CN_{d_1+d_2}(0, \Gamma, P)$. Let $\Gamma_{P,1}$ and $\Gamma_{P,2}$ be the covariance-relation matrices of the marginal distributions of $Z_1$ and $Z_2$, with respective ranks a and b, $a + b \leq 2d$. Then
$\frac{b}{a} \cdot \frac{\tilde{Z}_1^H \Gamma_{P,1}^+ \tilde{Z}_1}{\tilde{Z}_2^H \Gamma_{P,2}^+ \tilde{Z}_2} \sim F(a, b)$ (22)
if and only if $Z_1$ and $Z_2$ are independent.
27 Statistics of complex random vectors
28 Statistics of complex random vectors: Central limit theorem for complex random vectors
Theorem
Let W be a random vector in $\mathbb{C}^d$ with $E(W) = \mu$ and definite covariance and relation matrices $\Gamma$ and P. Let $W_1, \ldots, W_n$ be independent copies of W. Then
$\sqrt{n} \left( \frac{1}{n} \sum_{j=1}^{n} W_j - \mu \right) \xrightarrow{d} CN_d(0, \Gamma, P)$. (23)
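A one-dimensional simulation sketch of this CLT with a deliberately non-Gaussian W (centered exponential real part, independent uniform imaginary part; these distributional choices are arbitrary). The normalized mean should have second-order parameters close to Γ and P:

```python
import numpy as np

rng = np.random.default_rng(7)
reps, n = 2000, 500

# Non-Gaussian complex variable W with E(W) = 0.
def sample_w(size):
    u = rng.exponential(1.0, size) - 1.0   # centered, skewed real part
    v = rng.uniform(-1.0, 1.0, size)       # independent imaginary part
    return u + 1j * v

# Normalized sample means sqrt(n) * (W-bar - 0), replicated many times.
s = np.array([np.sqrt(n) * sample_w(n).mean() for _ in range(reps)])

# Population parameters: Gamma = Var(u) + Var(v) = 1 + 1/3 = 4/3,
#                        P     = Var(u) - Var(v) = 1 - 1/3 = 2/3.
print(np.mean(np.abs(s) ** 2))   # close to 4/3  (Gamma)
print(np.mean(s ** 2))           # close to 2/3  (P)
```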
29 Statistics of complex random vectors: Maximum likelihood and method of moments estimators
$Z_1, \ldots, Z_n$ is a random sample from $CN_d(\mu, \Gamma, P)$. The method of moments and maximum likelihood estimators of $\mu$, $\Gamma$ and P are identical.
Parameter estimates:
$\hat{\mu} = \frac{1}{n} \sum_{k=1}^{n} Z_k = \bar{Z}$,
$\hat{\Gamma} = \frac{1}{n} \sum_{k=1}^{n} (Z_k - \bar{Z})(Z_k - \bar{Z})^H$,
$\hat{P} = \frac{1}{n} \sum_{k=1}^{n} (Z_k - \bar{Z})(Z_k - \bar{Z})^\top$,
$\hat{\Gamma}_P = \frac{1}{n} \sum_{k=1}^{n} (\tilde{Z}_k - \bar{\tilde{Z}})(\tilde{Z}_k - \bar{\tilde{Z}})^H$.
30 Statistics of complex random vectors: Asymptotics in the case d = 1
In this case,
$\sqrt{n} \begin{pmatrix} \hat{\mu} - \mu \\ \hat{\gamma} - \gamma \\ \hat{p} - p \end{pmatrix} \xrightarrow{d} CN_3(0, \Gamma_\theta, P_\theta)$, (24)
where
$\Gamma_\theta = \begin{pmatrix} \gamma & 0 & 0 \\ 0 & \gamma^2 + |p|^2 & 2\bar{p}\gamma \\ 0 & 2p\gamma & 2\gamma^2 \end{pmatrix}, \quad P_\theta = \begin{pmatrix} p & 0 & 0 \\ 0 & \gamma^2 + p^2 & 2p\gamma \\ 0 & 2p\gamma & 2p^2 \end{pmatrix}$. (25)
31 Applications to goodness-of-fit tests
32 Applications to goodness-of-fit tests: Empirical characteristic function and empirical characteristic process
Let $X_1, \ldots, X_n$ be a random sample with characteristic function $\varphi_X(\cdot)$. In order to test $H_0: \varphi_X(\cdot) = \varphi_0(\cdot)$, it is customary to introduce the empirical characteristic process
$U_n(t) = \sqrt{n}\,(\varphi_n(t) - \varphi_0(t))$. (26)
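A sketch of the process under H0, taking φ0 to be the standard normal c.f. and a few arbitrary evaluation points; under the null, U_n is centered with bounded variance, so its magnitude stays of order one:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 2000
x = rng.standard_normal(n)   # sample drawn under H0: X ~ N(0, 1)

def ecf(t, sample):
    """Empirical characteristic function at a single point t."""
    return np.mean(np.exp(1j * t * sample))

def phi0(t):
    """Characteristic function of N(0, 1), the null distribution."""
    return np.exp(-t**2 / 2)

t_grid = np.array([0.5, 1.0, 1.5])   # arbitrary evaluation points
U_n = np.sqrt(n) * np.array([ecf(t, x) - phi0(t) for t in t_grid])

# Under H0, E(U_n(t)) = 0 and E|U_n(t)|^2 <= 1, so |U_n(t)| = O(1).
print(np.abs(U_n))
```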
33 Applications to goodness-of-fit tests: The empirical characteristic process
Under $H_0$, the empirical characteristic process is such that $E(U_n(\cdot)) = 0$ and
$C(s, t) = E(U_n(s)\overline{U_n(t)}) = \varphi_0(s-t) - \varphi_0(s)\overline{\varphi_0(t)}$, (27)
$P(s, t) = E(U_n(s) U_n(t)) = \varphi_0(s+t) - \varphi_0(s)\varphi_0(t) = C(s, -t)$.
34 Applications to goodness-of-fit tests: Koutrouvelis's goodness-of-fit test
Koutrouvelis (1980) gives a test statistic for $H_0: P = P_0$ based on the evaluation of $U_n(\cdot)$ at cleverly chosen points $t_1, \ldots, t_d$. With $W_n = (R_1, \ldots, R_d, I_1, \ldots, I_d)^\top$, $R_k = \operatorname{Re}(U_n(t_k))$ and $I_k = \operatorname{Im}(U_n(t_k))$, he shows that
$W_n^\top \Sigma^{-1} W_n \xrightarrow{d} \chi^2_{2d}$ (28)
under $H_0$, with $\Sigma$ the covariance matrix of $W_n$ under $H_0$.
35 Applications to goodness-of-fit tests: Use of a complex quadratic form
Using a complex framework, we have the following test statistic:
$\xi_n = \tilde{U}_n^H \Gamma_P^+ \tilde{U}_n \xrightarrow{d} \chi^2_q$, (29)
with $\tilde{U}_n = (U_n^\top, U_n^H)^\top$, $U_n = (U_n(t_1), \ldots, U_n(t_d))^\top$, and $q < 2d$ the rank of
$\Gamma_P = \begin{pmatrix} \Gamma_{U_n} & P_{U_n} \\ P_{U_n}^H & \bar{\Gamma}_{U_n} \end{pmatrix}$, (30)
where
$\Gamma_{U_n} = \begin{pmatrix} C(t_1, t_1) & \cdots & C(t_1, t_d) \\ \vdots & & \vdots \\ C(t_d, t_1) & \cdots & C(t_d, t_d) \end{pmatrix}, \quad P_{U_n} = \begin{pmatrix} P(t_1, t_1) & \cdots & P(t_1, t_d) \\ \vdots & & \vdots \\ P(t_d, t_1) & \cdots & P(t_d, t_d) \end{pmatrix}$. (31)
36 Applications to goodness-of-fit tests: Test of a simple hypothesis for complex distributions
A sample of complex random vectors has the following e.c.f.:
$\varphi_n(\nu) = \frac{1}{n} \sum_{k=1}^{n} e^{i \operatorname{Re}(\nu^H Z_k)}$, (32)
and $U_n(\cdot)$ has the same properties as in the real case. Once again,
$\tilde{U}_n^H \Gamma_P^+ \tilde{U}_n \xrightarrow{d} \chi^2_q$, (33)
with $U_n = (U_n(\nu_1), \ldots, U_n(\nu_d))^\top$ and $q < 2d$ the rank of $\Gamma_P$.
37 Applications to goodness-of-fit tests: Test for complex normality
Back to our signal processing problem, we need to test the composite hypothesis
$H_0: P \in \left\{ CN_1(\mu, \gamma, p) : \mu \in \mathbb{C},\ \gamma \in \mathbb{R},\ p \in \mathbb{C} \text{ such that } |p| < \gamma \right\}$. (34)
The parameters are unknown and must be estimated in order to build a test statistic.
38 Applications to goodness-of-fit tests: Test for complex normality
From the circularized empirical characteristic process
$U_{n,Y}(\nu) = \sqrt{n}\,(\varphi_{n,Y}(\nu) - \varphi_0(\nu))$, (35)
where $\varphi_0(\nu)$ is the c.f. of a $CN_1(0, 1, 0)$ and $\tilde{Y} = \Gamma_P^{-1/2}(\tilde{Z} - \tilde{\mu})$, so that $Y \sim CN_1(0, 1, 0)$, we build
$U_{n,\hat{Y}}(\nu) = \sqrt{n}\,(\varphi_{n,\hat{Y}}(\nu) - \varphi_0(\nu))$, (36)
where $\hat{Y}$ is obtained through the m.l.e.s of the parameters.
39 Applications to goodness-of-fit tests: Test for complex normality
The modified $\xi_n$ test statistic follows:
$\hat{\xi}_n = \tilde{U}_{n,\hat{Y}}^H\, \Gamma_P(\nu)^{-1}\, \tilde{U}_{n,\hat{Y}}$. (37)
Here $U_{n,\hat{Y}} = (U_{n,\hat{Y}}(\nu_1), \ldots, U_{n,\hat{Y}}(\nu_m))^\top$. The matrix $\Gamma_P(\nu)$, albeit complicated, does not depend on the true value of the parameters, but only on the choice of points.
40 Applications to goodness-of-fit tests: Simulation, case m = 1
Tab.: Quantiles of the distribution of $\hat{\xi}_n$ with m = 1 (columns: n, $E_0$, $V_0$, $Q_{90}$, $Q_{95}$, and the reference $\chi^2$ quantiles).
41 Applications to goodness-of-fit tests: P-P plot for the case m = 1
Fig.: P-P plot of $\xi(1)$ against a $\chi^2(2)$ distribution (empirical vs. theoretical cumulative distribution).
42 Applications to goodness-of-fit tests: Simulation, case m = 3
Tab.: Quantiles of the distribution of $\hat{\xi}_n$ with m = 3 (columns: n, $E_0$, $V_0$, $Q_{90}$, $Q_{95}$, and the reference $\chi^2$ quantiles).
43 Applications to goodness-of-fit tests: P-P plot for the case m = 3
Fig.: P-P plot of $\xi(3)$ against a $\chi^2(6)$ distribution (empirical vs. theoretical cumulative distribution).
44 Applications to goodness-of-fit tests: Back to the real data
Fig.: $\hat{\xi}_n$ applied to the datasets (observed value of $\xi(3)$ for the fMRI dataset, by sample number).
45 Applications to goodness-of-fit tests: Back to the real data
Datasets associated with a high $\hat{\xi}_n$ value show unusually high values in the first and last observations. Here is the 5050th dataset.
Fig.: fMRI data representation (modulus of the 5050th dataset).
46 Applications to goodness-of-fit tests: Back to the real data
We can build an image of the brain slice with colors depending on $\hat{\xi}_n$.
Fig.: Brain representation using $\xi(3)$ on the fMRI dataset.
47 Applications to goodness-of-fit tests: Back to the real data
Alternatively, using $\sqrt{\hat{\xi}_n}$:
Fig.: Brain representation using $\sqrt{\xi(3)}$ on the fMRI dataset.
48 Applications to goodness-of-fit tests: References
1. T. Adali, P. J. Schreier and L. L. Scharf, Complex-Valued Signal Processing: The Proper Way to Deal With Impropriety, IEEE Transactions on Signal Processing, vol. 59, no. 11, November 2011.
2. D. B. Rowe and B. R. Logan, A complex way to compute fMRI activation, NeuroImage, vol. 23, 2004.
3. I. A. Koutrouvelis, A Goodness-of-fit Test of Simple Hypotheses Based on the Empirical Characteristic Function, Biometrika, vol. 67, no. 1, 1980.
4. A. van den Bos, The Multivariate Complex Normal Distribution: a Generalization, IEEE Transactions on Information Theory, vol. 41, 1995.
5. R. A. Wooding, The Multivariate Distribution of Complex Normal Variables, Biometrika, vol. 43, 1956.
49 Thank you for your attention
Published version: Gilles R. Ducharme, Pierre Lafaye de Micheaux and Bastien Marchina, The complex multinormal distribution, quadratic forms in complex random vectors and an omnibus goodness-of-fit test for the complex normal distribution, Annals of the Institute of Statistical Mathematics (2016) 68:77-104, DOI 10.1007/s10463-014-0486-5.
More information2 Functions of random variables
2 Functions of random variables A basic statistical model for sample data is a collection of random variables X 1,..., X n. The data are summarised in terms of certain sample statistics, calculated as
More informationMULTIPLE-CHANNEL DETECTION IN ACTIVE SENSING. Kaitlyn Beaudet and Douglas Cochran
MULTIPLE-CHANNEL DETECTION IN ACTIVE SENSING Kaitlyn Beaudet and Douglas Cochran School of Electrical, Computer and Energy Engineering Arizona State University, Tempe AZ 85287-576 USA ABSTRACT The problem
More informationThe linear model is the most fundamental of all serious statistical models encompassing:
Linear Regression Models: A Bayesian perspective Ingredients of a linear model include an n 1 response vector y = (y 1,..., y n ) T and an n p design matrix (e.g. including regressors) X = [x 1,..., x
More informationQuantile prediction of a random eld extending the gaussian setting
Quantile prediction of a random eld extending the gaussian setting 1 Joint work with : Véronique Maume-Deschamps 1 and Didier Rullière 2 1 Institut Camille Jordan Université Lyon 1 2 Laboratoire des Sciences
More informationAppendix 2. The Multivariate Normal. Thus surfaces of equal probability for MVN distributed vectors satisfy
Appendix 2 The Multivariate Normal Draft Version 1 December 2000, c Dec. 2000, B. Walsh and M. Lynch Please email any comments/corrections to: jbwalsh@u.arizona.edu THE MULTIVARIATE NORMAL DISTRIBUTION
More informationReview of Classical Least Squares. James L. Powell Department of Economics University of California, Berkeley
Review of Classical Least Squares James L. Powell Department of Economics University of California, Berkeley The Classical Linear Model The object of least squares regression methods is to model and estimate
More informationJoint work with Nottingham colleagues Simon Preston and Michail Tsagris.
/pgf/stepx/.initial=1cm, /pgf/stepy/.initial=1cm, /pgf/step/.code=1/pgf/stepx/.expanded=- 10.95415pt,/pgf/stepy/.expanded=- 10.95415pt, /pgf/step/.value required /pgf/images/width/.estore in= /pgf/images/height/.estore
More informationChapter 3. Matrices. 3.1 Matrices
40 Chapter 3 Matrices 3.1 Matrices Definition 3.1 Matrix) A matrix A is a rectangular array of m n real numbers {a ij } written as a 11 a 12 a 1n a 21 a 22 a 2n A =.... a m1 a m2 a mn The array has m rows
More informationCS 195-5: Machine Learning Problem Set 1
CS 95-5: Machine Learning Problem Set Douglas Lanman dlanman@brown.edu 7 September Regression Problem Show that the prediction errors y f(x; ŵ) are necessarily uncorrelated with any linear function of
More informationComposite Hypotheses and Generalized Likelihood Ratio Tests
Composite Hypotheses and Generalized Likelihood Ratio Tests Rebecca Willett, 06 In many real world problems, it is difficult to precisely specify probability distributions. Our models for data may involve
More informationNonparametric Regression With Gaussian Processes
Nonparametric Regression With Gaussian Processes From Chap. 45, Information Theory, Inference and Learning Algorithms, D. J. C. McKay Presented by Micha Elsner Nonparametric Regression With Gaussian Processes
More informationSTAT 512 sp 2018 Summary Sheet
STAT 5 sp 08 Summary Sheet Karl B. Gregory Spring 08. Transformations of a random variable Let X be a rv with support X and let g be a function mapping X to Y with inverse mapping g (A = {x X : g(x A}
More informationStein s Method and Characteristic Functions
Stein s Method and Characteristic Functions Alexander Tikhomirov Komi Science Center of Ural Division of RAS, Syktyvkar, Russia; Singapore, NUS, 18-29 May 2015 Workshop New Directions in Stein s method
More informationSampling Distributions
Merlise Clyde Duke University September 8, 2016 Outline Topics Normal Theory Chi-squared Distributions Student t Distributions Readings: Christensen Apendix C, Chapter 1-2 Prostate Example > library(lasso2);
More informationLecture 20: Linear model, the LSE, and UMVUE
Lecture 20: Linear model, the LSE, and UMVUE Linear Models One of the most useful statistical models is X i = β τ Z i + ε i, i = 1,...,n, where X i is the ith observation and is often called the ith response;
More informationSTAT 730 Chapter 4: Estimation
STAT 730 Chapter 4: Estimation Timothy Hanson Department of Statistics, University of South Carolina Stat 730: Multivariate Analysis 1 / 23 The likelihood We have iid data, at least initially. Each datum
More informationA Probability Review
A Probability Review Outline: A probability review Shorthand notation: RV stands for random variable EE 527, Detection and Estimation Theory, # 0b 1 A Probability Review Reading: Go over handouts 2 5 in
More informationECON 4160, Autumn term Lecture 1
ECON 4160, Autumn term 2017. Lecture 1 a) Maximum Likelihood based inference. b) The bivariate normal model Ragnar Nymoen University of Oslo 24 August 2017 1 / 54 Principles of inference I Ordinary least
More informationStatement: With my signature I confirm that the solutions are the product of my own work. Name: Signature:.
MATHEMATICAL STATISTICS Homework assignment Instructions Please turn in the homework with this cover page. You do not need to edit the solutions. Just make sure the handwriting is legible. You may discuss
More informationEconomics 620, Lecture 5: exp
1 Economics 620, Lecture 5: The K-Variable Linear Model II Third assumption (Normality): y; q(x; 2 I N ) 1 ) p(y) = (2 2 ) exp (N=2) 1 2 2(y X)0 (y X) where N is the sample size. The log likelihood function
More informationDiagonalizing Matrices
Diagonalizing Matrices Massoud Malek A A Let A = A k be an n n non-singular matrix and let B = A = [B, B,, B k,, B n ] Then A n A B = A A 0 0 A k [B, B,, B k,, B n ] = 0 0 = I n 0 A n Notice that A i B
More informationforms Christopher Engström November 14, 2014 MAA704: Matrix factorization and canonical forms Matrix properties Matrix factorization Canonical forms
Christopher Engström November 14, 2014 Hermitian LU QR echelon Contents of todays lecture Some interesting / useful / important of matrices Hermitian LU QR echelon Rewriting a as a product of several matrices.
More informationCross-Validation with Confidence
Cross-Validation with Confidence Jing Lei Department of Statistics, Carnegie Mellon University WHOA-PSI Workshop, St Louis, 2017 Quotes from Day 1 and Day 2 Good model or pure model? Occam s razor We really
More informationPerformance Analysis of Coarray-Based MUSIC and the Cramér-Rao Bound
Performance Analysis of Coarray-Based MUSIC and the Cramér-Rao Bound Mianzhi Wang, Zhen Zhang, and Arye Nehorai Preston M. Green Department of Electrical & Systems Engineering Washington University in
More informationLecture 15. Hypothesis testing in the linear model
14. Lecture 15. Hypothesis testing in the linear model Lecture 15. Hypothesis testing in the linear model 1 (1 1) Preliminary lemma 15. Hypothesis testing in the linear model 15.1. Preliminary lemma Lemma
More informationIntroduction to Probability and Stocastic Processes - Part I
Introduction to Probability and Stocastic Processes - Part I Lecture 2 Henrik Vie Christensen vie@control.auc.dk Department of Control Engineering Institute of Electronic Systems Aalborg University Denmark
More informationTesting a Normal Covariance Matrix for Small Samples with Monotone Missing Data
Applied Mathematical Sciences, Vol 3, 009, no 54, 695-70 Testing a Normal Covariance Matrix for Small Samples with Monotone Missing Data Evelina Veleva Rousse University A Kanchev Department of Numerical
More informationCross-Validation with Confidence
Cross-Validation with Confidence Jing Lei Department of Statistics, Carnegie Mellon University UMN Statistics Seminar, Mar 30, 2017 Overview Parameter est. Model selection Point est. MLE, M-est.,... Cross-validation
More information[3] (b) Find a reduced row-echelon matrix row-equivalent to ,1 2 2
MATH Key for sample nal exam, August 998 []. (a) Dene the term \reduced row-echelon matrix". A matrix is reduced row-echelon if the following conditions are satised. every zero row lies below every nonzero
More informationSummary of Chapters 7-9
Summary of Chapters 7-9 Chapter 7. Interval Estimation 7.2. Confidence Intervals for Difference of Two Means Let X 1,, X n and Y 1, Y 2,, Y m be two independent random samples of sizes n and m from two
More informationStat 502 Design and Analysis of Experiments General Linear Model
1 Stat 502 Design and Analysis of Experiments General Linear Model Fritz Scholz Department of Statistics, University of Washington December 6, 2013 2 General Linear Hypothesis We assume the data vector
More informationCopula Regression RAHUL A. PARSA DRAKE UNIVERSITY & STUART A. KLUGMAN SOCIETY OF ACTUARIES CASUALTY ACTUARIAL SOCIETY MAY 18,2011
Copula Regression RAHUL A. PARSA DRAKE UNIVERSITY & STUART A. KLUGMAN SOCIETY OF ACTUARIES CASUALTY ACTUARIAL SOCIETY MAY 18,2011 Outline Ordinary Least Squares (OLS) Regression Generalized Linear Models
More informationQualifying Exam CS 661: System Simulation Summer 2013 Prof. Marvin K. Nakayama
Qualifying Exam CS 661: System Simulation Summer 2013 Prof. Marvin K. Nakayama Instructions This exam has 7 pages in total, numbered 1 to 7. Make sure your exam has all the pages. This exam will be 2 hours
More informationPeter Hoff Linear and multilinear models April 3, GLS for multivariate regression 5. 3 Covariance estimation for the GLM 8
Contents 1 Linear model 1 2 GLS for multivariate regression 5 3 Covariance estimation for the GLM 8 4 Testing the GLH 11 A reference for some of this material can be found somewhere. 1 Linear model Recall
More informationPh.D. Qualifying Exam Friday Saturday, January 3 4, 2014
Ph.D. Qualifying Exam Friday Saturday, January 3 4, 2014 Put your solution to each problem on a separate sheet of paper. Problem 1. (5166) Assume that two random samples {x i } and {y i } are independently
More informationSTA 2101/442 Assignment 3 1
STA 2101/442 Assignment 3 1 These questions are practice for the midterm and final exam, and are not to be handed in. 1. Suppose X 1,..., X n are a random sample from a distribution with mean µ and variance
More informationADMISSIONS EXERCISE. MSc in Mathematical and Computational Finance. For entry 2018
MScMCF 2017 ADMISSIONS EXERCISE MSc in Mathematical and Computational Finance For entry 2018 The questions are based on Linear Algebra, Calculus, Probability, Partial Differential Equations, and Algorithms.
More informationMultinomial Data. f(y θ) θ y i. where θ i is the probability that a given trial results in category i, i = 1,..., k. The parameter space is
Multinomial Data The multinomial distribution is a generalization of the binomial for the situation in which each trial results in one and only one of several categories, as opposed to just two, as in
More informationGaussian vectors and central limit theorem
Gaussian vectors and central limit theorem Samy Tindel Purdue University Probability Theory 2 - MA 539 Samy T. Gaussian vectors & CLT Probability Theory 1 / 86 Outline 1 Real Gaussian random variables
More informationGraduate Econometrics I: Asymptotic Theory
Graduate Econometrics I: Asymptotic Theory Yves Dominicy Université libre de Bruxelles Solvay Brussels School of Economics and Management ECARES Yves Dominicy Graduate Econometrics I: Asymptotic Theory
More informationAsymptotic Statistics-VI. Changliang Zou
Asymptotic Statistics-VI Changliang Zou Kolmogorov-Smirnov distance Example (Kolmogorov-Smirnov confidence intervals) We know given α (0, 1), there is a well-defined d = d α,n such that, for any continuous
More information