Wilks' Λ and Hotelling's T²
Wilks' Λ and Hotelling's T²
Steffen Lauritzen, University of Oxford
BS2 Statistical Inference, Lecture 13, Hilary Term 2008
March 2, 2008
If X and Y are independent, X ~ Γ(α_x, γ) and Y ~ Γ(α_y, γ), then the ratio X/(X + Y) follows a Beta distribution:

    B = X/(X + Y) ~ B(α_x, α_y).

A multivariate analogue of this result involves the Wishart distribution and asserts: if W_1 ~ W_d(f_1, Σ) and W_2 ~ W_d(f_2, Σ) with f_1 ≥ d, then the distribution of

    Λ = det(W_1)/det(W_1 + W_2)

does not depend on Σ and is denoted by Λ(d, f_1, f_2). The distribution is known as Wilks' distribution.
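The Gamma-to-Beta result above is easy to check by simulation. The sketch below (using numpy/scipy; the parameter values are arbitrary illustrative choices, not from the slides) compares Monte Carlo moments of X/(X + Y) with those of B(α_x, α_y).

```python
import numpy as np
from scipy import stats

alpha_x, alpha_y, gamma, n = 3.0, 5.0, 2.0, 200_000  # illustrative values
rng = np.random.default_rng(0)

# X ~ Gamma(alpha_x, gamma), Y ~ Gamma(alpha_y, gamma), independent;
# numpy's gamma takes (shape, scale), so scale = 1/gamma. The common rate
# gamma cancels in the ratio X/(X + Y).
x = rng.gamma(alpha_x, 1.0 / gamma, size=n)
y = rng.gamma(alpha_y, 1.0 / gamma, size=n)
b = x / (x + y)

# Compare with the Beta(alpha_x, alpha_y) distribution.
beta = stats.beta(alpha_x, alpha_y)
print(b.mean(), beta.mean())  # both close to 3/8
print(b.var(), beta.var())
```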
To see that the distribution of Λ does not depend on Σ, we choose a matrix A such that AΣAᵀ = I_d. Then

    W̃_i = A W_i Aᵀ ~ W_d(f_i, I_d)

and

    Λ̃ = det(W̃_1)/det(W̃_1 + W̃_2)
      = {det(A) det(W_1) det(Aᵀ)} / {det(A) det(W_1 + W_2) det(Aᵀ)} = Λ.

Clearly, the distribution of Λ̃ does not depend on Σ, and as Λ̃ = Λ this also holds for the latter.
Wilks' distribution is closely related to the Beta distribution. It holds that

    Λ =_D ∏_{i=1}^{d} B_i,

where the B_i are independent and follow Beta distributions, B_i ~ B{(f_1 − i + 1)/2, f_2/2}. Indeed, the distribution of (W_1 + W_2)^{-1} W_1 is also known as the multivariate Beta distribution.
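Assuming the Beta parameters {(f_1 − i + 1)/2, f_2/2} above, each factor has mean (f_1 − i + 1)/(f_1 + f_2 − i + 1), so E[Λ] is the product of these. The sketch below (arbitrary illustrative d, f_1, f_2) compares this exact mean with a direct Wishart simulation of Λ; since the law of Λ is free of Σ, we may take Σ = I.

```python
import numpy as np
from scipy import stats

d, f1, f2, n = 3, 10, 4, 50_000        # illustrative dimension, dfs, replications
rng = np.random.default_rng(1)
sigma = np.eye(d)                       # the law of Lambda is free of Sigma

w1 = stats.wishart(df=f1, scale=sigma).rvs(n, random_state=rng)
w2 = stats.wishart(df=f2, scale=sigma).rvs(n, random_state=rng)
lam = np.linalg.det(w1) / np.linalg.det(w1 + w2)

# E[Lambda] from the Beta representation: each B_i has mean
# {(f1 - i + 1)/2} / {(f1 - i + 1)/2 + f2/2} = (f1 - i + 1)/(f1 + f2 - i + 1).
i = np.arange(1, d + 1)
exact_mean = np.prod((f1 - i + 1) / (f1 + f2 - i + 1))
print(lam.mean(), exact_mean)           # Monte Carlo mean vs exact mean
```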
We first need a useful result about determinants of block matrices. If A is a d × d symmetric matrix partitioned into blocks of dimension r × r, r × s, and s × s as

    A = ( A_11  A_12 )
        ( A_21  A_22 ),

it holds that

    det A = det(A_11 − A_12 A_22^{-1} A_21) det(A_22).   (1)

Here the entire expression should be considered equal to 0 if A_22 is not invertible and det(A_22) = 0.
This follows from a simple calculation:

    det(A) = det( A_11  A_12 ) det( I_{r×r}           0_{r×s} )
                 ( A_21  A_22 )    ( −A_22^{-1} A_21  I_{s×s} )

           = det( A_11 − A_12 A_22^{-1} A_21   A_12 )
                 ( 0_{s×r}                     A_22 )

           = det(A_11 − A_12 A_22^{-1} A_21) det(A_22).
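Identity (1) can be verified numerically on a random symmetric positive definite matrix (a sketch; the block sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
r, s = 2, 3                             # illustrative block sizes, d = r + s
m = rng.standard_normal((r + s, r + s + 2))
a = m @ m.T                             # random symmetric positive definite matrix

a11, a12 = a[:r, :r], a[:r, r:]
a21, a22 = a[r:, :r], a[r:, r:]

# Identity (1): det A = det(A_11 - A_12 A_22^{-1} A_21) det(A_22)
lhs = np.linalg.det(a)
rhs = np.linalg.det(a11 - a12 @ np.linalg.inv(a22) @ a21) * np.linalg.det(a22)
print(lhs, rhs)                         # the two values agree
```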
Consider a partitioning of W and Σ into blocks as

    W = ( W_11  W_12 )        Σ = ( Σ_11  Σ_12 )
        ( W_21  W_22 ),           ( Σ_21  Σ_22 ),

where Σ_11 is an r × r matrix, Σ_22 is s × s, etc. If W ~ W_d(f, Σ) and Σ_12 = Σ_21ᵀ = 0, then

    det(W) / {det(W_11) det(W_22)} ~ Λ(r, f − s, s) = Λ(s, f − r, r).
To see this is true, we first use the matrix identity (1) to write

    det(W) / {det(W_11) det(W_22)} = det(W_{1·2}) / det(W_11)
                                   = det(W_{1·2}) / det(W_{1·2} + W_12 W_22^{-1} W_21),

where W_{1·2} = W_11 − W_12 W_22^{-1} W_21. Next we need to use that if Σ_12 = 0, and thus Σ_{1·2} = Σ_11, it further holds that W_{1·2} and W_12 W_22^{-1} W_21 are independent and both Wishart distributed:

    W_{1·2} ~ W_r(f − s, Σ_11),    W_12 W_22^{-1} W_21 ~ W_r(s, Σ_11).

We abstain from giving further details.
Wilks' distribution occurs in the likelihood ratio test for independence. Consider X_1, ..., X_n ~ N_d(0, Σ). The likelihood function is

    L(K) = (det K)^{n/2} e^{−tr(KW)/2},

where K = Σ^{-1} and W = Σ_i X_i X_iᵀ. As this is maximized by K̂ = n W^{-1}, we have

    L(K̂) = (det W)^{−n/2} (n/e)^{nd/2}.

If Σ_12 = 0 we similarly have

    L(K̂_11, K̂_22) = (det W_11)^{−n/2} (n/e)^{nr/2} (det W_22)^{−n/2} (n/e)^{ns/2}.

Hence the likelihood ratio statistic is

    L(K̂_11, K̂_22) / L(K̂) = { det(W) / (det(W_11) det(W_22)) }^{n/2} = Λ^{n/2}.
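The identity L(K̂_11, K̂_22)/L(K̂) = Λ^{n/2} is purely algebraic, so it can be checked by evaluating the log-likelihood at the two maximizers for an arbitrary W (a sketch; dimensions and data are illustrative, with W formed from simulated observations):

```python
import numpy as np

rng = np.random.default_rng(3)
d, r, n = 5, 2, 40                      # illustrative: blocks of size r and d - r
x = rng.standard_normal((n, d))
w = x.T @ x                             # W = sum_i X_i X_i^T

def loglik(k):
    # log L(K) = (n/2) log det K - tr(K W)/2
    return 0.5 * n * np.linalg.slogdet(k)[1] - 0.5 * np.trace(k @ w)

k_hat = n * np.linalg.inv(w)            # unrestricted MLE
w11, w22 = w[:r, :r], w[r:, r:]
k_hat0 = np.zeros((d, d))               # MLE under Sigma_12 = 0: block diagonal
k_hat0[:r, :r] = n * np.linalg.inv(w11)
k_hat0[r:, r:] = n * np.linalg.inv(w22)

lam = np.linalg.det(w) / (np.linalg.det(w11) * np.linalg.det(w22))
log_lr = loglik(k_hat0) - loglik(k_hat)
print(log_lr, 0.5 * n * np.log(lam))    # log LR equals (n/2) log Lambda
```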
Hotelling's T² and relation to Wilks' Λ and Fisher's F

Let Y ~ N_d(µ, cΣ) and W ~ W_d(f, Σ) with f ≥ d, and Y independent of W. Then

    T² = f (Y − µ)ᵀ W^{-1} (Y − µ) / c

is known as Hotelling's T². This is the multivariate analogue of Student's t (or rather t²). It is equivalent to the likelihood ratio statistic for testing µ = 0 from a sample X_1, ..., X_n, where then Y = X̄, W = Σ_i (X_i − X̄)(X_i − X̄)ᵀ, f = n − 1, and c = 1/n. It holds that

    1/(1 + T²/f) ~ Λ(d, f, 1) = Λ(1, f − d + 1, d).
To see this we exploit the matrix identity (1) and calculate a determinant in two different ways. We may without loss of generality let µ = 0. We have

    det(  W        Y/√c ) = det(W + Y Yᵀ/c) · 1,
        ( −Yᵀ/√c   1    )

but we also have

    det(  W        Y/√c ) = det(1 + Yᵀ W^{-1} Y/c) det W
        ( −Yᵀ/√c   1    )
                          = (1 + Yᵀ W^{-1} Y/c) det W = (1 + T²/f) det W.
Hence

    1 + T²/f = 1 + Yᵀ W^{-1} Y/c = det(W + Y Yᵀ/c) / det W,

so that 1/(1 + T²/f) = det W / det(W + Y Yᵀ/c). The result now follows by noting that Y ~ N_d(0, cΣ) implies Y Yᵀ/c ~ W_d(1, Σ). Since Λ(d, f, 1) = Λ(1, f − d + 1, d) and the latter is a Beta distribution, it also holds that

    {(f − d + 1)/(f d)} T² ~ F(d, f − d + 1),

where F denotes Fisher's F-distribution.
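The F-distribution connection can be checked by Monte Carlo: simulate T² from samples of N_d(0, I), rescale by (f − d + 1)/(fd), and compare the resulting mean with the mean d_2/(d_2 − 2) of F(d_1, d_2) (a sketch; the dimension and sample size are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(4)
d, n, reps = 3, 12, 50_000              # illustrative dimension and sample size
f, c = n - 1, 1.0 / n

x = rng.standard_normal((reps, n, d))   # reps samples of size n from N_d(0, I)
y = x.mean(axis=1)                      # Y = sample mean of each sample
xc = x - y[:, None, :]
w = np.einsum('rni,rnj->rij', xc, xc)   # W = sum_i (X_i - Xbar)(X_i - Xbar)^T

# T^2 = f Y^T W^{-1} Y / c, then (f - d + 1) T^2 / (f d) ~ F(d, f - d + 1)
t2 = f * np.einsum('ri,rij,rj->r', y, np.linalg.inv(w), y) / c
fstat = (f - d + 1) / (f * d) * t2
print(fstat.mean(), (f - d + 1) / (f - d - 1))  # F(d1, d2) has mean d2/(d2 - 2)
```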
More information