Gaussian representation of a class of Riesz probability distributions
arXiv:763v [math.PR] 8 Dec 7

Gaussian representation of a class of Riesz probability distributions

A. Hassairi
Sfax University, Tunisia

Running title: Gaussian representation of a Riesz distribution.

Abstract: The Wishart probability distribution on symmetric matrices was initially defined by means of the multivariate Gaussian distribution, as a matrix extension of the chi-square distribution. A more general definition is obtained using results from harmonic analysis. Recently, a probability distribution on symmetric matrices called the Riesz distribution has been defined by its Laplace transform as a generalization of the Wishart distribution. The aim of the present paper is to show that some Riesz probability distributions which are not necessarily Wishart may also be represented by means of the Gaussian distribution, using Gaussian samples with missing data.

Keywords: Chi-square distribution, Gaussian distribution, Wishart probability distribution, Riesz probability distribution, Laplace transform.

AMS Classification: 6B, 6B5, 6B.

1. Introduction

Let V be the space of real symmetric r × r matrices, let Ω be the cone of positive definite elements of V, and let Ω̄ be the cone of positive semi-definite elements of V. We denote the identity matrix by e, the trace of an element x of V by tr x, and its determinant by Δ(x). We equip V with the scalar product ⟨θ, x⟩ = tr(θx). Denoting by L_µ(θ) = ∫_V e^{⟨θ,x⟩} µ(dx) the Laplace transform of a measure µ, it is well known (see Faraut and Korányi, 1994) that there exists a positive measure µ_p on Ω̄ such that the Laplace transform of µ_p exists for θ in −Ω and is equal to

∫_{Ω̄} exp(tr(θx)) µ_p(dx) = Δ(−θ)^{−p},   (1)

Corresponding author address: abdelhamid.hassairi@fss.rnu.tn
if and only if p is in the so-called Gindikin set

Λ = { 0, 1/2, 1, …, (r−1)/2 } ∪ ] (r−1)/2, +∞ [.

When p > (r−1)/2, the measure µ_p is absolutely continuous with respect to the Lebesgue measure and is given by

µ_p(dx) = (1/Γ_Ω(p)) Δ(x)^{p−(r+1)/2} 1_Ω(x) dx,   (2)

where

Γ_Ω(p) = (2π)^{(n−r)/2} ∏_{k=1}^{r} Γ(p − (k−1)/2),

and n = r(r+1)/2 is the dimension of V. When p = j/2 with j an integer, 1 ≤ j ≤ r−1, the measure µ_p is singular, concentrated on the set of elements of rank j of Ω̄.

For p ∈ Λ and σ ∈ Ω, (1) implies that the measure W_{p,σ} on Ω̄ defined by

W_{p,σ}(dx) = Δ(σ)^p exp(−tr(xσ)) µ_p(dx)   (3)

is a probability distribution. It is called the Wishart distribution with shape parameter p and scale parameter σ. Its Laplace transform exists for θ such that σ − θ ∈ Ω and is equal to

∫_{Ω̄} exp(tr(θx)) W_{p,σ}(dx) = Δ(e − σ^{−1}θ)^{−p}.   (4)

When p > (r−1)/2, the Wishart distribution W_{p,σ} is given by

W_{p,σ}(dx) = (Δ(σ)^p / Γ_Ω(p)) exp(−tr(xσ)) Δ(x)^{p−(r+1)/2} 1_Ω(x) dx.   (5)

Further details on the Wishart distribution on Ω̄ may be obtained from Casalis and Letac (1996) or from Letac and Massam (1998). It is however important to mention that the Wishart probability distribution was initially defined as a multivariate extension of the chi-square distribution: taking U_1, …, U_p random vectors in IR^r with distribution N(0, σ), the distribution of the matrix X = ∑_{i=1}^{p} U_i ᵗU_i is a Wishart matrix.

This kind of Wishart probability distribution may be defined using Gaussian random matrices. Let u = ᵗ(u_1, …, u_r) be a random vector with distribution N_r(0, σ). Suppose that we have a random sample of u, that is, a sequence of independent random variables identically distributed as u, and consider the r × s Gaussian matrix

U = (u_{i,j})_{1≤i≤r, 1≤j≤s},

whose j-th column is the j-th observation of u.
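As a quick numerical aid, the normalizing constant Γ_Ω(p) can be computed directly from the product formula above. The following is a minimal sketch in Python (the function name `gamma_omega` is ours, not from the paper); for r = 1 it reduces to the ordinary gamma function:

```python
import math

def gamma_omega(p, r):
    """Gamma function of the cone of r x r positive definite matrices:
    Gamma_Omega(p) = (2*pi)**((n - r)/2) * prod_{k=1..r} Gamma(p - (k-1)/2),
    where n = r(r+1)/2 is the dimension of the space of symmetric matrices.
    Requires p > (r - 1)/2 so that every Gamma argument is positive."""
    n = r * (r + 1) // 2
    prod = 1.0
    for k in range(1, r + 1):
        prod *= math.gamma(p - (k - 1) / 2)
    return (2 * math.pi) ** ((n - r) / 2) * prod

# For r = 1 this is the ordinary Gamma function, e.g. gamma_omega(4, 1) = 3! = 6,
# while for r = 2 one gets (2*pi)**(1/2) * Gamma(p) * Gamma(p - 1/2).
```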
The distribution of U is N_{r,s}(0, σ), with density equal to

(1 / (π^{rs/2} Δ(σ)^{s/2})) e^{−⟨u, σ^{−1}u⟩},

where ⟨u, σ^{−1}u⟩ = tr(σ^{−1} u ᵗu). We have

Proposition 1. If U ~ N_{r,s}(0, σ), then the matrix X = U ᵗU has the Wishart probability distribution W(s/2, σ^{−1}).

Proof. The Laplace transform of X evaluated at θ such that σ^{−1} − θ ∈ Ω is given by

E(e^{⟨θ,X⟩}) = (1 / (π^{rs/2} Δ(σ)^{s/2})) ∫_{IR^{rs}} e^{⟨θ, u ᵗu⟩ − ⟨u, σ^{−1}u⟩} du
            = (1 / (π^{rs/2} Δ(σ)^{s/2})) ∫_{IR^{rs}} e^{−⟨u, (σ^{−1} − θ)u⟩} du
            = Δ(σ)^{−s/2} Δ(σ^{−1} − θ)^{−s/2}
            = Δ(e − σθ)^{−s/2}.

It is then the Laplace transform of a W(s/2, σ^{−1}) distribution. □

This distribution is absolutely continuous with respect to the Lebesgue measure if and only if s/2 > (r−1)/2, that is, if and only if the size s of the sample is greater than or equal to the dimension r of the space of observations. If we consider p independent random matrices U_1, …, U_p with distribution N_{r,s}(0, σ), then the distribution of the matrix X = ∑_{i=1}^{p} U_i ᵗU_i is W(ps/2, σ^{−1}).

Of course, the class of Wishart distributions defined by means of Gaussian vectors or Gaussian matrices does not cover all the Wishart distributions; however, the fact that such a random matrix is expressed in terms of Gaussian random vectors allows one to establish many important properties of this subclass of the class of Wishart probability distributions.

Using a theorem due to Gindikin which relies on the notion of generalized power, Hassairi and Lajmi [4] have introduced an important generalization of the Wishart distribution that they have called the Riesz distribution. This distribution, in its general form, is defined by its Laplace transform. We will show that some among the Riesz distributions may be represented by the Gaussian distribution using samples of Gaussian vectors with missing data.

2. Riesz distributions

For 1 ≤ i, j ≤ r, we define the matrix µ_{ij} = (γ_{kl})_{1≤k,l≤r} such that γ_{ij} = 1 and all the other entries are equal to 0. We also set c_i = µ_{ii} for 1 ≤ i ≤ r, and e_{ij} = µ_{ij} + µ_{ji} for 1 ≤ i < j ≤ r.
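Proposition 1 can be checked empirically. The sketch below (hypothetical, using NumPy) takes standard-normal entries, i.e. covariance I rather than the paper's normalization in which the entries have variance 1/2; with this scaling X = U ᵗU is the classical Wishart matrix with s degrees of freedom, and we verify the elementary moment identity E[X] = s·I_r by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(0)
r, s, n_rep = 3, 5, 20000

# Average X = U U^T over n_rep independent r x s standard Gaussian matrices.
mean_X = np.zeros((r, r))
for _ in range(n_rep):
    U = rng.standard_normal((r, s))
    mean_X += U @ U.T
mean_X /= n_rep

# Each X is symmetric positive semi-definite, and E[X] = s * I_r,
# so mean_X should be close to 5 * I_3 here.
print(np.round(mean_X, 2))
```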
With these notations, an element x of V may be written

x = ∑_{i=1}^{r} x_i c_i + ∑_{i<j} x_{ij} e_{ij}.

In particular, for 1 ≤ k ≤ r, we set e_k = c_1 + ⋯ + c_k. Now consider the map P_k from V into V defined by

x = ∑_{i=1}^{r} x_i c_i + ∑_{i<j} x_{ij} e_{ij} ⟼ P_k(x) = ∑_{i=1}^{k} x_i c_i + ∑_{i<j≤k} x_{ij} e_{ij}.

The determinant of the k × k block P_k(x) is denoted by Δ_k(x); it is the principal minor of x of order k. The generalized power of an element x of Ω is then defined, for s = (s_1, …, s_r) ∈ IR^r, by

Δ_s(x) = Δ_1(x)^{s_1−s_2} Δ_2(x)^{s_2−s_3} ⋯ Δ_r(x)^{s_r}.

Note that Δ_s(x) = Δ(x)^p if s = (p, …, p) with p ∈ IR. It is also easy to see that Δ_{s+s'}(x) = Δ_s(x) Δ_{s'}(x). In particular, if m ∈ IR and s + m = (s_1 + m, …, s_r + m), we have Δ_{s+m}(x) = Δ_s(x) Δ(x)^m.

Now we introduce the subset Ξ of IR^r defined as follows. For a real u ≥ 0, we put ε(u) = 0 if u = 0 and ε(u) = 1 if u > 0. For u_1, u_2, …, u_r ≥ 0, we define s_1 = u_1 and

s_k = u_k + (1/2)(ε(u_1) + ⋯ + ε(u_{k−1})), for 2 ≤ k ≤ r.

We have the following result due to Gindikin [3]; it is also proved in the monograph by Faraut and Korányi [2].

Theorem. There exists a positive measure R_s on V such that the Laplace transform L_{R_s} is defined on −Ω and is equal to θ ⟼ Δ_s((−θ)^{−1}) if and only if s is in Ξ.

The measures R_s defined in the previous theorem in terms of their Laplace transforms are divided into two classes according to the position of s in Ξ. In the first class, the measures are absolutely continuous with respect to the Lebesgue measure on Ω. More precisely, we have (see [4]) that when s = (s_1, …, s_r) in Ξ is such that s_i > (i−1)/2 for all i,

R_s(dx) = (1/Γ_Ω(s)) Δ_{s−(r+1)/2}(x) 1_Ω(x) dx,

where

Γ_Ω(s) = (2π)^{(n−r)/2} ∏_{j=1}^{r} Γ(s_j − (j−1)/2).

The second class corresponds to s in Ξ \ ∏_{i=1}^{r} ](i−1)/2, +∞[. In this case the Riesz measure R_s is concentrated on the boundary ∂Ω of Ω̄; it has a complicated form which is explicitly described in [5]. For s = (s_1, …, s_r) in Ξ and σ in Ω, we define the Riesz distribution R(s, σ) by

R(s, σ)(dx) = (e^{−⟨σ,x⟩} / Δ_s(σ^{−1})) R_s(dx).
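The generalized power is straightforward to compute from leading principal minors. A small illustrative implementation (the name `gen_power` is ours, not from the paper), which also lets one check the identity Δ_{s+s'}(x) = Δ_s(x) Δ_{s'}(x) numerically:

```python
import numpy as np

def gen_power(x, s):
    """Generalized power Delta_s(x) = prod_{k=1..r} Delta_k(x)**(s_k - s_{k+1}),
    with the convention s_{r+1} = 0, where Delta_k(x) is the leading
    principal minor of order k.  x should be positive definite so that
    every minor is positive."""
    r = x.shape[0]
    minors = [np.linalg.det(x[:k, :k]) for k in range(1, r + 1)]
    val = 1.0
    for k in range(r):
        s_next = s[k + 1] if k + 1 < r else 0.0
        val *= minors[k] ** (s[k] - s_next)
    return val

x = np.array([[2.0, 1.0], [1.0, 3.0]])   # det(x) = 5
# For s = (p, p) the generalized power collapses to det(x)**p:
print(gen_power(x, (2.0, 2.0)))          # ~25 = det(x)**2, up to rounding
```

Multiplicativity in s holds because the minors multiply exponent-wise: for instance Δ_{(3,1)}(x) Δ_{(1,2)}(x) = Δ_{(4,3)}(x).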
The Laplace transform of the Riesz distribution is defined for θ such that σ − θ ∈ Ω by

L_{R(s,σ)}(θ) = Δ_s((σ − θ)^{−1}) / Δ_s(σ^{−1}).   (6)

If s = (p, …, p) with p in Λ = { 0, 1/2, …, (r−1)/2 } ∪ ] (r−1)/2, +∞ [, then R(s, σ) is nothing but the Wishart distribution with shape parameter p and scale parameter σ.

The definition of the absolutely continuous Riesz distribution on the cone Ω of positive definite symmetric matrices relies on the notion of generalized power. It is given, for σ in Ω and s = (s_1, …, s_r) ∈ IR^r such that s_i > (i−1)/2 for all i, by

R(s, σ)(dx) = (1 / (Γ_Ω(s) Δ_s(σ^{−1}))) e^{−⟨σ,x⟩} Δ_{s−(r+1)/2}(x) 1_Ω(x) dx.

3. Main result

We now show that some of the Riesz distributions have a representation in terms of the Gaussian distribution, generalizing what occurs in the case of the Wishart distribution. For simplicity, we take σ equal to the identity matrix.

Suppose that we have s_r independent observations of the vector (u_1, …, u_r) with Gaussian distribution N_r(0, I_r), and that some data in the observations are missing. The sample is then partitioned according to the number of available components in the observations. More precisely, we suppose that there exist integers 0 < s_1 ≤ s_2 ≤ ⋯ ≤ s_r such that we have s_1 observations of (u_1, …, u_r) where all the components of the observation are available, s_2 − s_1 observations of (u_2, …, u_r) where the first component of the observation is missing, and, in general, s_{i+1} − s_i observations of (u_{i+1}, …, u_r) where the first i components of the observation are missing, 1 ≤ i ≤ r − 1. The number s_i is in particular equal to the number of times the component u_i appears in the data.

Define

p_1 = sup{ i ; s_i = s_1 },  p_2 = sup{ i > p_1 ; s_i = s_{p_1+1} },  …,  p_{l+1} = sup{ i > p_l ; s_i = s_{p_l+1} },  …,  p_k + 1 = inf{ i ; s_i = s_r },

so that we have

s_1 = ⋯ = s_{p_1} < s_{p_1+1},
s_{p_1+1} = ⋯ = s_{p_2} < s_{p_2+1},
…,
s_{p_k} < s_{p_k+1},
s_{p_k+1} = ⋯ = s_r. We have that p_{k+1} = r.

Replacing the missing components by the theoretical mean, that is, by zero, we obtain the r × s_r matrix U = (u_{i,j})_{1≤i≤r, 1≤j≤s_r}, in which row i consists of the s_i available observations of the component u_i followed by s_r − s_i zeros; rows p_l + 1, …, p_{l+1} share the same pattern of zeros. This matrix U consists of blocks defined recursively in the following way:

U^{(1)} = (u_{i,j})_{1≤i≤p_1, 1≤j≤s_{p_1}},

and, for 1 ≤ l ≤ k,

U^{(l+1)}_{21} = (u_{i,j}), p_l + 1 ≤ i ≤ p_{l+1}, 1 ≤ j ≤ s_{p_l},
U^{(l+1)}_{22} = (u_{i,j}), p_l + 1 ≤ i ≤ p_{l+1}, s_{p_l} + 1 ≤ j ≤ s_{p_{l+1}},   (3.7)

with

U^{(l+1)} = ( U^{(l)}          0              )
            ( U^{(l+1)}_{21}   U^{(l+1)}_{22} ) ,   1 ≤ l ≤ k.   (3.8)

We have that U^{(k+1)} = U.

Theorem 3. The distribution of X = U ᵗU is R(s̃, I_r) with s̃ = (s_1/2, …, s_r/2).

Proof. We use the following Peirce decomposition of an r × r matrix ω. For 1 ≤ l ≤ k, we set ω^{(l)} = (ω_{i,j})_{1≤i,j≤p_l}, and we write ω^{(l+1)} as

ω^{(l+1)} = ( ω^{(l)}          ᵗω^{(l+1)}_{21} )
            ( ω^{(l+1)}_{21}   ω^{(l+1)}_{22}  ) .   (3.9)

From Proposition 1, we have that U^{(1)} ᵗU^{(1)} has the W(s_{p_1}/2, I_{p_1}) distribution. On the other hand, since s_1 = ⋯ = s_{p_1}, we have that

W(s_{p_1}/2, I_{p_1}) = R((s_1/2, …, s_{p_1}/2), I_{p_1}).
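The construction of U from the staircase pattern of sample sizes is easy to simulate. In the sketch below (our own notation; standard-normal entries of variance 1, whereas the paper's normalization gives variance 1/2, so means here are s_i rather than s_i/2), row i of U keeps its s_i observed values and is padded with zeros; the diagonal entry X_ii of X = U ᵗU is then a sum of s_i squared Gaussians, so E[X_ii] = s_i:

```python
import numpy as np

rng = np.random.default_rng(0)
s = [2, 3, 5]              # s_1 <= s_2 <= ... <= s_r; s_r is the total sample size
r, s_r = len(s), s[-1]

def sample_X():
    """Draw U with the missing entries replaced by zero, return X = U U^T."""
    U = rng.standard_normal((r, s_r))
    for i in range(r):
        U[i, s[i]:] = 0.0  # component u_i is missing in the last s_r - s_i observations
    return U @ U.T

# Monte Carlo average of the diagonal of X: should approach (s_1, ..., s_r).
n_rep = 20000
mean_diag = np.zeros(r)
for _ in range(n_rep):
    mean_diag += np.diag(sample_X())
mean_diag /= n_rep
print(np.round(mean_diag, 2))   # close to (2, 3, 5)
```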
It suffices to show that if U^{(l)} ᵗU^{(l)} is R((s_1/2, …, s_{p_l}/2), I_{p_l}), then U^{(l+1)} ᵗU^{(l+1)} is R((s_1/2, …, s_{p_{l+1}}/2), I_{p_{l+1}}). Again we use the Laplace transform. For θ^{(l+1)} such that I_{p_{l+1}} − θ^{(l+1)} ∈ Ω, and decomposing θ^{(l+1)} into blocks as in (3.9), we have

L_{U^{(l+1)} ᵗU^{(l+1)}}(θ^{(l+1)}) = E exp⟨θ^{(l+1)}, U^{(l+1)} ᵗU^{(l+1)}⟩
= E exp tr( θ^{(l+1)}_{22} U^{(l+1)}_{22} ᵗU^{(l+1)}_{22} ) × E exp tr( θ^{(l)} U^{(l)} ᵗU^{(l)} + ᵗθ^{(l+1)}_{21} U^{(l+1)}_{21} ᵗU^{(l)} + U^{(l)} ᵗU^{(l+1)}_{21} θ^{(l+1)}_{21} + θ^{(l+1)}_{22} U^{(l+1)}_{21} ᵗU^{(l+1)}_{21} ),

the two expectations separating because U^{(l+1)}_{22} is independent of (U^{(l)}, U^{(l+1)}_{21}).

We have that U^{(l+1)}_{22} is a (p_{l+1} − p_l) × (s_{p_{l+1}} − s_{p_l}) Gaussian matrix with distribution N(0, I_{p_{l+1}−p_l}). According to Proposition 1, the matrix U^{(l+1)}_{22} ᵗU^{(l+1)}_{22} has the Wishart distribution W((s_{p_{l+1}} − s_{p_l})/2, I_{p_{l+1}−p_l}). Hence

E exp tr( θ^{(l+1)}_{22} U^{(l+1)}_{22} ᵗU^{(l+1)}_{22} ) = Δ( I_{p_{l+1}−p_l} − θ^{(l+1)}_{22} )^{−(s_{p_{l+1}} − s_{p_l})/2}.

Concerning the second expectation, we have that

tr( ᵗθ^{(l+1)}_{21} U^{(l+1)}_{21} ᵗU^{(l)} + U^{(l)} ᵗU^{(l+1)}_{21} θ^{(l+1)}_{21} + θ^{(l+1)}_{22} U^{(l+1)}_{21} ᵗU^{(l+1)}_{21} ) = 2⟨U^{(l+1)}_{21}, θ^{(l+1)}_{21} U^{(l)}⟩ + ⟨U^{(l+1)}_{21}, θ^{(l+1)}_{22} U^{(l+1)}_{21}⟩.

As U^{(l+1)}_{21} is a (p_{l+1} − p_l) × s_{p_l} matrix with distribution N(0, I_{p_{l+1}−p_l}), independent of U^{(l)}, we obtain, conditionally on U^{(l)},

E exp( 2⟨U^{(l+1)}_{21}, θ^{(l+1)}_{21} U^{(l)}⟩ + ⟨U^{(l+1)}_{21}, θ^{(l+1)}_{22} U^{(l+1)}_{21}⟩ )
= Δ( I_{p_{l+1}−p_l} − θ^{(l+1)}_{22} )^{−s_{p_l}/2} exp⟨ ( I_{p_{l+1}−p_l} − θ^{(l+1)}_{22} )^{−1} θ^{(l+1)}_{21} U^{(l)}, θ^{(l+1)}_{21} U^{(l)} ⟩.

Now, multiplying this by exp tr( θ^{(l)} U^{(l)} ᵗU^{(l)} ), we then need to calculate the expectation

E exp⟨ θ^{(l)} + ᵗθ^{(l+1)}_{21} ( I_{p_{l+1}−p_l} − θ^{(l+1)}_{22} )^{−1} θ^{(l+1)}_{21}, U^{(l)} ᵗU^{(l)} ⟩.

As, from the hypothesis, U^{(l)} ᵗU^{(l)} is R((s_1/2, …, s_{p_l}/2), I_{p_l}), using (6) this is equal to

Δ_{(s_1/2,…,s_{p_l}/2)}( ( I_{p_l} − θ^{(l)} − ᵗθ^{(l+1)}_{21} ( I_{p_{l+1}−p_l} − θ^{(l+1)}_{22} )^{−1} θ^{(l+1)}_{21} )^{−1} ),

the normalizing factor Δ_{(s_1/2,…,s_{p_l}/2)}(I_{p_l}^{−1}) in (6) being equal to 1.
Finally, we obtain that

L_{U^{(l+1)} ᵗU^{(l+1)}}(θ^{(l+1)}) = Δ( I_{p_{l+1}−p_l} − θ^{(l+1)}_{22} )^{−(s_{p_{l+1}} − s_{p_l})/2} Δ( I_{p_{l+1}−p_l} − θ^{(l+1)}_{22} )^{−s_{p_l}/2} Δ_{(s_1/2,…,s_{p_l}/2)}( ( I_{p_l} − θ^{(l)} − ᵗθ^{(l+1)}_{21} ( I_{p_{l+1}−p_l} − θ^{(l+1)}_{22} )^{−1} θ^{(l+1)}_{21} )^{−1} ),

that is

L_{U^{(l+1)} ᵗU^{(l+1)}}(θ^{(l+1)}) = Δ( I_{p_{l+1}−p_l} − θ^{(l+1)}_{22} )^{−s_{p_{l+1}}/2} Δ_{(s_1/2,…,s_{p_l}/2)}( ( I_{p_l} − θ^{(l)} − ᵗθ^{(l+1)}_{21} ( I_{p_{l+1}−p_l} − θ^{(l+1)}_{22} )^{−1} θ^{(l+1)}_{21} )^{−1} ).

But s_{p_l+1} = ⋯ = s_{p_{l+1}}, so that

Δ( I_{p_{l+1}−p_l} − θ^{(l+1)}_{22} )^{−s_{p_{l+1}}/2} = Δ_{(s_{p_l+1}/2,…,s_{p_{l+1}}/2)}( ( I_{p_{l+1}−p_l} − θ^{(l+1)}_{22} )^{−1} ).

Write y = I_{p_{l+1}} − θ^{(l+1)}, with blocks y^{(l)} = I_{p_l} − θ^{(l)}, y^{(l+1)}_{21} = −θ^{(l+1)}_{21} and y^{(l+1)}_{22} = I_{p_{l+1}−p_l} − θ^{(l+1)}_{22}. The matrix I_{p_l} − θ^{(l)} − ᵗθ^{(l+1)}_{21}( I_{p_{l+1}−p_l} − θ^{(l+1)}_{22} )^{−1} θ^{(l+1)}_{21} is then the Schur complement y^{(l)} − ᵗy^{(l+1)}_{21} (y^{(l+1)}_{22})^{−1} y^{(l+1)}_{21}, whose inverse is the leading p_l × p_l block of y^{−1}. As we have

Δ_{(s_1/2,…,s_{p_l}/2)}( ( y^{(l)} − ᵗy^{(l+1)}_{21} (y^{(l+1)}_{22})^{−1} y^{(l+1)}_{21} )^{−1} ) Δ_{(s_{p_l+1}/2,…,s_{p_{l+1}}/2)}( (y^{(l+1)}_{22})^{−1} ) = Δ_{(s_1/2,…,s_{p_{l+1}}/2)}( y^{−1} ),

it follows that

L_{U^{(l+1)} ᵗU^{(l+1)}}(θ^{(l+1)}) = Δ_{(s_1/2,…,s_{p_{l+1}}/2)}( ( I_{p_{l+1}} − θ^{(l+1)} )^{−1} ).

Therefore, by (6), U^{(l+1)} ᵗU^{(l+1)} is a Riesz random matrix with distribution R((s_1/2, …, s_{p_{l+1}}/2), I_{p_{l+1}}).
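For diagonal θ = diag(t_1, …, t_r), the Laplace transform in the theorem factorizes by independence of the entries of U: ⟨θ, X⟩ = ∑_i t_i ∑_{j≤s_i} u_{i,j}², so in the paper's normalization L(θ) = ∏_i (1 − t_i)^{−s_i/2}, which must coincide with Δ_{s̃}((I_r − θ)^{−1}). A deterministic numerical check of this identity (the helper name `gen_power` is ours):

```python
import numpy as np

def gen_power(x, s):
    """Delta_s(x) = prod_{k=1..r} Delta_k(x)**(s_k - s_{k+1}) with s_{r+1} = 0,
    where Delta_k(x) is the leading principal minor of order k."""
    r = x.shape[0]
    minors = [np.linalg.det(x[:k, :k]) for k in range(1, r + 1)]
    return np.prod([minors[k] ** (s[k] - (s[k + 1] if k + 1 < r else 0.0))
                    for k in range(r)])

s = np.array([2.0, 3.0, 5.0])   # staircase sample sizes s_1 <= s_2 <= s_3
t = np.array([0.1, 0.2, 0.3])   # diagonal entries of theta, each < 1

# Product form of the Laplace transform for diagonal theta ...
lhs = np.prod((1.0 - t) ** (-s / 2))
# ... versus the generalized power Delta_{s/2}((I - theta)^{-1}):
rhs = gen_power(np.linalg.inv(np.eye(3) - np.diag(t)), s / 2)
print(lhs, rhs)   # the two values agree
```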
Given that X = U^{(k+1)} ᵗU^{(k+1)}, the result follows. □

References

[1] Casalis, M. and Letac, G., The Lukacs-Olkin-Rubin characterization of the Wishart distributions on the symmetric cones. Ann. Statist. 24, 1996.
[2] Faraut, J. and Korányi, A., Analysis on Symmetric Cones. Oxford Univ. Press, 1994.
[3] Gindikin, S. G., Analysis in homogeneous domains. Russian Math. Surveys 19, 1-89, 1964.
[4] Hassairi, A. and Lajmi, S., Riesz exponential families on symmetric cones. J. Theoret. Probab. 14, 2001.
[5] Hassairi, A. and Lajmi, S., Singular Riesz measures on symmetric cones. Adv. Oper. Theory 3, no. 1, 2018.
[6] Letac, G. and Massam, H., Quadratic and inverse regressions for Wishart distributions. Ann. Statist. 26, 573-595, 1998.
More informationMATH 320, WEEK 7: Matrices, Matrix Operations
MATH 320, WEEK 7: Matrices, Matrix Operations 1 Matrices We have introduced ourselves to the notion of the grid-like coefficient matrix as a short-hand coefficient place-keeper for performing Gaussian
More informationKernel Method: Data Analysis with Positive Definite Kernels
Kernel Method: Data Analysis with Positive Definite Kernels 2. Positive Definite Kernel and Reproducing Kernel Hilbert Space Kenji Fukumizu The Institute of Statistical Mathematics. Graduate University
More informationLecture Note 1: Probability Theory and Statistics
Univ. of Michigan - NAME 568/EECS 568/ROB 530 Winter 2018 Lecture Note 1: Probability Theory and Statistics Lecturer: Maani Ghaffari Jadidi Date: April 6, 2018 For this and all future notes, if you would
More informationREGRESSION TREE CREDIBILITY MODEL
LIQUN DIAO AND CHENGGUO WENG Department of Statistics and Actuarial Science, University of Waterloo Advances in Predictive Analytics Conference, Waterloo, Ontario Dec 1, 2017 Overview Statistical }{{ Method
More informationNext tool is Partial ACF; mathematical tools first. The Multivariate Normal Distribution. e z2 /2. f Z (z) = 1 2π. e z2 i /2
Next tool is Partial ACF; mathematical tools first. The Multivariate Normal Distribution Defn: Z R 1 N(0,1) iff f Z (z) = 1 2π e z2 /2 Defn: Z R p MV N p (0, I) if and only if Z = (Z 1,..., Z p ) (a column
More informationStatistical Methods for Handling Incomplete Data Chapter 2: Likelihood-based approach
Statistical Methods for Handling Incomplete Data Chapter 2: Likelihood-based approach Jae-Kwang Kim Department of Statistics, Iowa State University Outline 1 Introduction 2 Observed likelihood 3 Mean Score
More information1 Matrices and Systems of Linear Equations
March 3, 203 6-6. Systems of Linear Equations Matrices and Systems of Linear Equations An m n matrix is an array A = a ij of the form a a n a 2 a 2n... a m a mn where each a ij is a real or complex number.
More informationAppendix A: Matrices
Appendix A: Matrices A matrix is a rectangular array of numbers Such arrays have rows and columns The numbers of rows and columns are referred to as the dimensions of a matrix A matrix with, say, 5 rows
More informationA Concise Course on Stochastic Partial Differential Equations
A Concise Course on Stochastic Partial Differential Equations Michael Röckner Reference: C. Prevot, M. Röckner: Springer LN in Math. 1905, Berlin (2007) And see the references therein for the original
More informationBare minimum on matrix algebra. Psychology 588: Covariance structure and factor models
Bare minimum on matrix algebra Psychology 588: Covariance structure and factor models Matrix multiplication 2 Consider three notations for linear combinations y11 y1 m x11 x 1p b11 b 1m y y x x b b n1
More informationClass Meeting # 1: Introduction to PDEs
MATH 18.152 COURSE NOTES - CLASS MEETING # 1 18.152 Introduction to PDEs, Spring 2017 Professor: Jared Speck Class Meeting # 1: Introduction to PDEs 1. What is a PDE? We will be studying functions u =
More informationMIT Algebraic techniques and semidefinite optimization February 14, Lecture 3
MI 6.97 Algebraic techniques and semidefinite optimization February 4, 6 Lecture 3 Lecturer: Pablo A. Parrilo Scribe: Pablo A. Parrilo In this lecture, we will discuss one of the most important applications
More informationSAMPLE OF THE STUDY MATERIAL PART OF CHAPTER 1 Introduction to Linear Algebra
1.1. Introduction SAMPLE OF THE STUDY MATERIAL PART OF CHAPTER 1 Introduction to Linear algebra is a specific branch of mathematics dealing with the study of vectors, vector spaces with functions that
More information1 Matrices and Systems of Linear Equations. a 1n a 2n
March 31, 2013 16-1 16. Systems of Linear Equations 1 Matrices and Systems of Linear Equations An m n matrix is an array A = (a ij ) of the form a 11 a 21 a m1 a 1n a 2n... a mn where each a ij is a real
More informationLecture 20: Linear model, the LSE, and UMVUE
Lecture 20: Linear model, the LSE, and UMVUE Linear Models One of the most useful statistical models is X i = β τ Z i + ε i, i = 1,...,n, where X i is the ith observation and is often called the ith response;
More informationLECTURE 5: THE METHOD OF STATIONARY PHASE
LECTURE 5: THE METHOD OF STATIONARY PHASE Some notions.. A crash course on Fourier transform For j =,, n, j = x j. D j = i j. For any multi-index α = (α,, α n ) N n. α = α + + α n. α! = α! α n!. x α =
More informationLecture 8: Linear Algebra Background
CSE 521: Design and Analysis of Algorithms I Winter 2017 Lecture 8: Linear Algebra Background Lecturer: Shayan Oveis Gharan 2/1/2017 Scribe: Swati Padmanabhan Disclaimer: These notes have not been subjected
More informationCHARACTERIZATIONS OF PSEUDODIFFERENTIAL OPERATORS ON THE CIRCLE
PROCEEDINGS OF THE AMERICAN MATHEMATICAL SOCIETY Volume 125, Number 5, May 1997, Pages 1407 1412 S 0002-9939(97)04016-1 CHARACTERIZATIONS OF PSEUDODIFFERENTIAL OPERATORS ON THE CIRCLE SEVERINO T. MELO
More informationGroup Representations
Group Representations Alex Alemi November 5, 2012 Group Theory You ve been using it this whole time. Things I hope to cover And Introduction to Groups Representation theory Crystallagraphic Groups Continuous
More informationMATH 126 FINAL EXAM. Name:
MATH 126 FINAL EXAM Name: Exam policies: Closed book, closed notes, no external resources, individual work. Please write your name on the exam and on each page you detach. Unless stated otherwise, you
More informationTEST FOR INDEPENDENCE OF THE VARIABLES WITH MISSING ELEMENTS IN ONE AND THE SAME COLUMN OF THE EMPIRICAL CORRELATION MATRIX.
Serdica Math J 34 (008, 509 530 TEST FOR INDEPENDENCE OF THE VARIABLES WITH MISSING ELEMENTS IN ONE AND THE SAME COLUMN OF THE EMPIRICAL CORRELATION MATRIX Evelina Veleva Communicated by N Yanev Abstract
More informationEquivalence constants for certain matrix norms II
Linear Algebra and its Applications 420 (2007) 388 399 www.elsevier.com/locate/laa Equivalence constants for certain matrix norms II Bao Qi Feng a,, Andrew Tonge b a Department of Mathematical Sciences,
More information