Stat 700 HW2 Solutions, 9/25/09
(1). By the spectral theorem, $B = \sum_{j=1}^k \lambda_j v_j v_j'$, where the $v_j$ form an orthonormal basis of eigenvectors of $B$ with corresponding eigenvalues $\lambda_j$. Now, since
$$\lambda_j v_j = B v_j = B^2 v_j = B(\lambda_j v_j) = \lambda_j^2 v_j$$
and $\lambda_j \ge 0$, it follows that $\lambda_j = 0$ or $1$ for all $j = 1, \ldots, k$. Moreover, $r = \operatorname{rank}(B)$ is the number of nonzero eigenvalues $\lambda_j$, and after renumbering if necessary, there is no loss of generality in assuming $\lambda_1 = \cdots = \lambda_r = 1$, while $\lambda_{r+1} = \cdots = \lambda_k = 0$. Now let $Z$ be the $k$-vector defined by $Z_j = v_j' W$ for $j = 1, \ldots, k$, and observe that $Z$ is a linear transformation of $W$ and therefore multivariate normal; but $\operatorname{var}(Z_j) = v_j' B v_j = 0$ for $j > r$, and $\operatorname{cov}(Z_i, Z_j) = v_i' B v_j = \delta_{ij}$ for $i, j = 1, \ldots, r$. Thus
$$W'W = \sum_{j=1}^k W' v_j v_j' W = \sum_{j=1}^r W' v_j v_j' W = \sum_{j=1}^r Z_j^2 \sim \chi^2_r,$$
since we have seen that $Z_j$, $1 \le j \le r$, are iid $N(0,1)$.
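As a quick numerical sanity check of (1) (not part of the assigned solution), the following R sketch builds a symmetric idempotent $B$ of rank $r$ and verifies $W'W \sim \chi^2_r$ by simulation; the dimension k, rank r, and simulation size are arbitrary choices of mine:

set.seed(1)
k = 5; r = 2; nsim = 20000
V = qr.Q(qr(matrix(rnorm(k*r), k, r)))    ### k x r matrix with orthonormal columns v_1,...,v_r
### W = V Z with Z ~ N(0, I_r) has covariance V V' = B, symmetric idempotent of rank r
Z = matrix(rnorm(r*nsim), r, nsim)
W = V %*% Z                               ### each column is one draw of W ~ N(0, B)
ks.test(colSums(W^2), "pchisq", df = r)   ### W'W should be chi^2_r: expect a large p-value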
(2). As in class, $Z \equiv A^{-1/2} Y \sim N(0, I_{n \times n})$, so that $R^2 = Z'Z \sim \chi^2_n$, and by univariate change of variable,
$$f_R(r) = \frac{2^{-(n-2)/2}\, r^{n-1}}{\Gamma(n/2)}\, e^{-r^2/2}, \qquad r > 0.$$
Now it is easy to see that $Z/R = Z/(Z'Z)^{1/2}$ is independent of $R$ and uniformly distributed on the surface of the unit $n$-sphere $\{z \in \mathbb{R}^n : z'z = 1\}$, since for any orthogonal matrix $U$, $(Z/R,\, R)$ has the same distribution as $(UZ/(Z'U'UZ)^{1/2},\, (Z'U'UZ)^{1/2}) = (UZ/R,\, R)$. This argument can be used to give a rigorous proof both of the independence and of the uniformity of $Z/R$ on the surface of the unit sphere, because the orthogonal matrices act transitively on the surface of the unit sphere, i.e., one can be found to carry any unit vector $v$ into any other fixed unit vector $w$. For a complete proof in this problem, we need to find an $(n-1)$-dimensional coordinate system for the unit sphere, so that there will be a density, so that we can use the Jacobian change-of-variable formula, and also give a density for the image $A^{1/2} Z/R = Y/R$ of a uniform random unit vector in $\mathbb{R}^n$ under the linear operator $A^{1/2}$. Define
$$\omega_j = \arccos\!\Big(\frac{z_j}{\sqrt{z_j^2 + \cdots + z_n^2}}\Big), \qquad 1 \le j \le n-2,$$
$$\omega_{n-1} = \begin{cases} \arccos\big(z_{n-1}/\sqrt{z_{n-1}^2 + z_n^2}\big) & \text{if } z_n > 0, \\ 2\pi - \arccos\big(z_{n-1}/\sqrt{z_{n-1}^2 + z_n^2}\big) & \text{if } z_n \le 0. \end{cases}$$
Then the inverse mapping from the spherical coordinates $(r, \omega_1, \ldots, \omega_{n-1})$ back to $z$ is given by
$$z_j = \begin{cases} r \prod_{i=1}^{j-1} \sin(\omega_i)\, \cos(\omega_j) & \text{if } 1 \le j \le n-1, \\ r \prod_{i=1}^{n-1} \sin(\omega_i) & \text{if } j = n. \end{cases}$$
It is easy to see that the Jacobian matrix for this inverse mapping involves $r$ not at all in the first row and as a multiplicative factor in each of the other rows, so that the corresponding absolute Jacobian determinant is $r^{n-1} K(\omega)$ for a function $K$ of $\omega = (\omega_1, \ldots, \omega_{n-1})$. It follows immediately by the change-of-variable formula for densities that
$$f_{R,\omega}(r, \omega) = C\, r^{n-1} e^{-r^2/2} K(\omega),$$
which immediately provides the independence of $R$ and $\omega$ and (after adjustment of constants) the separate densities of $R$ and $\omega$.
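A similar simulation sketch for (2) (again not part of the assigned solution; n and the simulation size are arbitrary choices) checks both the chi density of $R$ and the rotation-invariance of the direction $Z/R$:

set.seed(2)
n = 3; nsim = 20000
Z = matrix(rnorm(n*nsim), nsim, n)   ### rows are draws of Z ~ N(0, I_n)
R = sqrt(rowSums(Z^2))
ks.test(R^2, "pchisq", df = n)       ### equivalent to checking the density f_R above
U = Z/R                              ### direction vectors on the unit sphere
round(colMeans(U), 3)                ### near 0, by uniformity of Z/R on the sphere
round(cor(U[,1], R), 3)              ### near 0, consistent with independence of direction and radius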
(3). Bickel-Doksum, #. (a) This one is obviously not identifiable, because the multivariate normal data with mean vector $(\mu + \alpha_1, \ldots, \mu + \alpha_p)$ and variance $\sigma^2 I_{p \times p}$ are left unchanged if $\mu$ is replaced by $\mu + c$ and at the same time all $\alpha_j$ are replaced by $\alpha_j - c$, where $c$ is any real number.
(b) Now we have (only) to show that with $\alpha$ restricted so that $\sum_{j=1}^p \alpha_j = 0$, the mapping from $\vartheta$ to $(\xi_1, \ldots, \xi_{p+1}) \equiv (\mu + \alpha_1, \ldots, \mu + \alpha_p, \sigma^2)$ is 1-to-1. But if $(\mu + \alpha_1, \ldots, \mu + \alpha_p)$ are given, then under this side-condition the sum of these $p$ quantities is $p\mu$. Thus, when $\vartheta$ satisfies the side-condition, the proof that the mapping is 1-to-1 is completed by the formulas
$$\mu = (\xi_1 + \cdots + \xi_p)/p \equiv \bar\xi, \qquad \alpha_j = \xi_j - \bar\xi, \qquad \sigma^2 = \xi_{p+1}.$$
(c) Since we observe only $Y - X \sim N(\mu_1 - \mu_2, \sigma^2)$, there is no way to tell $\vartheta' = (\mu_1 + c,\ \mu_2 + c)$ from $\vartheta = (\mu_1, \mu_2)$, even if $\sigma^2$ is known.
(d) This one also is nonidentifiable, since the parameters $\theta$ and $\theta' = (\alpha + c\,\mathbf{1}_p,\ \lambda - c\,\mathbf{1}_b,\ \nu,\ \sigma^2)$ give rise to exactly the same data distribution.
(e) But here the parameter is identifiable, since
$$\nu = \frac{1}{pb} \sum_{i=1}^p \sum_{j=1}^b E(X_{ij}), \qquad \nu + \alpha_i = \frac{1}{b} \sum_{j=1}^b E(X_{ij}), \qquad \nu + \lambda_j = \frac{1}{p} \sum_{i=1}^p E(X_{ij}).$$

(4). Bickel-Doksum, #. (a) $\mathrm{Unif}(0, \vartheta)$ cannot be regular, since the set where the density is positive is $(0, \vartheta)$, which depends on $\vartheta$. (b) The set where the probability mass function is positive is $\{1, 2, \ldots, \vartheta\}$, which depends on $\vartheta$, and therefore the model is not regular. (c) This model, called truncated normal, is not regular because it is of mixed type, neither continuous nor discrete. (d) The shifted discrete values fall in the set $\{.1 + \vartheta,\ .2 + \vartheta,\ \ldots,\ .9 + \vartheta\}$ where the probability mass function is positive; since this set depends on $\vartheta$, again the model is not regular.

(5). Bickel-Doksum, #. (a) The model is $X \sim N(\mu, \sigma^2) \equiv F$, and $Y \equiv 2\mu + \Delta - X \sim N(2\mu + \Delta - \mu,\ \sigma^2) = N(\mu + \Delta,\ \sigma^2)$, so $G(\cdot) \equiv F(\cdot - \Delta)$.
(b) If $F(\cdot) \equiv F_0(\cdot - \mu)$, with $F_0$ continuous but not normal, and we define $Y = 2\mu + \Delta - X$, then
$$P(Y \le y) = P(X \ge 2\mu + \Delta - y) = 1 - F_0(\Delta - (y - \mu)),$$
which implies that if $F_0$ is symmetric about $0$ (so that $X$ is symmetric about its mean $\mu$), then
$$P(Y - \mu \le z) = 1 - F_0(\Delta - z) = F_0(z - \Delta),$$
and $G = F_Y = F_X(\cdot - \Delta)$.
(c) Assume that $Y = X + \delta(X)$ for some function $\delta(\cdot)$ which is continuous and strictly increasing, and also that $G(y) = F_Y(y) = F_X(y - \Delta)$. Let $H(\cdot)$ be defined as the unique inverse of the function $x + \delta(x)$, i.e., let $x = H(z)$ be the solution to $x + \delta(x) = z$. Then $H$ is strictly increasing, and
$$F(y - \Delta) = G(y) = P(X + \delta(X) \le y) = P(X \le H(y)) = F(H(y))$$
implies that $H(y) = y - \Delta$ for all points $y$ such that $y - \Delta$ is a point of increase of $F$ (defined as a point $u$ such that for all $\epsilon > 0$, $F(u + \epsilon) > F(u - \epsilon)$); i.e., for all such $y$, $(y - \Delta) + \delta(y - \Delta) = y$, or $\delta(y - \Delta) = \Delta$. We conclude $\delta(\cdot) \equiv \Delta$ if $F$ has no flat places, i.e., if every point is a point of increase.
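The distributional identity in (5)(b) is easy to check by simulation (a sketch; the choice of $F_0$ as a $t_3$ distribution and the values of $\mu, \Delta$ are mine, not from the problem):

set.seed(3)
mu = 1; Delta = 0.7
X1 = mu + rt(20000, df = 3)     ### X ~ F_0(. - mu): symmetric about mu, continuous, not normal
X2 = mu + rt(20000, df = 3)     ### independent copy, for a two-sample test
Y = 2*mu + Delta - X1
ks.test(Y, X2 + Delta)$p.value  ### Y and X + Delta have the same law: expect a large p-value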
(6). Bickel-Doksum, #. Here the parameters are the probability vectors $(p(j),\ 0 \le j \le N)$ and $(r(j),\ 0 \le j \le N)$, and the probability law immediately determines the family of joint probability masses
$$\pi_1(j) = P(T = j \le C) = P(Y = j,\ T \le C), \qquad \pi_0(j) = P(C = j < T) = P(Y = j,\ C < T),$$
where $Y = \min(T, C)$; note also that $P(Y = j) = \pi_1(j) + \pi_0(j)$. Our job is to show that these functions $\pi_1, \pi_0$ uniquely determine the vectors $p, r$. Note first that, by the independence of $T$ and $C$,
$$P(Y \ge j) = P(T \ge j,\ C \ge j) = \Big(\sum_{k=j}^N p(k)\Big)\Big(\sum_{k=j}^N r(k)\Big), \qquad P(Y = j,\ T \le C) = \sum_{k=j}^N P(T = j,\ C = k) = p(j) \sum_{k=j}^N r(k).$$
Putting these facts together gives
$$P(Y = j,\ T \le C \mid Y \ge j) = \frac{P(Y = j,\ T \le C)}{P(Y \ge j)} = \frac{p(j) \sum_{k=j}^N r(k)}{\big(\sum_{k=j}^N p(k)\big)\big(\sum_{k=j}^N r(k)\big)} = \frac{p(j)}{\sum_{k=j}^N p(k)}.$$
Thus the functions $\pi_1, \pi_0$ uniquely determine $G(j) \equiv p(j)/\sum_{k=j}^N p(k)$, from which we find $p(0), p(1), \ldots, p(N)$ by induction. From this and the formula for $P(Y \ge j)$ we find also (for all $j$) that $\sum_{k=j}^N r(k)$ is uniquely determined, and therefore $r(j)$ is also. This completes the proof of identifiability.

(7). Bickel-Doksum, #. (a) The posterior probability mass function for $\vartheta$ given $X = k$ is, for each $\vartheta = j/4$, $j = 1, 2, 3$,
$$\pi(\vartheta \mid X = k) = \frac{(1/3)\, \vartheta\, (1 - \vartheta)^k}{\sum_{j=1}^3 (1/3)\, (j/4)\, (1 - j/4)^k} = \frac{\vartheta\, (1 - \vartheta)^k}{\sum_{j=1}^3 (j/4)\, (1 - j/4)^k},$$
which for $k = 2$ has the three values $(9/20,\ 8/20,\ 3/20)$ at $\vartheta = 1/4,\ 1/2,\ 3/4$.
(b) The most probable value for $k = 2$ is $\vartheta = 1/4$. For general $k$, the most probable value is $3/4$ if $k = 0$, $1/2$ if $k = 1$, and $1/4$ if $k \ge 2$.
(c) When the prior is $\beta(r, s)$, the posterior (now a continuous density) is easily checked to be proportional to $t^{r-1}(1-t)^{s-1} \cdot t\,(1-t)^k$, i.e., is $\beta(r+1,\ s+k)$.
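The posterior values in (7) are quickly confirmed in R (a sketch; only the support $\{1/4, 1/2, 3/4\}$ and the likelihood $\vartheta(1-\vartheta)^k$ come from the problem):

theta = c(1,2,3)/4
k = 2
post = theta*(1-theta)^k / sum(theta*(1-theta)^k)   ### the uniform prior 1/3 cancels
post                                                ### 0.45 0.40 0.15, i.e. (9/20, 8/20, 3/20)
sapply(0:3, function(k) theta[which.max(theta*(1-theta)^k)])  ### modes 0.75 0.50 0.25 0.25, matching (b)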
(8) Optional Problem.

> v1 = c(1,-1,0)/sqrt(2)
> v2 = c(1,1,1)/sqrt(3)                    ### orthonormal pair of vectors
> sigma = 8*outer(v1,v1) + 3*outer(v2,v2)
> sigma
     [,1] [,2] [,3]
[1,]    5   -3    1
[2,]   -3    5    1
[3,]    1    1    1

So Sigma^(1/2) =

> M = sqrt(8)*outer(v1,v1) + sqrt(3)*outer(v2,v2)
> M
           [,1]       [,2]      [,3]
[1,]  1.9915638 -0.8368633 0.5773503
[2,] -0.8368633  1.9915638 0.5773503
[3,]  0.5773503  0.5773503 0.5773503

The hyperplane of values of Y is the set of vectors

  (1,2,3) + (column space of M) = (1,2,3) + a*(1,-1,0) + b*(1,1,1),   a, b any real values
                                = (1+a+b, 2-a+b, 3+b)

and we want to know, with W1 and W2 independent N(0,1) coordinates of Y - (1,2,3) along v1 and v2,

  P( (1,2,3) + (1,-1,0)*W1*sqrt(8/2) + (1,1,1)*W2*sqrt(3/3) > 0 )

The inequalities are

  1 + 2*W1 + W2 > 0,   2 - 2*W1 + W2 > 0,   3 + W2 > 0

and the last one is implied by the first two, so the probability is

  P( -1 - W2 < 2*W1 < 2 + W2 )

Numerical bivariate integral:

> integrate(function(w) dnorm(w)*(pnorm(1+w/2)-pnorm(-(1+w)/2)), -1.5, Inf)$val   ### -1.5 because we need 2+W2 > -1-W2
[1]

> library(MASS)                       ### for mvrnorm
> zmat = mvrnorm(10000, 1:3, sigma)   ### check by simulation
> dim(zmat)
[1] 10000     3
> mean(apply(zmat > 0, 1, prod))
[1]                                   ### OK!!
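The probability in (8) can also be cross-checked as a bivariate normal rectangle probability: with $U = 2W_1 + W_2$ and $V = -2W_1 + W_2$, the pair $(U, V)$ is bivariate normal with variances $5$ and covariance $-3$, and the event is $\{U > -1,\ V > -2\}$. A sketch using the mvtnorm package (an extra dependency, not used in the transcript above):

library(mvtnorm)
Sig = matrix(c(5,-3,-3,5), 2, 2)    ### covariance of (U, V) = (2*W1+W2, -2*W1+W2)
pmvnorm(lower = c(-1,-2), upper = c(Inf,Inf), sigma = Sig)   ### P(1+U > 0, 2+V > 0)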