SUPPLEMENT TO "OPTIMAL RATES OF CONVERGENCE FOR SPARSE COVARIANCE MATRIX ESTIMATION"
University of Pennsylvania and Yale University
Submitted to the Annals of Statistics

SUPPLEMENT TO "OPTIMAL RATES OF CONVERGENCE FOR SPARSE COVARIANCE MATRIX ESTIMATION"

By T. Tony Cai and Harrison H. Zhou
University of Pennsylvania and Yale University

In this supplement we prove the additional technical lemmas used in the proof of Lemma 6.

1. Proof of Lemma 8 (ii). Jensen's inequality yields that for any two densities $q_0$ and $q_1$ with respect to a common dominating measure $\mu$,
\[
\Big( \int |q_1 - q_0| \, d\mu \Big)^2 \le \int \frac{(q_0 - q_1)^2}{q_0} \, d\mu = \int \frac{q_1^2}{q_0} \, d\mu - 1 .
\]
Equation (40) implies
\[
\tilde{\mathbb{E}}_{(\gamma,\lambda_{-1})} \, \mathrm{TV}^2\big( \bar{P}_{(1,0,\gamma,\lambda_{-1})}, \bar{P}_{(1,1,\gamma,\lambda_{-1})} \big) \le c_2^2 < 1,
\]
where $\mathrm{TV}(P,Q)$ denotes the total variation distance between two distributions $P$ and $Q$, which then yields
\[
(58) \qquad \tilde{\mathbb{E}}_{(\gamma,\lambda_{-1})} \, \mathrm{TV}\big( \bar{P}_{(1,0,\gamma,\lambda_{-1})}, \bar{P}_{(1,1,\gamma,\lambda_{-1})} \big) \le c_2 < 1,
\]
due to the simple fact $(\mathrm{Average}\{a_i\})^2 \le \mathrm{Average}\{a_i^2\}$. Note that the total variation affinity satisfies $\|P \wedge Q\| = 1 - \mathrm{TV}(P,Q)$ for any two probability distributions $P$ and $Q$. Equation (58) immediately implies
\[
\tilde{\mathbb{E}}_{(\gamma,\lambda_{-1})} \big\| \bar{P}_{(1,0,\gamma,\lambda_{-1})} \wedge \bar{P}_{(1,1,\gamma,\lambda_{-1})} \big\| \ge 1 - c_2 > 0 .
\]
Thus we have
\[
\big\| \bar{P}_{1,0} \wedge \bar{P}_{1,1} \big\| \ge \tilde{\mathbb{E}}_{(\gamma,\lambda_{-1})} \big\| \bar{P}_{(1,0,\gamma,\lambda_{-1})} \wedge \bar{P}_{(1,1,\gamma,\lambda_{-1})} \big\| \ge 1 - c_2 > 0,
\]
following from Lemma 4.

Footnote: The research of Tony Cai was supported in part by NSF FRG Grant DMS. The research of Harrison Zhou was supported in part by NSF Career Award DMS and NSF FRG Grant DMS.
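The first inequality above (squared $L_1$ distance bounded by the chi-square-type divergence) can be checked numerically on a small discrete example. This sketch is not part of the original proof; the two densities below are arbitrary illustrative choices on a three-point support with counting measure.

```python
import numpy as np

# Two discrete densities q0, q1 on a common finite support (counting measure).
q0 = np.array([0.5, 0.3, 0.2])
q1 = np.array([0.4, 0.4, 0.2])

# Squared L1 distance: ( integral of |q1 - q0| d(mu) )^2.
l1_sq = np.sum(np.abs(q1 - q0)) ** 2

# Chi-square-type quantity: integral of (q0 - q1)^2 / q0 d(mu),
# which equals integral of q1^2 / q0 d(mu) - 1 when both densities sum to 1.
chi_sq = np.sum((q0 - q1) ** 2 / q0)
alt = np.sum(q1 ** 2 / q0) - 1.0

print(l1_sq <= chi_sq, np.isclose(chi_sq, alt))
```

Both checks hold on this example, mirroring the two steps of the displayed inequality.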
2. Proof of Lemma 10. Write
\[
\Sigma_1 - \Sigma_0 = \epsilon_{n,p} \begin{pmatrix} 0 & v^{T} \\ v & 0_{(p-1)\times(p-1)} \end{pmatrix},
\qquad
\Sigma_2 - \Sigma_0 = \epsilon_{n,p} \begin{pmatrix} 0 & v_1^{T} \\ v_1 & 0_{(p-1)\times(p-1)} \end{pmatrix},
\]
where $v = (v_j)_{2 \le j \le p}$ satisfies $v_j = 0$ for $2 \le j \le p-r$ and $v_j \in \{0,1\}$ for $p-r+1 \le j \le p$ with $\|v\|_0 = k$, and $v_1 = (v_{1j})_{2 \le j \le p}$ satisfies a similar property. Without loss of generality we consider only a special case with
\[
v_j = \begin{cases} 1, & p-r+1 \le j \le p-r+k, \\ 0, & \text{otherwise,} \end{cases}
\qquad
v_{1j} = \begin{cases} 1, & p-r+k-J+1 \le j \le p-r+2k-J, \\ 0, & \text{otherwise.} \end{cases}
\]
It is easy to see that $(\Sigma_1 - \Sigma_0)(\Sigma_2 - \Sigma_0) = (q_{ij})$ with
\[
q_{ij} = \begin{cases}
J\epsilon_{n,p}^2, & i = j = 1, \\
\epsilon_{n,p}^2, & p-r+1 \le i \le p-r+k \ \text{and} \ p-r+k-J+1 \le j \le p-r+2k-J, \\
0, & \text{otherwise.}
\end{cases}
\]
It is clear that the rank of $(\Sigma_1 - \Sigma_0)(\Sigma_2 - \Sigma_0)$ is 2. A straightforward calculation shows that the characteristic polynomial is
\[
\det\big[ \lambda I_{p} - (\Sigma_1 - \Sigma_0)(\Sigma_2 - \Sigma_0) \big] = \big( \lambda - J\epsilon_{n,p}^2 \big)^2 \lambda^{p-2},
\]
which implies that $(\Sigma_1 - \Sigma_0)(\Sigma_2 - \Sigma_0)$ has two identical nonzero eigenvalues $J\epsilon_{n,p}^2$. Note that this special case corresponds to
\[
I_r = \{ j : p-r+1 \le j \le p-r+k \} \quad \text{and} \quad I_c = \{ j : p-r+k-J+1 \le j \le p-r+2k-J \}.
\]
Hence $I_r \cap I_c = \{ j : p-r+k-J+1 \le j \le p-r+k \}$ with $\mathrm{Card}(I_r \cap I_c) = J$. The general case can be reduced to this special case by matrix permutations.
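The eigenvalue claim of Lemma 10 can be verified numerically on a small instance. This is only a sanity check, not part of the argument; the dimensions $p, r, k, J$ and the value of $\epsilon$ below are arbitrary illustrative choices.

```python
import numpy as np

# Small instance of the special case in the proof; p, r, k, J, eps are
# arbitrary test values chosen only for illustration.
p, r, k, J, eps = 12, 8, 3, 2, 0.1

v = np.zeros(p)    # index 0 plays the role of coordinate 1
v1 = np.zeros(p)
v[p - r : p - r + k] = 1                    # v_j = 1 for p-r+1 <= j <= p-r+k
v1[p - r + k - J : p - r + 2 * k - J] = 1   # shifted block; overlap has size J

def perturbation(u):
    # eps * [[0, u^T], [u, 0]]: first row and first column carry u.
    D = np.zeros((p, p))
    D[0, 1:] = eps * u[1:]
    D[1:, 0] = eps * u[1:]
    return D

Q = perturbation(v) @ perturbation(v1)
eigs = np.sort(np.linalg.eigvals(Q).real)[::-1]

# Two identical nonzero eigenvalues J * eps^2; all remaining eigenvalues vanish.
print(np.allclose(eigs[:2], J * eps ** 2), np.allclose(eigs[2:], 0.0))
```

The two leading eigenvalues coincide with $J\epsilon^2$, matching the characteristic polynomial $(\lambda - J\epsilon_{n,p}^2)^2 \lambda^{p-2}$.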
3. Proof of Lemma 11. Let
\[
(59) \qquad A_1 = I - \big[ I - (\Sigma_0 - \Sigma_1)(\Sigma_0 - \Sigma_2)\Sigma_0^{-2} \big] \big[ I - (\Sigma_0 - \Sigma_1)(\Sigma_0 - \Sigma_2) \big]^{-1},
\]
and note that
\[
R_{(\gamma,\lambda_{-1})}^{\lambda_1,\lambda_1'} = -\log\det(I - A_1).
\]
Then
\[
(60) \qquad
\begin{aligned}
\log\det\big[ I - (\Sigma_0-\Sigma_1)(\Sigma_0-\Sigma_2)\Sigma_0^{-2} \big]
&= \log\det\big\{ [I - A_1]\big[ I - (\Sigma_0-\Sigma_1)(\Sigma_0-\Sigma_2) \big] \big\} \\
&= \log\det\big[ I - (\Sigma_0-\Sigma_1)(\Sigma_0-\Sigma_2) \big] + \log\det(I - A_1) \\
&= 2\log\big( 1 - J\epsilon_{n,p}^2 \big) - R_{(\gamma,\lambda_{-1})}^{\lambda_1,\lambda_1'},
\end{aligned}
\]
where the last equation follows from Lemma 10.

Now we establish Equation (46). It is important to observe that $\mathrm{rank}(A_1) \le 2$, due to the simple structure of $(\Sigma_0-\Sigma_1)(\Sigma_0-\Sigma_2)$. Let $t$ be an eigenvalue of $A_1$. It is easy to see that
\[
(61) \qquad
|t| \le \|A_1\| \le \frac{\|\Sigma_0^{-2} - I\| \, \|(\Sigma_0-\Sigma_1)(\Sigma_0-\Sigma_2)\|}{1 - \|(\Sigma_0-\Sigma_1)(\Sigma_0-\Sigma_2)\|}
\le \Big( \frac{3}{2} \Big)^2 \cdot \frac{1/9}{1 - 1/9} < \frac{1}{2},
\]
since $\|\Sigma_1 - \Sigma_0\| \le \|\Sigma_1 - \Sigma_0\|_1 \le 2k\epsilon_{n,p} \le 1/3$ and $\lambda_{\min}(\Sigma_0) \ge 1 - \|I - \Sigma_0\| \ge 2/3$ from Equation (22), so that $\|\Sigma_0^{-2} - I\| \le \|\Sigma_0^{-2}\| \, \|I - \Sigma_0^2\| \le (3/2)^2$. Note that $|\log(1-x)| \le 2|x|$ for $|x| \le 1/2$, which together with $\mathrm{rank}(A_1) \le 2$ implies
\[
R_{(\gamma,\lambda_{-1})}^{\lambda_1,\lambda_1'} \le 4\|A_1\|,
\quad \text{i.e.} \quad
\exp\Big( \frac{n}{2} R_{(\gamma,\lambda_{-1})}^{\lambda_1,\lambda_1'} \Big) \le \exp\big( 2n\|A_1\| \big).
\]
Note also that
\[
\|I - \Sigma_0\| \le \|I - \Sigma_0\|_1 \le 2k\epsilon_{n,p} \le \frac{1}{3}
\quad \text{and} \quad
\|(\Sigma_0-\Sigma_1)(\Sigma_0-\Sigma_2)\| \le \frac{1}{3} \cdot \frac{1}{3} = \frac{1}{9}.
\]
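The log-determinant factorization in (60) is an algebraic identity given the definition of $A_1$ in (59), and can be confirmed numerically. In this sketch, $\Sigma_0$ and the product matrix $M$ (standing in for $(\Sigma_0-\Sigma_1)(\Sigma_0-\Sigma_2)$) are arbitrary small random test matrices, not the matrices from the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 6
# Sigma0: symmetric and close to the identity; M stands in for the product
# (Sigma0 - Sigma1)(Sigma0 - Sigma2). Both are arbitrary small test matrices.
B = 0.05 * rng.standard_normal((p, p))
Sigma0 = np.eye(p) + (B + B.T) / 2
M = 0.1 * rng.standard_normal((p, p))

S2inv = np.linalg.inv(Sigma0 @ Sigma0)  # Sigma0^{-2}
# A1 as in (59): I - (I - M Sigma0^{-2}) (I - M)^{-1}
A1 = np.eye(p) - (np.eye(p) - M @ S2inv) @ np.linalg.inv(np.eye(p) - M)

# Check: logdet(I - M Sigma0^{-2}) = logdet(I - M) + logdet(I - A1)
lhs = np.linalg.slogdet(np.eye(p) - M @ S2inv)[1]
rhs = np.linalg.slogdet(np.eye(p) - M)[1] + np.linalg.slogdet(np.eye(p) - A1)[1]
print(np.isclose(lhs, rhs))
```

Because both perturbations are small, all determinants involved are positive, so `slogdet`'s log-magnitude is the log-determinant itself.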
We can write
\[
(62) \qquad
\Sigma_0^{-2} = \big[ I - (I - \Sigma_0) \big]^{-2}
= \Big( \sum_{k=0}^{\infty} (I - \Sigma_0)^k \Big)^2
= I + \Big[ \sum_{m=0}^{\infty} (m+2)(I - \Sigma_0)^m \Big] (I - \Sigma_0),
\]
where
\[
\Big\| \sum_{m=0}^{\infty} (m+2)(I - \Sigma_0)^m \Big\| \le \sum_{m=0}^{\infty} (m+2) \Big( \frac{1}{3} \Big)^m = \frac{15}{4}.
\]
Define
\[
(63) \qquad \tilde{A} = (I - \Sigma_0)(\Sigma_0 - \Sigma_1)(\Sigma_0 - \Sigma_2);
\]
then
\[
\|A_1\| \le \big\| \big[ I - (\Sigma_0-\Sigma_1)(\Sigma_0-\Sigma_2) \big]^{-1} \big\| \cdot \Big\| \sum_{m=0}^{\infty} (m+2)(I - \Sigma_0)^m \Big\| \cdot \|\tilde{A}\|
\le \frac{9}{8} \cdot \frac{15}{4} \, \|\tilde{A}\|
\le \frac{27}{4} \max\big\{ \|\tilde{A}\|_1, \|\tilde{A}\|_\infty \big\}
\]
from Equations (59) and (62), using $\|\tilde{A}\| \le \sqrt{\|\tilde{A}\|_1 \|\tilde{A}\|_\infty} \le \max\{\|\tilde{A}\|_1, \|\tilde{A}\|_\infty\}$, so that $\exp(2n\|A_1\|) \le \exp\big( \frac{27}{2} n \max\{\|\tilde{A}\|_1, \|\tilde{A}\|_\infty\} \big)$. It is then sufficient to show
\[
(64) \qquad
\tilde{\mathbb{E}}_{(\lambda_1,\lambda_1')} \tilde{\mathbb{E}}_{(\gamma,\lambda_{-1}) \mid (\lambda_1,\lambda_1')}
\exp\Big( \frac{27}{2} \, n \max\big\{ \|\tilde{A}\|_1, \|\tilde{A}\|_\infty \big\} \Big) \le \frac{3}{2},
\]
where $\tilde{A}$ depends on the values of $\lambda_1, \lambda_1'$ and $(\gamma,\lambda_{-1})$. We drop the indices $\lambda_1, \lambda_1'$ and $(\gamma,\lambda_{-1})$ from $\tilde{A}$ to simplify the notation. Let $\tilde{A} = (\tilde{a}_{ij})$. Then $\|\tilde{A}\|_\infty = \max_{1 \le m \le p} \sum_j |\tilde{a}_{mj}|$ and $\|\tilde{A}\|_1 = \max_{1 \le m \le p} \sum_i |\tilde{a}_{im}|$. We will show that for every non-negative integer $t$ and every absolute row sum we have
\[
P\Big( \sum_j |\tilde{a}_{mj}| \ge 2(t+1)\epsilon_{n,p} \cdot k\epsilon_{n,p}^2 \Big) \le \Big( \frac{8k^2}{p} \Big)^{t+1},
\]
and the same tail bound holds for every absolute column sum, which immediately implies
\[
P\Big( \max\big\{ \|\tilde{A}\|_1, \|\tilde{A}\|_\infty \big\} \ge 2(t+1)\epsilon_{n,p} \cdot k\epsilon_{n,p}^2 \Big) \le 2p \Big( \frac{8k^2}{p} \Big)^{t+1}.
\]
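The series expansion (62) for $\Sigma_0^{-2}$ is a Neumann-series identity that can be checked numerically whenever $\|I - \Sigma_0\| < 1$. The matrix below is an arbitrary symmetric test matrix near the identity, not the $\Sigma_0$ of the construction; the truncation length is an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(1)
p = 5
B = 0.05 * rng.standard_normal((p, p))
Sigma0 = np.eye(p) + (B + B.T) / 2   # ||I - Sigma0|| small, so the series converges

# Partial sum of I + [ sum_{m>=0} (m + 2) (I - Sigma0)^m ] (I - Sigma0),
# i.e. I + sum_{m>=0} (m + 2) (I - Sigma0)^(m+1).
D = np.eye(p) - Sigma0
S = np.eye(p)
term = np.eye(p)
for m in range(60):
    term = term @ D            # term = (I - Sigma0)^(m+1)
    S = S + (m + 2) * term

print(np.allclose(S, np.linalg.inv(Sigma0 @ Sigma0)))
```

Sixty terms are far more than needed here; the remainder decays geometrically in $\|I - \Sigma_0\|$.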
For each row $m$, define $E_m = \{1,2,\ldots,r\} \setminus \{1,m\}$. Note that for each column of $\lambda_{E_m}$, if the column sum of $\lambda_{E_m}$ is less than or equal to $2k-2$, then the other two rows can still freely take values 0 or 1 in this column, because the total column sum will still not exceed $2k$. Let $n_{\lambda_{E_m}}$ be the number of columns of $\lambda_{E_m}$ with column sum at least $2k-1$, and define $p_{\lambda_{E_m}} = r - n_{\lambda_{E_m}}$. Without loss of generality we assume that $k \ge 3$. Since $n_{\lambda_{E_m}}(2k-2) \le rk$, the total number of 1's in the upper triangular matrix by the construction of the parameter set, we thus have
\[
n_{\lambda_{E_m}} \le \frac{rk}{2k-2} \le \frac{3r}{4},
\]
which immediately implies
\[
p_{\lambda_{E_m}} = r - n_{\lambda_{E_m}} \ge \frac{r}{4} \ge \frac{p}{8}.
\]
Recall that the distribution of $(\gamma,\lambda_{-1})$ given $(\lambda_1,\lambda_1')$ is uniform over $\Theta_{(\lambda_1,\lambda_1')}$. Thus from Lemma 10 we have
\[
P\Big( \sum_j |\tilde{a}_{mj}| \ge 2(t+1)\epsilon_{n,p} \cdot k\epsilon_{n,p}^2 \,\Big|\, \lambda_{E_m} \Big)
\le \Big( \frac{k^2}{p_{\lambda_{E_m}}} \Big)^{t+1}
\]
for every non-negative integer $t$, as shown in Equation (47), which immediately implies
\[
P\Big( \sum_j |\tilde{a}_{mj}| \ge 2(t+1)\epsilon_{n,p} \cdot k\epsilon_{n,p}^2 \Big) \le \Big( \frac{8k^2}{p} \Big)^{t+1}
\]
for every non-negative integer $t$.

For any random variable $X \ge 0$ and constant $a \ge 0$ it is known that
\[
\mathbb{E}X = \int_{0}^{\infty} P(X \ge x) \, dx \le a + \int_{a}^{\infty} P(X \ge x) \, dx.
\]
Setting $X = \exp\big( \frac{27}{2} n \max\{\|\tilde{A}\|_1, \|\tilde{A}\|_\infty\} \big)$ and $a = \exp\big( 27n \cdot \frac{2\beta}{\beta-1} \, \epsilon_{n,p} k \epsilon_{n,p}^2 \big)$, where $\beta > 1$ and $p \ge n^{\beta}$ are as defined in Section 3, we have
\[
(65) \qquad
\begin{aligned}
&\tilde{\mathbb{E}}_{(\lambda_1,\lambda_1')} \tilde{\mathbb{E}}_{(\gamma,\lambda_{-1}) \mid (\lambda_1,\lambda_1')}
\exp\Big( \frac{27}{2} n \max\big\{ \|\tilde{A}\|_1, \|\tilde{A}\|_\infty \big\} \Big) \\
&\qquad \le \exp\Big( \frac{54\beta}{\beta-1} \, n \, c_{n,p} \, \epsilon_{n,p}^{3-q} \Big)
+ \int_{\frac{2\beta}{\beta-1}-1}^{\infty} 27 n k \epsilon_{n,p}^3 \,
\exp\Big( \log(2p) - (t+1)\log\frac{p}{8k^2} + 27n(t+1)k\epsilon_{n,p}^3 \Big) \, dt.
\end{aligned}
\]
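The tail-integral representation of the expectation used above, $\mathbb{E}X = \int_0^\infty P(X \ge x)\,dx \le a + \int_a^\infty P(X \ge x)\,dx$, can be illustrated deterministically. In this sketch $X$ is exponential with mean 1 (an arbitrary stand-in, since its survival function $e^{-x}$ integrates in closed form) and $a = 2$ is an arbitrary cutoff.

```python
import math

# Check E[X] = integral_0^inf P(X >= x) dx <= a + integral_a^inf P(X >= x) dx
# for X ~ Exp(1): E[X] = 1 and integral_a^inf P(X >= x) dx = e^{-a}.
def tail_integral(lo, hi, steps=200_000):
    # Midpoint-rule Riemann sum of the survival function P(X >= x) = e^{-x}.
    h = (hi - lo) / steps
    return sum(math.exp(-(lo + (i + 0.5) * h)) for i in range(steps)) * h

mean = tail_integral(0.0, 50.0)   # approximates E[X] = 1 (tail beyond 50 is negligible)
a = 2.0
bound = a + math.exp(-a)          # a + integral_a^inf P(X >= x) dx

print(abs(mean - 1.0) < 1e-3, mean <= bound)
```

In the proof the same bound is applied with $X$ the exponential moment of $\max\{\|\tilde{A}\|_1, \|\tilde{A}\|_\infty\}$, where only the tail beyond $a$ needs the probability bound.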
Note that $\epsilon_{n,p} = \upsilon \sqrt{\frac{\log p}{n}}$, $k \le c_{n,p} \epsilon_{n,p}^{-q}$, and $\upsilon$ is chosen so small that $\frac{54\beta}{\beta-1} M \upsilon^{3-q} \le \frac{1}{3}$, as defined in Section 3, and $c_{n,p} \le M n^{(1-q)/2} (\log p)^{-(3-q)/2}$ from Equation (3). These facts imply
\[
(66) \qquad
\exp\Big( \frac{54\beta}{\beta-1} \, n \, c_{n,p} \, \epsilon_{n,p}^{3-q} \Big)
\le \exp\Big( \frac{54\beta}{\beta-1} \, M \upsilon^{3-q} \Big)
\le e^{1/3} \le \frac{3}{2},
\]
and also
\[
(67) \qquad
27 n k \epsilon_{n,p}^3 \le 27 M \upsilon^{3-q} \le \frac{\beta-1}{2\beta} \log p \le \frac{1}{2} \log \frac{p}{8k^2}.
\]
The last two equations yield
\[
\int_{\frac{2\beta}{\beta-1}-1}^{\infty} 27 n k \epsilon_{n,p}^3 \,
\exp\Big( \log(2p) - (t+1)\log\frac{p}{8k^2} + 27n(t+1)k\epsilon_{n,p}^3 \Big) \, dt = o(1)
\]
since $\log p \to \infty$. Then Equation (65) is bounded from above by $\frac{3}{2}$, which immediately implies (64) and thus establishes Lemma 11.

Remark. Under the assumption $c_{n,p} \le M n^{(1-q)/2} (\log p)^{-(3-q)/2}$ we obtain a finite upper bound in Equations (66) and (67). It is not clear to us whether this assumption can be weakened to $c_{n,p} \le M n^{(1-q)/2} (\log p)^{-(1-q)/2}$.

Department of Statistics
The Wharton School
University of Pennsylvania
Philadelphia, PA
E-mail: tcai@wharton.upenn.edu
URL: tcai

Department of Statistics
Yale University
New Haven, CT
E-mail: huibin.zhou@yale.edu
URL: hz68