Documentation for the Random Matrix Library
Øyvind Ryan

September 25, 2009

This is the official documentation for the Random Matrix Library. The library contains functionality for computing the asymptotic and exact moments of many types of random matrices, such as Gaussian, Vandermonde, Hankel, Toeplitz, and Wishart matrices. It also contains functionality for summing or multiplying such matrices with deterministic matrices, or for combining them with independent matrices of the same type. Second order results are included where applicable, i.e. expressions for covariances of traces are supported, as is channel capacity estimation involving the mentioned matrices.

In the following, many formulas span several lines and contain many terms. In such cases, one term is written per line in order to avoid overflowing the line. Authors should rearrange such formulas manually so that the full page width is utilized.

The functions described here split naturally into several classes:

- Functions which generate latex code for the formulas in question. All these functions have names starting with latex.
- Functions which perform numeric computations of the moments. All these functions have names starting with numeric. Some of the numeric routines perform convolution, some perform deconvolution. Those performing deconvolution have names ending with deconv.

The library also contains many functions not described here, which are needed internally by the functions that are described. Only the functions described in this document should be considered public.

1 Moments involving finite Gaussian matrices

1.1 Product of a deterministic matrix and a Wishart matrix
thisfile=fopen('exfinitemultwishart.tex','w');
latexfinitemultwishart(4,thisfile);

produces the moments M_p = E[tr((D (1/N) X X^H)^p)] from the moments D_p = tr(D^p), where X is n x N complex, standard, Gaussian, and D is n x n deterministic. The output is stored in the file exfinitemultwishart.tex, and looks like this:

M_1 = D_1
M_2 = D_2 + c D_1^2
M_3 = D_3 + 3c D_2 D_1 + c^2 D_1^3 + (1/N^2) D_3
M_4 = (1 + 5/N^2) D_4 + (4c + 4c/N^2) D_3 D_1 + (2c + c/N^2) D_2^2 + 6c^2 D_2 D_1^2 + c^3 D_1^4

Here, c = n/N. The corresponding numeric function is

moms=numericfinitemultwishart(dmoms,n,N)

where dmoms are the moments D_p, and moms are the moments M_p. The corresponding numeric function for performing deconvolution is

dmoms=numericfinitemultwishartdeconv(moms,n,N)

1.2 Product of a deterministic matrix and many independent Wishart matrices

The numeric function

moms=numericfinitemultwishartn(dmoms,n,Nvector)

computes the moments M_p = E[tr((D (1/N_1) X_1 X_1^H (1/N_2) X_2 X_2^H ... (1/N_k) X_k X_k^H)^p)]
from the moments D_p = tr(D^p) when D is n x n and deterministic, and X_1, X_2, ..., X_k are independent complex, standard, Gaussian matrices of dimensions n x N_1, n x N_2, ..., n x N_k, where Nvector=[N_1 N_2 N_3 ...], and k is the length of Nvector.

1.3 Product of a square deterministic matrix and a selfadjoint Gaussian matrix

thisfile=fopen('exfinitemultgaussiansa.tex','w');
latexfinitemultgaussiansa(4,thisfile);

produces the moments M_p = E[tr(((1/sqrt(N)) D X)^p)] from the moments D_p = tr(D^p), where X is N x N selfadjoint, standard, Gaussian, and D is N x N deterministic. The output is stored in the file exfinitemultgaussiansa.tex, and looks like this:
M_2 = (1/2) D_1^2
M_4 = (1/2) D_2 D_1^2 + (1/(4 N^2)) D_4
M_6 = (3/8) D_2^2 D_1^2 + (1/4) D_3 D_1^3 + O(1/N^2)
M_8 = (1/4) D_2^3 D_1^2 + (1/2) D_3 D_2 D_1^3 + (1/8) D_4 D_1^4 + O(1/N^2)

The odd moments M_1, M_3, ... vanish, and the O(1/N^2) correction terms of M_6 and M_8 are printed in full in the actual output. The corresponding numeric function is

moms=numericfinitemultgaussiansa(dmoms,N)

where dmoms are the moments D_p, and moms are the moments M_p. Deconvolution is impossible in this case (unless it is also assumed that the odd moments of D vanish), so there is no corresponding function for it.
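The printed value of M_2 pins down the normalization of the selfadjoint standard Gaussian matrix: M_2 = (1/2) D_1^2 corresponds to X = (Y + Y^H)/2 with Y a standard complex Gaussian matrix. The following standalone Python sketch (not part of the library; the normalization is our assumption, and the diagonal D is an arbitrary test choice) checks M_2 by Monte Carlo simulation:

```python
import numpy as np

# Monte Carlo check of M_2 = (1/2) D_1^2 for the selfadjoint model.
# ASSUMPTION: the "selfadjoint, standard, Gaussian" matrix is normalized
# as X = (Y + Y^H)/2 with Y a standard complex Gaussian matrix.
rng = np.random.default_rng(1)
N = 80
d = np.linspace(0.5, 1.5, N)     # spectrum of the test matrix D
D = np.diag(d)
D1 = d.mean()                    # D_1 = tr(D), normalized trace

m2 = 0.0
trials = 400
for _ in range(trials):
    Y = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
    X = (Y + Y.conj().T) / 2
    B = (D @ X) / np.sqrt(N)     # (1/sqrt(N)) D X
    m2 += np.trace(B @ B).real / N
m2 /= trials

print(m2, 0.5 * D1**2)           # the two numbers should agree closely
```

With a different normalization of X the constant in front of D_1^2 changes, which is exactly why the convention matters here.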
1.4 Product of a square deterministic matrix and many independent selfadjoint Gaussian matrices

The numeric function

moms=numericfinitemultgaussiansan(dmoms,N,k)

computes the moments M_p = E[tr((D (1/sqrt(N)) X_1 (1/sqrt(N)) X_2 ... (1/sqrt(N)) X_k)^p)] from the moments D_p = tr(D^p) when D is N x N and deterministic, and X_1, X_2, ..., X_k are independent selfadjoint, standard, Gaussian N x N matrices.

1.5 Sum of a deterministic matrix and a Gaussian matrix

thisfile=fopen('exfiniteaddgaussian.tex','w');
latexfiniteaddgaussian(4,thisfile);

produces the moments M_p = E[tr(((1/N) (D + X)(D + X)^H)^p)] from the moments D_p = tr(((1/N) D D^H)^p), where X is n x N complex, standard, Gaussian, and D is n x N deterministic. The output is stored in the file exfiniteaddgaussian.tex, and looks like this:
M_1 = D_1 + 1
M_2 = D_2 + (2 + 2c) D_1 + (1 + c)
M_3 = D_3 + (3 + 3c) D_2 + 3c D_1^2 + (3 + 9c + 3c^2) D_1 + (1 + 3c + c^2) + O(1/N^2)
M_4 = D_4 + (4 + 4c) D_3 + 8c D_2 D_1 + (6 + 16c + 6c^2) D_2 + (14c + 14c^2) D_1^2 + (4 + 24c + 24c^2 + 4c^3) D_1 + (1 + 6c + 6c^2 + c^3) + O(1/N^2)

Here, c = n/N; the correction terms of order 1/N^2 in M_3 and M_4 are printed in full in the actual output. The corresponding numeric function is

moms=numericfiniteaddgaussian(dmoms,n,N,sigma)

where dmoms are the moments D_p, and moms are the moments M_p. The parameter sigma represents the noise variance, and is optional. If present, the moments M_p = E[tr(((1/N) (D + sigma X)(D + sigma X)^H)^p)] are computed. If sigma is not present, sigma=1 as above is assumed. The corresponding numeric function for performing deconvolution is

dmoms=numericfiniteaddgaussiandeconv(moms,n,N,sigma)

sigma is optional in the same way as for numeric convolution. Note that the noise variance is dropped in the latex expression above, since it can be applied outside the actual deconvolution step through an additional rescaling of the moments.
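The first of the printed moments, M_1 = D_1 + 1, holds exactly for any matrix sizes, and M_2 = D_2 + (2 + 2c) D_1 + (1 + c) carries no O(1/N^2) correction. The following standalone Python sketch (not part of the library; sizes and the deterministic D are arbitrary test choices) checks both by Monte Carlo simulation:

```python
import numpy as np

# Monte Carlo check of M_1 = D_1 + 1 and M_2 = D_2 + (2+2c) D_1 + (1+c)
# for the information-plus-noise model W = (1/N)(D+X)(D+X)^H.
rng = np.random.default_rng(2)
n, N = 50, 100
c = n / N
t = np.linspace(0.5, 1.5, n)
Dmat = np.zeros((n, N))
Dmat[np.arange(n), np.arange(n)] = np.sqrt(N) * t   # so (1/N) D D^H = diag(t^2)
D1 = (t**2).mean()        # D_1 = tr((1/N) D D^H)
D2 = (t**4).mean()        # D_2 = tr(((1/N) D D^H)^2)

m1 = m2 = 0.0
trials = 300
for _ in range(trials):
    X = (rng.standard_normal((n, N)) + 1j * rng.standard_normal((n, N))) / np.sqrt(2)
    W = (Dmat + X) @ (Dmat + X).conj().T / N
    m1 += np.trace(W).real / n
    m2 += np.trace(W @ W).real / n
m1 /= trials
m2 /= trials

print(m1, D1 + 1)
print(m2, D2 + (2 + 2*c) * D1 + (1 + c))
```

The same harness, with larger p, makes the O(1/N^2) corrections in M_3 and M_4 visible by varying N.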
1.6 Sum of a square deterministic matrix and a selfadjoint Gaussian matrix

thisfile=fopen('exfiniteaddgaussiansa.tex','w');
latexfiniteaddgaussiansa(4,thisfile);

produces the moments M_p = E[tr(((1/sqrt(N)) (D + X))^p)] from the moments D_p = tr(((1/sqrt(N)) D)^p), where X is N x N selfadjoint, standard, Gaussian, and D is N x N and deterministic. The output is stored in the file exfiniteaddgaussiansa.tex, and looks like this:

M_1 = D_1
M_2 = D_2 + 1/2
M_3 = D_3 + (3/2) D_1
M_4 = D_4 + 2 D_2 + D_1^2 + (1/2 + 1/(4 N^2))

The corresponding numeric function is

moms=numericfiniteaddgaussiansa(dmoms,N)

where dmoms are the moments D_p, and moms are the moments M_p. The corresponding numeric function for performing deconvolution is

dmoms=numericfiniteaddgaussiandeconvsa(moms,N)

1.7 The combination (1/sqrt(N)) X_1 D X_2 + X of independent Gaussian matrices

thisfile=fopen('exfiniteaddgaussian3.tex','w');
latexfiniteaddgaussian3(4,thisfile);
produces the moments M_p = E[tr(((1/M) ((1/sqrt(N)) X_1 D X_2 + X)((1/sqrt(N)) X_1 D X_2 + X)^H)^p)] from the moments D_p = tr((D D^H)^p), where X is N x M complex, standard, Gaussian, X_1 is N x K complex, standard, Gaussian, X_2 is K x M complex, standard, Gaussian, X, X_1, X_2 are independent, and D is K x K and deterministic. The output is stored in the file exfiniteaddgaussian3.tex, and looks like this:

M_1 = N^{-1} K D_1 + 1
M_2 = (N^{-2} M^{-1} K + N^{-1} K) D_2 + (N^{-2} K^2 + N^{-1} M^{-1} K^2) D_1^2 + (2 N^{-1} K + 2 M^{-1} K) D_1 + (1 + N M^{-1})

followed by similar but much longer expressions for M_3 and M_4, which are polynomials in N^{-1}, M^{-1} and K with the moments D_p. The corresponding numeric function is

moms=numericfiniteaddgaussian3(dmoms,K,M,N,sigma)
where dmoms are the moments D_p, and moms are the moments M_p. The parameter sigma represents the noise variance, and is optional. If present, the moments M_p = E[tr(((1/M) ((1/sqrt(N)) X_1 D X_2 + sigma X)((1/sqrt(N)) X_1 D X_2 + sigma X)^H)^p)] are computed. If sigma is not present, sigma=1 as above is assumed. The corresponding numeric function for performing deconvolution is

dmoms=numericfiniteaddgaussian3deconv(moms,K,M,N,sigma)

sigma is optional in the same way as for numeric convolution. Note that the noise variance is dropped in the latex expression above, since it can be applied outside the actual deconvolution step through an additional rescaling of the moments.

2 The moments of many types of random matrices

thisfile=fopen('exmoments.tex','w');
latexmoments(4,'V',thisfile);

produces the limit moments M_p = c lim_{N->inf} tr((V^H V)^p) when V is an N x L Vandermonde matrix with uniform phase distribution and c = lim_{N->inf} L/N. The output is stored in the file exmoments.tex, and looks like this:

M_1 = 1
M_2 = 2
M_3 = 5
M_4 = 44/3

The corresponding numeric routine is

moms=numericmoments(nummoments,'V')

where moms are the moments M_p. This function can also compute the moments of many other types of random matrices. Simply replace 'V' with

'H' : Hankel matrices
'T' : Toeplitz matrices
'VV' : product of two independent Vandermonde matrices with uniform phase distribution
'MP' : the Marchenko-Pastur law
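These limits can also be estimated by direct simulation. The following standalone Python sketch (not part of the library) draws N x L Vandermonde matrices with i.i.d. uniform phases for c = L/N = 1 and estimates the first three moments, which should approach 1, 2 and 5:

```python
import numpy as np

# Monte Carlo estimate of the first moments of V^H V for an N x L
# Vandermonde matrix V with i.i.d. uniform phases, here with c = L/N = 1.
rng = np.random.default_rng(3)
N = L = 300
trials = 40

m = np.zeros(3)
for _ in range(trials):
    w = rng.uniform(0.0, 2.0 * np.pi, L)
    V = np.exp(-1j * np.outer(np.arange(N), w)) / np.sqrt(N)
    A = V.conj().T @ V
    A2 = A @ A
    A3 = A2 @ A
    m += np.array([np.trace(A).real, np.trace(A2).real,
                   np.trace(A3).real]) / L
m /= trials

print(m)   # ≈ [1, 2, 5]
```

Note that tr(V^H V) = 1 holds exactly for every realization, while the higher moments carry O(1/N) finite-size corrections before reaching their limits.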
3 The second order moments of random matrices

thisfile=fopen('ex2moments.tex','w');
latex2moments(6,'V',thisfile);

produces the second order limit moments M_{i,j} = c lim_{N->inf} L C_{i,j}(V^H V), where C_{i,j} denotes the covariance of the traces of the i-th and j-th powers, when V is an N x L Vandermonde matrix with uniform phase distribution and c = lim_{N->inf} L/N. The output is stored in the file ex2moments.tex, and looks like this:

M_{2,2} = 4/3
M_{2,3} = 8
M_{3,3} = 51

together with a value for M_{2,4}. The corresponding numeric routine is

moms=numeric2moments(nummoments,'V')

where moms are the moments M_{i,j}. Currently, uniform Vandermonde is the only matrix type which supports second order moments.

4 Moments involving Vandermonde matrices

4.1 Product of a deterministic matrix and a Vandermonde matrix with uniform phase distribution

thisfile=fopen('exmultvanduniform.tex','w');
latexmultvanduniform(4,thisfile);

produces the limit moments M_p = c lim_{N->inf} tr((V^H V D)^p) from the moments D_p = c tr(D^p) when V is an N x L Vandermonde matrix with uniform phase distribution, D is L x L and c = lim_{N->inf} L/N. The output is stored in the file
exmultvanduniform.tex, and looks like this:

M_1 = D_1
M_2 = D_2 + D_1^2
M_3 = D_3 + 3 D_2 D_1 + D_1^3
M_4 = D_4 + 4 D_3 D_1 + (8/3) D_2^2 + 6 D_2 D_1^2 + D_1^4

The corresponding numeric routine is

moms=numericmultvanduniform(dmoms)

where dmoms are the moments D_p, and moms are the moments M_p. The corresponding numeric function for performing deconvolution is

dmoms=numericmultvanduniformdeconv(moms)

4.2 Sum of a deterministic matrix and a Vandermonde matrix with uniform phase distribution

thisfile=fopen('exaddvanduniform.tex','w');
latexaddvanduniform(4,thisfile);

produces the limit moments M_p = c lim_{N->inf} tr((V^H V + D)^p) from the moments D_p = tr(((1/N) D D^H)^p) when V is an N x L Vandermonde matrix with uniform phase distribution, D is L x L and c = lim_{N->inf} L/N. The output is
stored in the file exaddvanduniform.tex, and looks like this:

M_1 = D_1 + 1
M_2 = D_2 + 2 D_1 + 2
M_3 = D_3 + 3 D_2 + 6 D_1 + 5
M_4 = D_4 + 4 D_3 + 10 D_2 + 2 D_1^2 + 20 D_1 + 44/3

The corresponding numeric routine is

moms=numericaddvanduniform(dmoms)

where dmoms are the moments D_p, and moms are the moments M_p. The corresponding numeric function for performing deconvolution is

dmoms=numericaddvanduniformdeconv(moms)

4.3 Product of a deterministic matrix and a Vandermonde matrix with general phase distribution

thisfile=fopen('exmultvand.tex','w');
latexmultvand(4,thisfile);

produces the limit moments M_p = c lim_{N->inf} tr((V^H V D)^p) from the moments D_p = c tr(D^p) when V is an N x L Vandermonde matrix with general phase distribution, D is L x L and c = lim_{N->inf} L/N. The output is stored in the file
exmultvand.tex, and looks like this:

M_1 = D_1
M_2 = D_2 - D_1^2 + D_1^2 V_2
M_3 = D_3 - 3 D_2 D_1 + 3 D_2 D_1 V_2 + 2 D_1^3 - 3 D_1^3 V_2 + D_1^3 V_3

together with a longer expression for M_4, which additionally involves V_4. Here, V_p denote the limit moments of V^H V. The corresponding numeric routine is

moms=numericmultvand(vmoms,dmoms)

where vmoms are the moments V_p, dmoms are the moments D_p, and moms are the moments M_p. The corresponding numeric function for performing deconvolution is

dmoms=numericmultvanddeconv(vmoms,moms)

Note that this function performs deconvolution in terms of dmoms. It is possible to rewrite this function so that it instead performs deconvolution in terms of vmoms.
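A consistency check on the multiplication formulas above: with uniform phases (and c = 1) the limit moments of V^H V are V_2 = 2 and V_3 = 5, as listed in Section 2, and the general-phase expressions should then collapse to the uniform-phase expressions of Section 4.1. The following standalone Python sketch (not part of the library) verifies this for the arbitrary test values D_p = p:

```python
from fractions import Fraction as F

# General-phase multiplication formulas (Section 4.3) evaluated at the
# uniform-phase Vandermonde moments V_2 = 2, V_3 = 5, compared with the
# uniform-phase formulas of Section 4.1. Test input: D_p = p.
V2, V3 = F(2), F(5)
D1, D2, D3 = F(1), F(2), F(3)

general = [
    D1,
    D2 - D1**2 + D1**2 * V2,
    D3 - 3*D2*D1 + 3*D2*D1*V2 + 2*D1**3 - 3*D1**3*V2 + D1**3*V3,
]
uniform = [
    D1,
    D2 + D1**2,
    D3 + 3*D2*D1 + D1**3,
]
print(general == uniform)   # → True
```

Exact rational arithmetic is used so that the comparison is an identity check rather than a floating-point one.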
4.4 Sum of a deterministic matrix and a Vandermonde matrix with general phase distribution

thisfile=fopen('exaddvand.tex','w');
latexaddvand(4,thisfile);

produces the limit moments M_p = c lim_{N->inf} tr((V^H V + D)^p) from the moments D_p = tr(((1/N) D D^H)^p) when V is an N x L Vandermonde matrix with general phase distribution with limit moments V_p, D is L x L and c = lim_{N->inf} L/N. The output is stored in the file exaddvand.tex, and looks like this:

M_1 = D_1 + 1
M_2 = D_2 + 2 D_1 + V_2
M_3 = D_3 + 3 D_2 + 3 D_1 V_2 + V_3
M_4 = D_4 + 4 D_3 + 2 D_2 + 4 D_2 V_2 - 2 D_1^2 + 2 D_1^2 V_2 + 4 D_1 V_3 + V_4

The corresponding numeric routine is

moms=numericaddvand(vmoms,dmoms)

where vmoms are the moments V_p, dmoms are the moments D_p, and moms are the moments M_p. The corresponding numeric function for performing deconvolution is

dmoms=numericaddvanddeconv(vmoms,moms)

Note that this function performs deconvolution in terms of dmoms. It is possible to rewrite this function so that it instead performs deconvolution in terms of vmoms.
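The same reduction check works for the sum formulas: substituting the uniform-phase moments (for c = 1) V_2 = 2, V_3 = 5, V_4 = 44/3 from Section 2 into the expressions above should reproduce the formulas of Section 4.2. A standalone Python sketch (not part of the library), again with the arbitrary test values D_p = p:

```python
from fractions import Fraction as F

# General-phase sum formulas (Section 4.4) at the uniform-phase moments
# V_2 = 2, V_3 = 5, V_4 = 44/3, compared with Section 4.2. Test: D_p = p.
V2, V3, V4 = F(2), F(5), F(44, 3)
D1, D2, D3, D4 = F(1), F(2), F(3), F(4)

general = [
    D1 + 1,
    D2 + 2*D1 + V2,
    D3 + 3*D2 + 3*D1*V2 + V3,
    D4 + 4*D3 + 2*D2 + 4*D2*V2 - 2*D1**2 + 2*D1**2*V2 + 4*D1*V3 + V4,
]
uniform = [
    D1 + 1,
    D2 + 2*D1 + 2,
    D3 + 3*D2 + 6*D1 + 5,
    D4 + 4*D3 + 10*D2 + 2*D1**2 + 20*D1 + F(44, 3),
]
print(general == uniform)   # → True
```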
4.5 Product of two independent Vandermonde matrices with equal phase distribution

thisfile=fopen('exindmom.tex','w');
latexindmom(4,thisfile);

produces the limit moments M_p = c lim_{N->inf} tr((V_1^H V_2 V_2^H V_1)^p) when V_1 and V_2 are independent N x L Vandermonde matrices with the same general phase distribution with limit moments V_p, and c = lim_{N->inf} L/N. The output is stored in the file exindmom.tex, and looks like this:

M_1 = -1 + V_2
M_2 = -3 + 6 V_2 - 4 V_3 + V_4

followed by longer expressions for M_3 and M_4, which are polynomials with integer coefficients in V_2, ..., V_6 and V_2, ..., V_8 respectively. The corresponding numeric routine is

moms=numericindmom(vmoms)

where vmoms are the moments V_p. It is not possible to perform deconvolution in this case.
4.6 Product of two independent Vandermonde matrices with general phase distributions

The command

moms=numericmult2vand(vmoms1,vmoms2,c1,c2)

produces the moments M_p = lim_{N->inf} tr((V_1^H V_1 V_2^H V_2)^p) from the moments V_p^(1) = lim_{N->inf} tr((V_1^H V_1)^p) (which are vmoms1) and V_p^(2) = lim_{N->inf} tr((V_2^H V_2)^p) (which are vmoms2). V_1 is assumed N_1 x L with lim_{N->inf} L/N_1 = c_1, and V_2 is assumed N_2 x L with lim_{N->inf} L/N_2 = c_2. The lengths of vmoms1 and vmoms2 should be equal, and the number of output moments will equal this length. This method only exists as a numeric routine. The corresponding numeric function for performing deconvolution is

vmoms2=numericmult2vanddeconv(vmoms1,moms)

4.7 Sum of two independent Vandermonde matrices with general phase distributions

The command

moms=numericadd2vand(vmoms1,vmoms2)

produces the moments M_p = lim_{N->inf} tr((V_1^H V_1 + V_2^H V_2)^p) from the moments V_p^(1) = lim_{N->inf} tr((V_1^H V_1)^p) (which are vmoms1) and V_p^(2) = lim_{N->inf} tr((V_2^H V_2)^p) (which are vmoms2). V_1 is assumed N_1 x L with lim_{N->inf} L/N_1 = c_1, and V_2 is assumed N_2 x L with lim_{N->inf} L/N_2 = c_2. The lengths of vmoms1 and vmoms2 should be equal, and the number of output moments will equal this length. This method only exists as a numeric routine. The corresponding numeric function for performing deconvolution is

vmoms2=numericadd2vanddeconv(vmoms1,moms)
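For uniform phases with c_1 = c_2 = 1 the first two sum moments can be estimated by direct simulation: tr(V_i^H V_i) = 1 holds exactly, and the mixed trace tr(V_1^H V_1 V_2^H V_2) has expectation 1 by independence, so the estimates should approach M_1 = 2 and M_2 = 6. A standalone Python sketch (not part of the library):

```python
import numpy as np

# Monte Carlo estimate of the first two moments of V_1^H V_1 + V_2^H V_2
# for independent uniform-phase Vandermonde matrices, c_1 = c_2 = 1.
rng = np.random.default_rng(5)
N = L = 300
trials = 40

m1 = m2 = 0.0
for _ in range(trials):
    V1 = np.exp(-1j * np.outer(np.arange(N), rng.uniform(0, 2*np.pi, L))) / np.sqrt(N)
    V2 = np.exp(-1j * np.outer(np.arange(N), rng.uniform(0, 2*np.pi, L))) / np.sqrt(N)
    S = V1.conj().T @ V1 + V2.conj().T @ V2
    m1 += np.trace(S).real / L
    m2 += np.trace(S @ S).real / L
m1 /= trials
m2 /= trials

print(m1)   # exactly 2 up to rounding: each V_i^H V_i has unit diagonal
print(m2)   # ≈ 2 + 2 + 2 = 6
```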
4.8 The second order moments of a product of a deterministic matrix and a Vandermonde matrix with uniform phase distribution

thisfile=fopen('ex2multvanduniform.tex','w');
latex2multvanduniform(6,thisfile);

produces the second order limit moments M_{i,j} = c lim_{N->inf} L C_{i,j}(D V^H V) from the limit moments D_p = c lim_{N->inf} tr(D^p) when V is an N x L Vandermonde matrix with uniform phase distribution, D is L x L and c = lim_{N->inf} L/N. The output is stored in the file ex2multvanduniform.tex, and looks like this:

M_{2,2} = (4/3) D_2^2
M_{2,3} = 4 D_3 D_2 + 4 D_2^2 D_1

together with longer expressions for M_{2,4} and M_{3,3}. The corresponding numeric routine is

moms=numeric2multvanduniform(dmoms)

where dmoms are the moments D_p, and moms are the moments M_{i,j}.

5 Moments involving free random variables

Note that the following formulas can be obtained from their finite Gaussian/Wishart counterparts by dropping the trailing O(N^{-2}) terms. With these terms dropped one obtains a numerically more efficient implementation, which is used in the following. The latex output shown below is therefore rarely used in computations.
5.1 Product of a deterministic matrix and the Marchenko-Pastur law

thisfile=fopen('exmultmp.tex','w');
latexmultmp(4,thisfile);

produces the limit moments M_p = c lim_{N->inf} E[tr((D (1/N) X X^H)^p)] from the limit moments D_p = c lim_{N->inf} tr(D^p), where c = lim_{N->inf} n/N, D is n x n and X is n x N complex, standard, Gaussian. The output is stored in the file exmultmp.tex, and looks like this:

M_1 = D_1
M_2 = D_2 + D_1^2
M_3 = D_3 + 3 D_2 D_1 + D_1^3
M_4 = D_4 + 2 D_2^2 + 4 D_3 D_1 + 6 D_2 D_1^2 + D_1^4

The corresponding numeric routine is

moms=numericmultmp(dmoms)

where dmoms are the moments D_p, and moms are the moments M_p. The corresponding numeric function for performing deconvolution is

dmoms=numericmultmpdeconv(moms)

Note that this operation is nothing but the moment-cumulant formula.

5.2 Rectangular free convolution

thisfile=fopen('exaddgaussian.tex','w');
latexaddgaussian(4,thisfile);

produces the limit moments M_p = lim_{N->inf} E[tr(((1/N) (D + X)(D + X)^H)^p)]
from the limit moments D_p = lim_{N->inf} tr(((1/N) D D^H)^p), where c = lim_{N->inf} n/N, D is n x N and X is n x N complex, standard, Gaussian. The output is stored in the file exaddgaussian.tex, and looks like this:

M_1 = D_1 + 1
M_2 = D_2 + (2 + 2c) D_1 + (1 + c)
M_3 = D_3 + (3 + 3c) D_2 + 3c D_1^2 + (3 + 9c + 3c^2) D_1 + (1 + 3c + c^2)
M_4 = D_4 + (4 + 4c) D_3 + 8c D_2 D_1 + (6 + 16c + 6c^2) D_2 + (14c + 14c^2) D_1^2 + (4 + 24c + 24c^2 + 4c^3) D_1 + (1 + 6c + 6c^2 + c^3)

Note that this operation actually uses a rewriting to a product of a deterministic matrix and the Marchenko-Pastur law. The corresponding numeric routine is

moms=numericaddgaussian(dmoms,c,sigma)

where dmoms are the moments D_p, and moms are the moments M_p. The parameter sigma represents the noise variance, and is optional. If present, the moments M_p = lim_{N->inf} E[tr(((1/N) (D + sigma X)(D + sigma X)^H)^p)] are computed. If sigma is not present, sigma=1 as above is assumed. The corresponding numeric function for performing deconvolution is

dmoms=numericaddgaussiandeconv(moms,c,sigma)

sigma is optional in the same way as for numeric convolution. Note that the noise variance is dropped in the latex expression above, since it can be applied outside the actual deconvolution step through an additional rescaling of the moments.
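The moment-cumulant formula mentioned in Section 5.1 is, in this normalization, a sum over non-crossing partitions: M_p equals the sum over all non-crossing partitions rho of {1,...,p} of the product over the blocks W of rho of D_|W|. The following standalone Python sketch (not part of the library) evaluates this sum by brute force and checks it against the printed formulas of Section 5.1 for the arbitrary test values D_p = p:

```python
# Brute-force moment-cumulant formula: M_p as a sum over non-crossing
# partitions, checked against the printed formulas of Section 5.1.

def set_partitions(elems):
    """Yield all partitions of the list elems as lists of blocks."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for part in set_partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        yield [[first]] + part

def is_crossing(part):
    """True if some a < b < c < d has a,c in one block and b,d in another."""
    for b1 in part:
        for b2 in part:
            if b1 is b2:
                continue
            for a in b1:
                for c in b1:
                    for b in b2:
                        for d in b2:
                            if a < b < c < d:
                                return True
    return False

D = {1: 1, 2: 2, 3: 3, 4: 4}          # test moments D_p = p

def M(p):
    total = 0
    for part in set_partitions(list(range(p))):
        if not is_crossing(part):
            prod = 1
            for block in part:
                prod *= D[len(block)]
            total += prod
    return total

moments = [M(p) for p in (1, 2, 3, 4)]
# Printed formulas of Section 5.1 evaluated at D_p = p:
formulas = [D[1],
            D[2] + D[1]**2,
            D[3] + 3*D[2]*D[1] + D[1]**3,
            D[4] + 2*D[2]**2 + 4*D[3]*D[1] + 6*D[2]*D[1]**2 + D[1]**4]
print(moments, formulas)   # both are [1, 3, 10, 37]
```

The single crossing partition of {1,2,3,4}, namely {1,3}/{2,4}, is the one excluded term; keeping it (with weight 2/3 instead of 1) is what distinguishes the uniform Vandermonde multiplication formulas of Section 4.1 from the Marchenko-Pastur ones.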
Index

latex2moments
latex2multvanduniform
latexaddgaussian
latexaddvand
latexaddvanduniform
latexfiniteaddgaussian
latexfiniteaddgaussian3
latexfiniteaddgaussiansa
latexfinitemultgaussiansa
latexfinitemultwishart
latexindmom
latexmoments
latexmultmp
latexmultvand
latexmultvanduniform
numeric2moments
numeric2multvanduniform
numericadd2vand
numericadd2vanddeconv
numericaddgaussian
numericaddgaussiandeconv
numericaddvand
numericaddvanddeconv
numericaddvanduniform
numericaddvanduniformdeconv
numericfiniteaddgaussian
numericfiniteaddgaussian3
numericfiniteaddgaussian3deconv
numericfiniteaddgaussiandeconv
numericfiniteaddgaussiandeconvsa
numericfiniteaddgaussiansa
numericfinitemultgaussiansa
numericfinitemultgaussiansan
numericfinitemultwishart
numericfinitemultwishartdeconv
numericfinitemultwishartn
numericindmom
numericmoments
numericmult2vand
numericmult2vanddeconv
numericmultmp
numericmultmpdeconv
numericmultvand
numericmultvanddeconv
numericmultvanduniform
numericmultvanduniformdeconv
More informationEigenvalues and Singular Values of Random Matrices: A Tutorial Introduction
Random Matrix Theory and its applications to Statistics and Wireless Communications Eigenvalues and Singular Values of Random Matrices: A Tutorial Introduction Sergio Verdú Princeton University National
More informationMath 1314 Week #14 Notes
Math 3 Week # Notes Section 5.: A system of equations consists of two or more equations. A solution to a system of equations is a point that satisfies all the equations in the system. In this chapter,
More informationApplications of Random Matrix Theory in Wireless Underwater Communication
Applications of Random Matrix Theory in Wireless Underwater Communication Why Signal Processing and Wireless Communication Need Random Matrix Theory Atulya Yellepeddi May 13, 2013 18.338- Eigenvalues of
More informationTruncations of Haar distributed matrices and bivariate Brownian bridge
Truncations of Haar distributed matrices and bivariate Brownian bridge C. Donati-Martin Vienne, April 2011 Joint work with Alain Rouault (Versailles) 0-0 G. Chapuy (2007) : σ uniform on S n. Define for
More informationDecomposable and Directed Graphical Gaussian Models
Decomposable Decomposable and Directed Graphical Gaussian Models Graphical Models and Inference, Lecture 13, Michaelmas Term 2009 November 26, 2009 Decomposable Definition Basic properties Wishart density
More informationGaussian Random Variables Why we Care
Gaussian Random Variables Why we Care I Gaussian random variables play a critical role in modeling many random phenomena. I By central limit theorem, Gaussian random variables arise from the superposition
More informationExponential tail inequalities for eigenvalues of random matrices
Exponential tail inequalities for eigenvalues of random matrices M. Ledoux Institut de Mathématiques de Toulouse, France exponential tail inequalities classical theme in probability and statistics quantify
More informationFluctuations of Random Matrices and Second Order Freeness
Fluctuations of Random Matrices and Second Order Freeness james mingo with b. collins p. śniady r. speicher SEA 06 Workshop Massachusetts Institute of Technology July 9-14, 2006 1 0.4 0.2 0-2 -1 0 1 2-2
More informationthe robot in its current estimated position and orientation (also include a point at the reference point of the robot)
CSCI 4190 Introduction to Robotic Algorithms, Spring 006 Assignment : out February 13, due February 3 and March Localization and the extended Kalman filter In this assignment, you will write a program
More informationLinear Algebra review Powers of a diagonalizable matrix Spectral decomposition
Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition Prof. Tesler Math 283 Fall 2018 Also see the separate version of this with Matlab and R commands. Prof. Tesler Diagonalizing
More informationIntroduction to Matrix Algebra and the Multivariate Normal Distribution
Introduction to Matrix Algebra and the Multivariate Normal Distribution Introduction to Structural Equation Modeling Lecture #2 January 18, 2012 ERSH 8750: Lecture 2 Motivation for Learning the Multivariate
More information11 a 12 a 21 a 11 a 22 a 12 a 21. (C.11) A = The determinant of a product of two matrices is given by AB = A B 1 1 = (C.13) and similarly.
C PROPERTIES OF MATRICES 697 to whether the permutation i 1 i 2 i N is even or odd, respectively Note that I =1 Thus, for a 2 2 matrix, the determinant takes the form A = a 11 a 12 = a a 21 a 11 a 22 a
More informationThe concentration of a drug in blood. Exponential decay. Different realizations. Exponential decay with noise. dc(t) dt.
The concentration of a drug in blood Exponential decay C12 concentration 2 4 6 8 1 C12 concentration 2 4 6 8 1 dc(t) dt = µc(t) C(t) = C()e µt 2 4 6 8 1 12 time in minutes 2 4 6 8 1 12 time in minutes
More informationMathematical methods in communication June 16th, Lecture 12
2- Mathematical methods in communication June 6th, 20 Lecture 2 Lecturer: Haim Permuter Scribe: Eynan Maydan and Asaf Aharon I. MIMO - MULTIPLE INPUT MULTIPLE OUTPUT MIMO is the use of multiple antennas
More informationMinimization of Quadratic Forms in Wireless Communications
Minimization of Quadratic Forms in Wireless Communications Ralf R. Müller Department of Electronics & Telecommunications Norwegian University of Science & Technology, Trondheim, Norway mueller@iet.ntnu.no
More informationEEL 5544 Noise in Linear Systems Lecture 30. X (s) = E [ e sx] f X (x)e sx dx. Moments can be found from the Laplace transform as
L30-1 EEL 5544 Noise in Linear Systems Lecture 30 OTHER TRANSFORMS For a continuous, nonnegative RV X, the Laplace transform of X is X (s) = E [ e sx] = 0 f X (x)e sx dx. For a nonnegative RV, the Laplace
More information3. Array and Matrix Operations
3. Array and Matrix Operations Almost anything you learned about in your linear algebra classmatlab has a command to do. Here is a brief summary of the most useful ones for physics. In MATLAB matrices
More information. a m1 a mn. a 1 a 2 a = a n
Biostat 140655, 2008: Matrix Algebra Review 1 Definition: An m n matrix, A m n, is a rectangular array of real numbers with m rows and n columns Element in the i th row and the j th column is denoted by
More informationA note on a Marčenko-Pastur type theorem for time series. Jianfeng. Yao
A note on a Marčenko-Pastur type theorem for time series Jianfeng Yao Workshop on High-dimensional statistics The University of Hong Kong, October 2011 Overview 1 High-dimensional data and the sample covariance
More informationA Generalization of Wigner s Law
A Generalization of Wigner s Law Inna Zakharevich June 2, 2005 Abstract We present a generalization of Wigner s semicircle law: we consider a sequence of probability distributions (p, p 2,... ), with mean
More informationToeplitz matrices. Niranjan U N. May 12, NITK, Surathkal. Definition Toeplitz theory Computational aspects References
Toeplitz matrices Niranjan U N NITK, Surathkal May 12, 2010 Niranjan U N (NITK, Surathkal) Linear Algebra May 12, 2010 1 / 15 1 Definition Toeplitz matrix Circulant matrix 2 Toeplitz theory Boundedness
More informationMultivariate Gaussian Analysis
BS2 Statistical Inference, Lecture 7, Hilary Term 2009 February 13, 2009 Marginal and conditional distributions For a positive definite covariance matrix Σ, the multivariate Gaussian distribution has density
More informationUser Guide for Hermir version 0.9: Toolbox for Synthesis of Multivariate Stationary Gaussian and non-gaussian Series
User Guide for Hermir version 0.9: Toolbox for Synthesis of Multivariate Stationary Gaussian and non-gaussian Series Hannes Helgason, Vladas Pipiras, and Patrice Abry June 2, 2011 Contents 1 Organization
More informationSecond Order Freeness and Random Orthogonal Matrices
Second Order Freeness and Random Orthogonal Matrices Jamie Mingo (Queen s University) (joint work with Mihai Popa and Emily Redelmeier) AMS San Diego Meeting, January 11, 2013 1 / 15 Random Matrices X
More informationOn corrections of classical multivariate tests for high-dimensional data. Jian-feng. Yao Université de Rennes 1, IRMAR
Introduction a two sample problem Marčenko-Pastur distributions and one-sample problems Random Fisher matrices and two-sample problems Testing cova On corrections of classical multivariate tests for high-dimensional
More information( nonlinear constraints)
Wavelet Design & Applications Basic requirements: Admissibility (single constraint) Orthogonality ( nonlinear constraints) Sparse Representation Smooth functions well approx. by Fourier High-frequency
More informationx. Figure 1: Examples of univariate Gaussian pdfs N (x; µ, σ 2 ).
.8.6 µ =, σ = 1 µ = 1, σ = 1 / µ =, σ =.. 3 1 1 3 x Figure 1: Examples of univariate Gaussian pdfs N (x; µ, σ ). The Gaussian distribution Probably the most-important distribution in all of statistics
More informationValerio Cappellini. References
CETER FOR THEORETICAL PHYSICS OF THE POLISH ACADEMY OF SCIECES WARSAW, POLAD RADOM DESITY MATRICES AD THEIR DETERMIATS 4 30 SEPTEMBER 5 TH SFB TR 1 MEETIG OF 006 I PRZEGORZAłY KRAKÓW Valerio Cappellini
More informationDISTRIBUTION OF EIGENVALUES OF REAL SYMMETRIC PALINDROMIC TOEPLITZ MATRICES AND CIRCULANT MATRICES
DISTRIBUTION OF EIGENVALUES OF REAL SYMMETRIC PALINDROMIC TOEPLITZ MATRICES AND CIRCULANT MATRICES ADAM MASSEY, STEVEN J. MILLER, AND JOHN SINSHEIMER Abstract. Consider the ensemble of real symmetric Toeplitz
More informationDensity Modeling and Clustering Using Dirichlet Diffusion Trees
p. 1/3 Density Modeling and Clustering Using Dirichlet Diffusion Trees Radford M. Neal Bayesian Statistics 7, 2003, pp. 619-629. Presenter: Ivo D. Shterev p. 2/3 Outline Motivation. Data points generation.
More informationKinetic Theory 1 / Probabilities
Kinetic Theory 1 / Probabilities 1. Motivations: statistical mechanics and fluctuations 2. Probabilities 3. Central limit theorem 1 Reading check Main concept introduced in first half of this chapter A)Temperature
More informationStochastic Optimization One-stage problem
Stochastic Optimization One-stage problem V. Leclère September 28 2017 September 28 2017 1 / Déroulement du cours 1 Problèmes d optimisation stochastique à une étape 2 Problèmes d optimisation stochastique
More informationGaussian Models (9/9/13)
STA561: Probabilistic machine learning Gaussian Models (9/9/13) Lecturer: Barbara Engelhardt Scribes: Xi He, Jiangwei Pan, Ali Razeen, Animesh Srivastava 1 Multivariate Normal Distribution The multivariate
More information14 - Gaussian Stochastic Processes
14-1 Gaussian Stochastic Processes S. Lall, Stanford 211.2.24.1 14 - Gaussian Stochastic Processes Linear systems driven by IID noise Evolution of mean and covariance Example: mass-spring system Steady-state
More informationRecap. Probability, stochastic processes, Markov chains. ELEC-C7210 Modeling and analysis of communication networks
Recap Probability, stochastic processes, Markov chains ELEC-C7210 Modeling and analysis of communication networks 1 Recap: Probability theory important distributions Discrete distributions Geometric distribution
More informationLecture 2. Spring Quarter Statistical Optics. Lecture 2. Characteristic Functions. Transformation of RVs. Sums of RVs
s of Spring Quarter 2018 ECE244a - Spring 2018 1 Function s of The characteristic function is the Fourier transform of the pdf (note Goodman and Papen have different notation) C x(ω) = e iωx = = f x(x)e
More informationExample 1 describes the results from analyzing these data for three groups and two variables contained in test file manova1.tf3.
Simfit Tutorials and worked examples for simulation, curve fitting, statistical analysis, and plotting. http://www.simfit.org.uk MANOVA examples From the main SimFIT menu choose [Statistcs], [Multivariate],
More informationTP computing lab - Integrating the 1D stationary Schrödinger eq
TP computing lab - Integrating the 1D stationary Schrödinger equation September 21, 2010 The stationary 1D Schrödinger equation The time-independent (stationary) Schrödinger equation is given by Eψ(x)
More informationEigenvalue Statistics for Toeplitz and Circulant Ensembles
Eigenvalue Statistics for Toeplitz and Circulant Ensembles Murat Koloğlu 1, Gene Kopp 2, Steven J. Miller 1, and Karen Shen 3 1 Williams College 2 University of Michigan 3 Stanford University http://www.williams.edu/mathematics/sjmiller/
More informationStatistical methods. Mean value and standard deviations Standard statistical distributions Linear systems Matrix algebra
Statistical methods Mean value and standard deviations Standard statistical distributions Linear systems Matrix algebra Statistical methods Generating random numbers MATLAB has many built-in functions
More informationLarge sample covariance matrices and the T 2 statistic
Large sample covariance matrices and the T 2 statistic EURANDOM, the Netherlands Joint work with W. Zhou Outline 1 2 Basic setting Let {X ij }, i, j =, be i.i.d. r.v. Write n s j = (X 1j,, X pj ) T and
More informationLecture 1: Systems of linear equations and their solutions
Lecture 1: Systems of linear equations and their solutions Course overview Topics to be covered this semester: Systems of linear equations and Gaussian elimination: Solving linear equations and applications
More informationLecture 5 Channel Coding over Continuous Channels
Lecture 5 Channel Coding over Continuous Channels I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw November 14, 2014 1 / 34 I-Hsiang Wang NIT Lecture 5 From
More informationConcentration Inequalities for Random Matrices
Concentration Inequalities for Random Matrices M. Ledoux Institut de Mathématiques de Toulouse, France exponential tail inequalities classical theme in probability and statistics quantify the asymptotic
More informationMath 110 HW 3 solutions
Math 0 HW 3 solutions May 8, 203. For any positive real number r, prove that x r = O(e x ) as functions of x. Suppose r
More informationUnderstanding the Differences between LS Algorithms and Sequential Filters
Understanding the Differences between LS Algorithms and Sequential Filters In order to perform meaningful comparisons between outputs from a least squares (LS) orbit determination algorithm and orbit determination
More informationDegrees of Freedom Region of the Gaussian MIMO Broadcast Channel with Common and Private Messages
Degrees of Freedom Region of the Gaussian MIMO Broadcast hannel with ommon and Private Messages Ersen Ekrem Sennur Ulukus Department of Electrical and omputer Engineering University of Maryland, ollege
More informationRandom Matrix: From Wigner to Quantum Chaos
Random Matrix: From Wigner to Quantum Chaos Horng-Tzer Yau Harvard University Joint work with P. Bourgade, L. Erdős, B. Schlein and J. Yin 1 Perhaps I am now too courageous when I try to guess the distribution
More informationLectures 6 7 : Marchenko-Pastur Law
Fall 2009 MATH 833 Random Matrices B. Valkó Lectures 6 7 : Marchenko-Pastur Law Notes prepared by: A. Ganguly We will now turn our attention to rectangular matrices. Let X = (X 1, X 2,..., X n ) R p n
More informationPDEs in Image Processing, Tutorials
PDEs in Image Processing, Tutorials Markus Grasmair Vienna, Winter Term 2010 2011 Direct Methods Let X be a topological space and R: X R {+ } some functional. following definitions: The mapping R is lower
More informationCovariance function estimation in Gaussian process regression
Covariance function estimation in Gaussian process regression François Bachoc Department of Statistics and Operations Research, University of Vienna WU Research Seminar - May 2015 François Bachoc Gaussian
More informationCHAPTER 8: Matrices and Determinants
(Exercises for Chapter 8: Matrices and Determinants) E.8.1 CHAPTER 8: Matrices and Determinants (A) means refer to Part A, (B) means refer to Part B, etc. Most of these exercises can be done without a
More informationFree Probability Theory and Random Matrices
Free Probability Theory and Random Matrices R. Speicher Department of Mathematics and Statistics Queen s University, Kingston ON K7L 3N6, Canada speicher@mast.queensu.ca Summary. Free probability theory
More information