Constructing matrices with optimal block coherence
Andrew Thompson (Duke University). Joint work with Robert Calderbank (Duke) and Yao Xie (Georgia Tech). SIAM Annual Meeting, Chicago, July.
Outline
Background: the subspace packing problem; an equivalent notion: block coherence; who is interested in this?
Prior work: a lower bound on block coherence; deterministic constructions achieving the lower bound.
Our work: almost optimal deterministic constructions with more subspaces; analysis of block coherence for random matrices.
Subspace packing
Let $\{S_1, S_2, \ldots, S_m\}$ be $r$-dimensional subspaces of $\mathbb{C}^n$, $r \le n$.
Each $S_i$ lies in $G(n,r)$, the Grassmann manifold.
Optimal packings maximize the minimum distance between subspaces, for some distance metric: Grassmann packing.
Let $A_i \in \mathbb{C}^{n \times r}$ be an orthonormal basis for $S_i$, $i = 1, 2, \ldots, m$.
Distance metrics
Chordal distance:
$$[d_C(S_i, S_j)]^2 := r - \|A_i^* A_j\|_F^2 = r - \sum_{i=1}^r \lambda_i^2,$$
where $\{\lambda_i\}$ are the singular values of $A_i^* A_j$, $\lambda_1 \ge \lambda_2 \ge \ldots \ge \lambda_r$.
Spectral distance:
$$[d_S(S_i, S_j)]^2 := 1 - \|A_i^* A_j\|_2^2 = 1 - \lambda_1^2.$$
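A minimal numerical sketch of these two definitions (the helper name and example sizes are my own, not from the talk): both distances are read off from the singular values of $A_i^* A_j$.

```python
import numpy as np

def subspace_distances(Ai, Aj):
    """Ai, Aj: n x r matrices with orthonormal columns. Returns (d_chordal, d_spectral)."""
    r = Ai.shape[1]
    s = np.linalg.svd(Ai.conj().T @ Aj, compute_uv=False)   # singular values, descending
    d_chordal = np.sqrt(r - np.sum(s**2))
    d_spectral = np.sqrt(max(0.0, 1.0 - s[0]**2))           # clip guards against rounding
    return d_chordal, d_spectral

# Example: two random 3-dimensional subspaces of R^8
rng = np.random.default_rng(1)
Ai, _ = np.linalg.qr(rng.standard_normal((8, 3)))
Aj, _ = np.linalg.qr(rng.standard_normal((8, 3)))
print(subspace_distances(Ai, Aj))
```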
Block coherence
Let the matrix $A$ be the concatenation of the orthonormal bases for $\{S_1, S_2, \ldots, S_m\}$:
$$A = [A_1 \; A_2 \; \cdots \; A_m], \qquad A_i \in \mathbb{C}^{n \times r}, \; i \in \{1, 2, \ldots, m\}.$$
Define $\mu(A)$, the (worst-case) block coherence of $A$, to be
$$\mu(A) := \max_{i \ne j} \|A_i^* A_j\|_2.$$
If $mr \le n$, $A$ can have orthonormal columns $\implies \mu(A) = 0$.
If $2r > n$, $\mu(A) = 1$, since any two subspaces have nontrivial intersection.
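A direct computation of $\mu(A)$ from this definition, as a sketch (storing the blocks as a Python list and the example sizes are my own choices):

```python
import numpy as np

def block_coherence(blocks):
    """blocks: list of n x r matrices, each with orthonormal columns. Returns mu(A)."""
    m = len(blocks)
    return max(np.linalg.norm(blocks[i].conj().T @ blocks[j], 2)   # spectral norm
               for i in range(m) for j in range(m) if i != j)

# If mr <= n, the blocks can be taken from a single orthonormal basis, giving mu(A) = 0.
Q, _ = np.linalg.qr(np.random.default_rng(2).standard_normal((8, 8)))
blocks = [Q[:, 2*i:2*(i+1)] for i in range(4)]   # n = 8, r = 2, m = 4
print(block_coherence(blocks))                   # ~ 0 up to rounding
```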
Applications
Needed in recovery guarantees for one-step group thresholding in compressed sensing: gives optimal-order guarantees for deterministic matrices.
Blind sensing of multiband signals; DNA microarrays; medical imaging, e.g. ECG and EEG/MEG brain imaging; group model selection in statistics.
Special case: the multiple measurement vector (MMV) model; sensor networks and MIMO.
Grassmann packings: multiple-antenna (MIMO) communication systems.
Known bounds on packing distance
Rankin bound for chordal distance (Conway/Hardin/Sloane 1996):
$$\min_{i \ne j} [d_C(S_i, S_j)]^2 \le \frac{r(n-r)}{n} \cdot \frac{m}{m-1},$$
with equality only if all pairs of subspaces are equidistant.
Bound for spectral distance (Dhillon/Heath/Strohmer/Tropp 2008, Lemmens/Seidel 1973):
$$\min_{i \ne j} [d_S(S_i, S_j)]^2 \le \frac{n-r}{n} \cdot \frac{m}{m-1},$$
with equality only if the subspaces are equi-isoclinic: all $\{\lambda_i\}$ among all $A_i^* A_j$ for $i \ne j$ are equal in magnitude.
Lower bound on block coherence
Lower bound (Lemmens/Seidel 1973):
$$\mu(A) \ge \sqrt{\frac{mr-n}{n(m-1)}},$$
with equality only if all $\{\lambda_i\}$ among all $A_i^* A_j$ for $i \ne j$ are equal in modulus.
Proof: The spectral distance bound gives
$$1 - \max_{i \ne j} \|A_i^* A_j\|_2^2 \le \frac{n-r}{n} \cdot \frac{m}{m-1},$$
which rearranges to give
$$\max_{i \ne j} \|A_i^* A_j\|_2^2 \ge \frac{n(m-1) - m(n-r)}{n(m-1)} = \frac{mr-n}{n(m-1)}.$$
This extends the Welch bound (1974) to $r > 1$.
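The rearrangement in the proof is a one-line algebraic identity; here is a quick symbolic check (my own sketch, using sympy), together with the numerical value of the bound for one illustrative size:

```python
import sympy as sp

n, r, m = sp.symbols('n r m', positive=True)
from_spectral_bound = 1 - ((n - r)/n) * (m/(m - 1))   # lower bound implied for mu(A)^2
target = (m*r - n) / (n*(m - 1))
print(sp.simplify(from_spectral_bound - target))      # 0: the two expressions agree

# Numerical value of the block-coherence lower bound for an example size
print(float(sp.sqrt(target.subs({n: 64, r: 4, m: 20}))))   # about 0.115
```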
Can the bound be achieved?
For $r = 1$, the bound is achieved if $A$ is an equiangular tight frame (ETF) (Welch 1974):
The columns of $A$ are unit norm.
The inner products between pairs of different columns are equal in modulus.
The columns form a tight frame, that is, $AA^* = (mr/n)\, I$.
Several infinite families of ETFs are known (Tropp 2005, Fickus/Mixon/Tremain 2012, Jasper/Mixon/Fickus 2013).
For $r > 1$, some small examples are known (Conway/Hardin/Sloane 1996), and numerical methods have been proposed to approximately construct optimal packings (Dhillon/Heath/Strohmer/Tropp 2008).
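A minimal ETF sketch for $r = 1$ (my own illustrative example, not one of the families cited above): three unit vectors in $\mathbb{R}^2$ at 120-degree angles meet the Welch bound and form a tight frame.

```python
import numpy as np

n, m = 2, 3
angles = 2 * np.pi * np.arange(m) / m
A = np.vstack([np.cos(angles), np.sin(angles)])     # n x m, unit-norm columns

gram = A.T @ A
off_diag = np.abs(gram[~np.eye(m, dtype=bool)])
welch = np.sqrt((m - n) / (n * (m - 1)))            # Welch bound = 1/2 here
print(np.allclose(off_diag, welch))                 # True: equiangular, at the bound
print(np.allclose(A @ A.T, (m / n) * np.eye(n)))    # True: tight frame, A A^* = (m/n) I
```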
An optimal construction
Kronecker product construction (Lemmens/Seidel 1973, CTX 2013): Let $A = P \otimes Q$, where $P \in \mathbb{C}^{(n/r) \times m}$ is an ETF and $Q \in \mathbb{C}^{r \times r}$ is a unitary matrix. Then the columns in each block are orthonormal, and
$$\mu(A) = \sqrt{\frac{mr-n}{n(m-1)}}.$$
Writing $P = [p_1 \; p_2 \; \cdots \; p_m]$ for the columns of $P$, the proof hinges on the fact that $\|A_i^* A_j\|_2 = |\langle p_i, p_j \rangle|$.
For every infinite family of ETFs (e.g. Steiner/Kirkman), this gives an infinite family of Grassmann packings.
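A sketch of the Kronecker construction at a toy size (my own example: the three-vector ETF above as $P$ and a $2 \times 2$ unitary $Q$, so $n = 4$, $r = 2$, $m = 3$):

```python
import numpy as np

P = np.vstack([np.cos(2*np.pi*np.arange(3)/3),
               np.sin(2*np.pi*np.arange(3)/3)])        # 2 x 3 ETF
Q = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)   # 2 x 2 unitary
A = np.kron(P, Q)                                      # 4 x 6, three 4 x 2 blocks
r, m = Q.shape[0], P.shape[1]
n = A.shape[0]

blocks = [A[:, i*r:(i+1)*r] for i in range(m)]
mu = max(np.linalg.norm(Bi.conj().T @ Bj, 2)
         for i, Bi in enumerate(blocks) for j, Bj in enumerate(blocks) if i != j)
print(mu, np.sqrt((m*r - n) / (n*(m - 1))))            # both 0.5: the bound is met
```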
How many subspaces in the packing?
Subspace bound (Lemmens/Seidel 1973): The number of $r$-dimensional equidistant subspaces in $\mathbb{C}^n$ cannot exceed $n^2$; the number of equi-isoclinic subspaces in $\mathbb{C}^n$ cannot exceed $n^2 - r^2 + 1$.
If the ETF $P$ is $(n/r) \times (n/r)^2$, then we obtain $m = (n/r)^2$.
Furthermore, the best known infinite families of ETFs scale as $n \times n^{3/2}$ $\implies$ the number of subspaces is limited to $(n/r)^{3/2}$.
Almost optimal packings
Unions of orthonormal bases (CTX 2013): Suppose $A$ is a union of unitary matrices. Then $\mu(A) \ge \sqrt{r/n}$.
Kronecker product construction 2 (CTX 2013): Let $A = P \otimes Q$, where $P \in \mathbb{C}^{(n/r) \times m}$ is a concatenation of unitary matrices such that all inner products between columns in different blocks have magnitude $\sqrt{r/n}$, and where $Q \in \mathbb{C}^{r \times r}$ is a unitary matrix. Then $A$ is a concatenation of unitary matrices, and $\mu(A) = \sqrt{r/n}$.
Several families of such $P$ exist: e.g. discrete chirp, Gabor, Kerdock.
$P \in \mathbb{C}^{(n/r) \times (n/r)^2} \implies m = (n/r)^2$.
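A small sketch of this second construction (my own instance, not one of the chirp/Gabor/Kerdock families): take $P = [I_d \; F_d]$ with $F_d$ the unitary DFT matrix, so cross-block column inner products all have magnitude $1/\sqrt{d} = \sqrt{r/n}$. This toy $P$ only gives $m = 2(n/r)$ blocks; the families named above push $m$ up to $(n/r)^2$.

```python
import numpy as np

d, r = 4, 2
F = np.fft.fft(np.eye(d)) / np.sqrt(d)          # d x d unitary DFT matrix
P = np.hstack([np.eye(d), F])                   # concatenation of two d x d unitaries
Q = np.eye(r)                                   # any r x r unitary works here
A = np.kron(P, Q)                               # n x (m r), with n = d r = 8, m = 2d = 8
n, m = d * r, P.shape[1]

blocks = [A[:, i*r:(i+1)*r] for i in range(m)]
mu = max(np.linalg.norm(Bi.conj().T @ Bj, 2)
         for i, Bi in enumerate(blocks) for j, Bj in enumerate(blocks) if i != j)
print(mu, np.sqrt(r / n))                       # both 0.5: mu(A) = sqrt(r/n)
```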
Block coherence of random subspaces
Suppose we have $m$ random subspaces $S_i$ of $\mathbb{R}^n$, $i = 1, 2, \ldots, m$, each distributed i.i.d. uniformly on $G(n,r)$.
Each $A_i$ is a random orthogonal matrix: its distribution is rotation-invariant.
$A_i$ can be obtained as the singular vectors of a Gaussian matrix.
We consider the distribution of $(\lambda_1, \lambda_2, \ldots, \lambda_r)$, the squared singular values of $A_i^* A_j$.
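A sketch of this random model (sizes are my own illustrative choices): sample Haar-random orthonormal bases as left singular vectors of Gaussian matrices and inspect the largest squared singular value of $A_i^T A_j$ across pairs.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, m = 100, 10, 20

def random_basis(n, r, rng):
    # Orthonormal basis of a uniformly random r-dimensional subspace of R^n
    # (left singular vectors of an n x r Gaussian matrix; QR would also work).
    U, _, _ = np.linalg.svd(rng.standard_normal((n, r)), full_matrices=False)
    return U

bases = [random_basis(n, r, rng) for _ in range(m)]
mu_sq = max(np.linalg.norm(bases[i].T @ bases[j], 2)**2
            for i in range(m) for j in range(m) if i != j)
print(mu_sq, r / n)     # largest squared singular value across pairs vs. beta = r/n
```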
A joint distribution for $A_i^* A_j$ (Absil/Edelman/Koev 2006)
If $n \ge 2r$, then $(\lambda_1, \lambda_2, \ldots, \lambda_r) \in \mathbb{R}_+^r$ follow the multivariate beta distribution $\mathrm{Beta}_r\!\left(\tfrac{r}{2}, \tfrac{n-r}{2}\right)$, with pdf
$$f(\lambda_1, \lambda_2, \ldots, \lambda_r) = c_{n,r} \prod_{i<j} (\lambda_i - \lambda_j) \prod_{i=1}^r \lambda_i^{-\frac{1}{2}} (1 - \lambda_i)^{\frac{1}{2}(n - 2r - 1)},$$
where
$$c_{n,r} := \frac{\pi^{\frac{1}{2} r^2} \, \Gamma_r\!\left(\frac{n}{2}\right)}{\left[\Gamma_r\!\left(\frac{r}{2}\right)\right]^2 \Gamma_r\!\left(\frac{n-r}{2}\right)}$$
and the multivariate gamma function $\Gamma_r(p)$ is defined as
$$\Gamma_r(p) := \pi^{\frac{1}{4} r(r-1)} \prod_{j=1}^{r} \Gamma\!\left(p + \frac{1-j}{2}\right).$$
An asymptotic bound on $\mu(A)$
Let $(m, n, r) \to \infty$ such that $r/n \to \beta$.
Asymptotic bound (CTX 2013): Let $A$ be formed from i.i.d. random orthogonal matrices. Given $\beta \in (0, 1/2)$, there exists a small constant $\hat{a}(\beta)$ such that
$$\mathbb{P}\left\{ [\mu(A)]^2 \ge \hat{a}(\beta)\,\beta + \epsilon \right\} \to 0.$$
Here $\hat{a}(\beta)$ is the solution in $2 \le a < 1/\beta$ to the equation
$$\beta \ln a + \frac{1-2\beta}{2} \ln(1 - a\beta) - (1-\beta)\ln(1-\beta) = 0.$$
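The constant $\hat{a}(\beta)$ can be found numerically from the equation above; a sketch using scipy, with the bracket taken from the stated range $2 \le a < 1/\beta$:

```python
import numpy as np
from scipy.optimize import brentq

def a_hat(beta):
    """Root of Psi(a,beta) = beta ln a + ((1-2 beta)/2) ln(1 - a beta) - (1-beta) ln(1-beta)."""
    psi = lambda a: (beta*np.log(a)
                     + 0.5*(1 - 2*beta)*np.log(1 - a*beta)
                     - (1 - beta)*np.log(1 - beta))
    return brentq(psi, 2.0, 1.0/beta - 1e-6)   # Psi changes sign on [2, 1/beta)

for beta in (0.05, 0.1, 0.2):
    print(beta, a_hat(beta))
```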
An asymptotic bound on $\mu(A)$, continued
[Figure] The factor $\hat{a}(\beta)$ for random subspaces: theoretical upper bound (blue) and the empirical average from 1000 trials with $n = 1000$ and $m = (n/r)^2$ (red).
Another analysis of chordal distance (Bodmann 2013)
Same random subspace model. Shows that $\frac{1}{r}\|A_i^* A_j\|_F^2 \to \beta$, with no multiplicative constant.
This fits: the multivariate beta converges to a fixed distribution asymptotically, so the expected and largest squared singular values remain different.
Overview of proof 1
Bound the pdf of $\lambda_1$ by a univariate beta density.
Let $R$ be the region of $(r-1)$-dimensional space with $\lambda_i \ge 0$, $i = 2, 3, \ldots, r$, and, given $\lambda_1 > 0$, let $R_{\lambda_1}$ be the sub-region of $R$ consisting of all $(\lambda_2, \ldots, \lambda_r)$ such that $\lambda_1 \ge \lambda_2, \lambda_3, \ldots, \lambda_r \ge 0$.
$$f(\lambda_1) = c_{n,r}\, \lambda_1^{-\frac{1}{2}} (1-\lambda_1)^{\frac{1}{2}(n-2r-1)} \int_{R_{\lambda_1}} \prod_{i<j} (\lambda_i - \lambda_j) \prod_{i=2}^r \lambda_i^{-\frac{1}{2}} (1-\lambda_i)^{\frac{1}{2}(n-2r-1)} \, d\lambda_i$$
Overview of proof 2
Separating out the Vandermonde factors involving $\lambda_1$, bounding each $(\lambda_1 - \lambda_j) \le \lambda_1$, and enlarging the region of integration from $R_{\lambda_1}$ to $R$:
$$f(\lambda_1) \le c_{n,r}\, \lambda_1^{\,r - \frac{3}{2}} (1-\lambda_1)^{\frac{1}{2}(n-2r-1)} \int_{R} \prod_{2 \le i<j} (\lambda_i - \lambda_j) \prod_{i=2}^r \lambda_i^{-\frac{1}{2}} (1-\lambda_i)^{\frac{1}{2}(n-2r-1)} \, d\lambda_i$$
$$\implies f(\lambda_1) \le \frac{c_{n,r}}{c_{n-2,r-1}}\, \lambda_1^{\frac{1}{2}(2r-1) - 1} (1-\lambda_1)^{\frac{1}{2}(n-2r+1) - 1}.$$
This is a multiple of a univariate beta pdf.
Overview of proof 3
$$\implies F(\lambda_1) \le \frac{\sqrt{\pi}\, B\!\left[\tfrac{1}{2}(2r-1), \tfrac{1}{2}(n-2r+1)\right]}{B\!\left[r, \tfrac{1}{2}(n-r)\right]}\, I_{1-\lambda_1}\!\left[\tfrac{1}{2}(n-2r+1), \tfrac{1}{2}(2r-1)\right],$$
where the regularized incomplete beta function (RIBF) $I_x(p,q)$ is
$$I_x(p,q) := \frac{1}{B(p,q)} \int_0^x t^{p-1} (1-t)^{q-1} \, dt.$$
Proportional-dimensional asymptotic for the RIBF (T 2012): Let $0 < x < 1$, $p/(p+q) > x$, $p/(p+q) \to \rho$. Then
$$\lim_{p+q \to \infty} \frac{1}{p+q} \ln I_x(p,q) = -\left[\rho \ln\!\left(\frac{\rho}{x}\right) + (1-\rho) \ln\!\left(\frac{1-\rho}{1-x}\right)\right].$$
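A quick numerical check of the stated RIBF asymptotic (my own, with illustrative values of $x$ and $\rho$), using scipy's regularized incomplete beta function:

```python
import numpy as np
from scipy.special import betainc

x, rho = 0.2, 0.4
limit = -(rho*np.log(rho/x) + (1 - rho)*np.log((1 - rho)/(1 - x)))
for N in (100, 400, 1600):                 # N plays the role of p + q
    p, q = rho*N, (1 - rho)*N
    print(N, np.log(betainc(p, q, x)) / N, limit)
# The middle column approaches the limit (about -0.105 here) as N grows.
```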
Overview of proof 4
Write $\lambda_1 = a\beta$. Then
$$\lim_{n \to \infty} \frac{1}{n} \ln F(a\beta) \le \underbrace{\beta \ln a + \frac{1-2\beta}{2} \ln(1 - a\beta) - (1-\beta)\ln(1-\beta)}_{\Psi(a,\beta)},$$
so that $F(a\beta) \le C\, e^{n\Psi(a,\beta)}$.
If $\hat{a}(\beta)$ solves $\Psi(a,\beta) = 0$, then $\mathbb{P}(\lambda_1 > \hat{a}(\beta)\,\beta + \epsilon)$ is exponentially small.
Union bounding over the $\binom{m}{2}$ matrices $A_i^* A_j$,
$$\mathbb{P}\left([\mu(A)]^2 > \hat{a}(\beta)\,\beta + \epsilon\right) \to 0.$$
Summary
Optimal behaviour is essentially $\mu \approx \sqrt{r/n}$.
Our constructions give the best known number of blocks: $(n/r)^2$.
Random subspaces also have $\mu \approx \sqrt{r/n}$ asymptotically.
References
Equi-isoclinic subspaces of Euclidean space; Lemmens, P. and Seidel, J. (Nederlandse Akademie van Wetenschappen, 1973)
Packing lines, planes, etc.: packings in Grassmannian spaces; Conway, J., Hardin, R. and Sloane, N. (Experimental Mathematics, 1996)
On the largest principal angle between random subspaces; Absil, P.-A., Edelman, A. and Koev, P. (Linear Algebra and its Applications, 2006)
Group model selection using marginal correlations: the good, the bad and the ugly; Bajwa, W. and Mixon, D. (Allerton Conference on Communication, Control and Computing, 2012)
Random fusion frames are nearly equiangular and tight; Bodmann, B. (Linear Algebra and its Applications, 2013)
On block coherence of frames; Calderbank, R., Thompson, A. and Xie, Y. (CTX) (Applied and Computational Harmonic Analysis, to appear, 2014)
One-step group thresholding (OSGT)
1: Inputs: measurement matrix $A = [A_1 \; A_2 \; \cdots \; A_m] \in \mathbb{C}^{n \times mr}$, measurement vector $y \in \mathbb{C}^n$, group sparsity $k$.
2: $[f_1 \; f_2 \; \cdots \; f_m]^T \leftarrow [A_1 \; A_2 \; \cdots \; A_m]^* y$
3: $\left(I, \{\|f_{(j)}\|_2\}\right) \leftarrow \mathrm{SORT}\left(\{\|f_i\|_2\}\right)$
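A sketch of the algorithm in code (the final step, keeping the $k$ blocks with the largest scores, is implied by the recovery theorem below; the exact output convention is my assumption):

```python
import numpy as np

def osgt(A, y, r, k):
    """One-step group thresholding: return indices of the k blocks maximizing ||A_i^* y||_2."""
    n, mr = A.shape
    m = mr // r
    scores = np.array([np.linalg.norm(A[:, i*r:(i+1)*r].conj().T @ y) for i in range(m)])
    return np.sort(np.argsort(scores)[::-1][:k])   # estimated support K_hat
```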
Recovery result for OSGT
Theorem: There exist positive constants $c_k$ and $c_\nu$ such that, if
$$c_k r k < n, \qquad \mu(A) \le \frac{c_\mu}{\log m} \qquad \text{and} \qquad \nu(A) \le c_\nu\, \mu(A) \sqrt{\frac{r \log m}{n}},$$
the output of the one-step group thresholding algorithm satisfies
$$\mathrm{FDP}(\hat{K}) \le 1 - \frac{L}{k}, \qquad \mathrm{NDP}(\hat{K}) \le 1 - \frac{L}{k},$$
for an integer $L$ which depends upon $c_\mu$ and the $\ell_2$-norms $\{\|x_i\|_2\}$.
More information