Massive MIMO: Signal Structure, Efficient Processing, and Open Problems II


Mahdi Barzegar
Communications and Information Theory Group (CommIT), Technische Universität Berlin
Heisenberg Communications and Information Theory Group, Freie Universität Berlin
CoSIP Retreat, Berlin, December 2016

Outline
1 Overview
2 One-Shot Channel Vector Estimation
3 Subspace Estimation: Exploiting Spatial Sparsity
4 Exploiting Spatio-Temporal Sparsity
5 Instances of Other Interesting Problems


Overview
The starting point: [Figure: time-frequency grid of bandwidth W and slot duration T_s, with channel vectors h_s(f_1), ..., h_s(f_N) across the subcarriers and subcarrier spacing Δf = 1/T_s.]
We focus on the channel vector in each time-frequency resource block.

Massive MIMO Regime
Isotropic channel model for low-resolution antennas, or a rich scattering channel [1, 2]: h ~ CN(0, σ² I_{M×M}).
In massive MIMO the base station "sees through" the environment: there are p scatterers with p ≪ M.
[Figure: uniform linear array with elements at 0, d, 2d, 3d, 4d, ..., (M−1)d and a user seen under the angle θ_i.]
This implies h ~ CN(0, C) with rank(C) = p ≪ M.
[1] Emre Telatar. "Capacity of Multi-antenna Gaussian Channels." European Transactions on Telecommunications 10.6 (1999), pp. 585-595.
[2] Lizhong Zheng and David N. C. Tse. "Communication on the Grassmann manifold: A geometric approach to the noncoherent multiple-antenna channel." IEEE Transactions on Information Theory 48.2 (2002), pp. 359-383.

2 One-Shot Channel Vector Estimation

Sparse Channel Model for Massive MIMO
Instantaneous sparse channel vector h = Σ_{i=1}^p w_i a(θ_i), where a(θ) ∈ C^M is the array response with [a(θ)]_k = e^{jkπ sin(θ)/sin(θ_max)}.
Compressed sensing methods are used for channel estimation.
Selecting only a few array elements is sufficient to recover h.
[Figure: antenna selection: only a subset of the elements at 0, d, 2d, 3d, 4d, ... is read out.]
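A minimal numerical sketch of this model (the array size M, the sparsity p, the angles, and the θ_max value below are illustrative assumptions, not values from the talk):

```python
import numpy as np

def array_response(theta, M, theta_max=np.pi / 3):
    # [a(theta)]_k = exp(j * k * pi * sin(theta) / sin(theta_max)), k = 0, ..., M-1
    k = np.arange(M)
    return np.exp(1j * k * np.pi * np.sin(theta) / np.sin(theta_max))

M, p = 128, 4                                    # antennas, scatterers (p << M)
rng = np.random.default_rng(0)
thetas = rng.uniform(-np.pi / 3, np.pi / 3, p)   # angles of arrival
w = (rng.standard_normal(p) + 1j * rng.standard_normal(p)) / np.sqrt(2)

# Instantaneous sparse channel vector h = sum_i w_i a(theta_i)
h = sum(w[i] * array_response(thetas[i], M) for i in range(p))

# Low-dimensional sketch via antenna selection: keep m out of M elements
m = 32
B = np.eye(M)[rng.choice(M, size=m, replace=False)]   # selection matrix B (m x M)
x = B @ h                                             # sketch used for compressed-sensing recovery
print(h.shape, x.shape)                               # (128,) (32,)
```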

High-dimensional Estimation with Sparsity Constraints
High-dimensional data with low-dimensional structure: the sparse channel vector h = Σ_{i=1}^p w_i a(θ_i).
Low-dimensional sketches (antenna selection) allow an efficient implementation (fewer RF chains, lower power consumption, ...).
Compressed sensing techniques recover the channel and feed the beamformer design.

3 Subspace Estimation: Exploiting Spatial Sparsity

Subspace Estimation
One can go beyond one-shot compressed sensing and think of subspace estimation: observe the channel over time.
Questions
Q1: How to estimate the underlying subspace? What are the constraints?
Q2: Is subspace information particularly useful?

Q1: How to estimate the subspace?
Some observations:
The local geometry given by {(σ_i², θ_i)}_{i=1}^p is quasi-stationary, with sharp transitions.
In the traditional periodic training scheme we have t ≈ 1000-10000 training samples: y_i = s_i h_i + n_i, x_i = B y_i, i ∈ [t].
[Figure: training timeline s_0, s_1, ..., s_{t−1}, s_t; each training burst of length τ repeats with period T.]

Q1: How to estimate the subspace?
There is an intermediate regime τ ≪ T′ ≪ T, containing 1 ≪ ν ≪ 10000 training samples, in which the subspace information can be exploited.
[Figure: the same training timeline with a window of length T′ covering the samples s_0, s_1, ..., s_{ν−1}, s_ν.]
Typical window size: subspace estimation should be done with around ν ≈ 50-500 samples.

Q2: How to exploit the subspace information?
Example 1
If the signal subspace U ∈ C^{M×q} is known, then h ∈ span(U) approximately; this can be used to improve the estimation of h.
This can be seen as a support estimate for the sparse recovery problem.

Q2: How to exploit the subspace information?
Example 2
This solves the aging problem in mm-wave channels: reject the interference by zero-forcing to the signal subspace rather than zero-forcing to the instantaneous channel vector.
[Figure: training instants s_i, s_{i+1} separated by τ; the channel correlation r_h(δ) decays from 1 with the delay δ, so instantaneous estimates such as h_{i−1} and h_i age quickly.]
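A small sketch of the idea in Example 2 (the dimensions and the random subspace below are illustrative assumptions): instead of zero-forcing to an aged instantaneous estimate of the interference channel, project the beamformer onto the orthogonal complement of the interferer's signal subspace, which stays valid as long as the local geometry does.

```python
import numpy as np

rng = np.random.default_rng(1)
M, q = 64, 3                       # antennas, interferer subspace dimension

# Interfering user's signal subspace (orthonormal basis U_int, M x q)
G = rng.standard_normal((M, q)) + 1j * rng.standard_normal((M, q))
U_int, _ = np.linalg.qr(G)

# Desired user's (estimated) channel
h_des = rng.standard_normal(M) + 1j * rng.standard_normal(M)

# Zero-force to the interference subspace: project h_des onto span(U_int)^perp
P_perp = np.eye(M) - U_int @ U_int.conj().T
v = P_perp @ h_des
v /= np.linalg.norm(v)

# Any interference channel lying in span(U_int) is nulled, even if it keeps
# changing inside that subspace between training instants.
h_int = U_int @ (rng.standard_normal(q) + 1j * rng.standard_normal(q))
print(abs(v.conj() @ h_int))       # ~ 0 up to numerical precision
print(abs(v.conj() @ h_des))       # large: the useful signal is preserved
```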

How to exploit the subspace information?
Example 3
The subspace information can be used to cluster the users based on their signal subspaces, and it is suitable for hybrid digital-analog (HDA) implementations [3]: analog beamforming uses the subspace information, while digital baseband beamforming uses the instantaneous channel information.
[Figure: array of M antennas feeding an analog RF chain, ADCs (uplink) / DACs (downlink), and digital baseband processing.]
[3] Ansuman Adhikary et al. "Joint spatial division and multiplexing for mm-wave channels." IEEE Journal on Selected Areas in Communications 32.6 (2014), pp. 1239-1255.

Algorithm Design
Signal Model
Process {h_i}_{i=1}^ν with a local geometry given by C_h = Σ_{i=1}^p σ_i² a(θ_i) a(θ_i)^H, and unknown time variation.
Noisy observations y_i = h_i + n_i.
Available data size ν ≈ 50-500.
Input Data
Low-dimensional sketches x_i = B y_i with B ∈ C^{m×M}, typically an antenna selection.
Objective
Design a robust algorithm for estimating the signal subspace from ν ≈ 50-500 noisy sketches, by exploiting spatial sparsity.
ε-efficiency criterion: Σ_{i=1}^ν ‖P_U(h_i)‖² ≥ (1−ε) Σ_{i=1}^ν ‖h_i‖² for the q-dimensional signal subspace given by U ∈ C^{M×q} with U^H U = I_{q×q}.
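To make the ε-efficiency criterion concrete, here is a plain baseline sketch under simplifying assumptions (full observations, i.e. B = I, and a sample-covariance eigendecomposition instead of a sparsity-exploiting algorithm; all parameter values are illustrative):

```python
import numpy as np

def array_response(theta, M, theta_max=np.pi / 3):
    return np.exp(1j * np.arange(M) * np.pi * np.sin(theta) / np.sin(theta_max))

rng = np.random.default_rng(2)
M, p, nu, snr = 64, 4, 200, 10.0           # antennas, scatterers, window size, linear SNR (assumed)

thetas = rng.uniform(-np.pi / 3, np.pi / 3, p)
A = np.stack([array_response(t, M) for t in thetas], axis=1)      # M x p

# nu fast-varying snapshots h_i = A w_i, observed as y_i = h_i + n_i
W = (rng.standard_normal((p, nu)) + 1j * rng.standard_normal((p, nu))) / np.sqrt(2)
H = A @ W
noise_var = p / snr                        # so that E||h_i||^2 / E||n_i||^2 is roughly snr
N = np.sqrt(noise_var / 2) * (rng.standard_normal((M, nu)) + 1j * rng.standard_normal((M, nu)))
Y = H + N

# Estimate a q-dimensional signal subspace U from the sample covariance of the observations
q = p
C_hat = (Y @ Y.conj().T) / nu
eigval, eigvec = np.linalg.eigh(C_hat)
U = eigvec[:, -q:]                         # top-q eigenvectors (M x q, U^H U = I)

# epsilon-efficiency check: sum_i ||P_U(h_i)||^2 >= (1 - eps) * sum_i ||h_i||^2
P_U_H = U @ (U.conj().T @ H)
efficiency = np.linalg.norm(P_U_H) ** 2 / np.linalg.norm(H) ** 2
print(f"captured energy fraction: {efficiency:.4f}")              # should be close to 1
```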

Multiple Measurement Vector (MMV) Observations
H = [h_1, h_2, ..., h_ν] = [a(θ_1), a(θ_2), ..., a(θ_G)] Γ := A_{M×G} Γ_{G×ν},
where A is a dictionary of array responses on a grid of G angles and the coefficient matrix Γ is row-sparse: only p out of the G grid elements are active.
[Figure: the array and user geometry; the active rows of Γ correspond to the p angles of arrival.]

Atomic-norm Denoising for MMV [4, 5, 6]
The atomic norm can be used to estimate the channel vectors.
Atomic-norm Regularizer for MMV
Underlying dictionary D = { a(θ) γ^H : θ ∈ [−θ_max, θ_max], γ ∈ C^ν, ‖γ‖_2 = 1 }.
Spatial-sparsity-promoting, channel-variation-ignoring regularizer for the channel matrix H ∈ C^{M×ν}:
‖H‖_D = inf { Σ_i λ_i : λ_i > 0, ∃ θ_i, γ_i s.t. Σ_i λ_i a(θ_i) γ_i^H = H }.
Atomic-norm Denoising for MMV
We have the matrix of sketches X = [x_1, x_2, ..., x_ν], where x_i = B(h_i + n_i).
The channel vectors are estimated via
Ĥ = argmin_H ‖H‖_D  s.t.  ‖X − BH‖_F ≤ δ.
[4] Venkat Chandrasekaran et al. "The convex geometry of linear inverse problems." Foundations of Computational Mathematics 12.6 (2012), pp. 805-849.
[5] Badri Narayan Bhaskar, Gongguo Tang, and Benjamin Recht. "Atomic norm denoising with applications to line spectral estimation." IEEE Transactions on Signal Processing 61.23 (2013), pp. 5987-5999.
[6] Yuanxin Li and Yuejie Chi. "Off-the-Grid Line Spectrum Denoising and Estimation with Multiple Measurement Vectors." arXiv preprint arXiv:1408.2242 (2014).
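The off-grid program above is typically solved via a semidefinite reformulation; as a simpler, self-contained stand-in, the sketch below solves an on-grid relaxation of the same MMV problem, min_Γ 0.5‖X − BAΓ‖_F² + λ Σ_g ‖Γ_{g,:}‖_2, by proximal gradient with row-wise soft thresholding (the grid size, λ, the step count, and the noise level are illustrative assumptions):

```python
import numpy as np

def array_response(theta, M, theta_max=np.pi / 3):
    return np.exp(1j * np.arange(M) * np.pi * np.sin(theta) / np.sin(theta_max))

rng = np.random.default_rng(3)
M, m, nu, p, G = 64, 24, 100, 3, 181       # antennas, sketch size, snapshots, scatterers, grid size

grid = np.linspace(-np.pi / 3, np.pi / 3, G)
A = np.stack([array_response(t, M) for t in grid], axis=1)        # M x G grid dictionary

# Ground truth: p active grid angles, arbitrary temporal variation in each active row
active = np.sort(rng.choice(G, p, replace=False))
Gamma_true = np.zeros((G, nu), dtype=complex)
Gamma_true[active] = rng.standard_normal((p, nu)) + 1j * rng.standard_normal((p, nu))
H = A @ Gamma_true

B = np.eye(M)[rng.choice(M, m, replace=False)]                    # antenna-selection sketch (m x M)
X = B @ H + 0.05 * (rng.standard_normal((m, nu)) + 1j * rng.standard_normal((m, nu)))

# Proximal gradient for  min_Gamma 0.5 * ||X - B A Gamma||_F^2 + lam * sum_g ||Gamma[g, :]||_2
Phi = B @ A
step = 1.0 / np.linalg.norm(Phi, 2) ** 2
lam = 1.0
Gamma = np.zeros((G, nu), dtype=complex)
for _ in range(300):
    Z = Gamma - step * Phi.conj().T @ (Phi @ Gamma - X)           # gradient step on the data fit
    norms = np.maximum(np.linalg.norm(Z, axis=1, keepdims=True), 1e-12)
    Gamma = np.maximum(1.0 - step * lam / norms, 0.0) * Z         # row-wise soft thresholding

row_energy = np.linalg.norm(Gamma, axis=1)
print("true active grid rows: ", active)
print("largest rows by energy:", np.sort(np.argsort(row_energy)[-p:]))
```

The row-wise ℓ2 penalty is the gridded analogue of the atomic-norm regularizer: it promotes a small number of active angles while leaving the temporal variation within each active row unpenalized.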

4 Exploiting Spatio-Temporal Sparsity

Spatio-Temporal Correlations and Sparse Scattering
First Extreme: Fast-varying Channel Vectors
Although the channel vector varies randomly with time, its underlying subspace remains invariant.
This subspace information depends on C_h = Σ_{i=1}^p σ_i² a(θ_i) a(θ_i)^H, which encodes the local geometry of the user given by {(σ_i², θ_i)}_{i=1}^p.
[Figure: successive realizations h_1, h_2, h_3 scattered inside a fixed low-dimensional subspace.]

Spatio-Temporal Correlations and Sparse Scattering
Second Extreme: Slowly-varying Channel Vectors
Now consider C_h = Σ_{i=1}^p σ_i² a(θ_i) a(θ_i)^H, and suppose h_i is slowly varying.
In the extreme case we have h_i = h for all i.
[Figure: successive realizations h_1, h_2, h_3 collapsing onto a single direction.]

Spatio-Temporal Correlations and Sparse Scattering
Observations
For the process {h_i}_{i=1}^t with covariance matrix C_h = Σ_{i=1}^p σ_i² a(θ_i) a(θ_i)^H, as long as the local geometry remains invariant:
In the fast-varying regime, the process lies on a p-dimensional subspace.
In the slowly-varying regime, the process lies on a 1-dimensional subspace.
Depending on the channel dynamics, the effective dimension q is between 1 and p.
There is a nice sparse subspace structure that can be exploited!
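A small simulation of this observation (the AR(1) model for the scattering coefficients and all parameter values are assumptions made for illustration, not part of the talk): the faster the coefficients decorrelate, the more of the p available dimensions the windowed process actually occupies.

```python
import numpy as np

def array_response(theta, M, theta_max=np.pi / 3):
    return np.exp(1j * np.arange(M) * np.pi * np.sin(theta) / np.sin(theta_max))

rng = np.random.default_rng(4)
M, p, nu = 64, 5, 300
A = np.stack([array_response(t, M) for t in rng.uniform(-np.pi / 3, np.pi / 3, p)], axis=1)

def effective_dim(alpha, energy=0.95):
    """Fraction-of-energy rank of a window of channels whose coefficients follow
    an AR(1) model w[t] = alpha * w[t-1] + sqrt(1 - alpha^2) * innovation."""
    w = (rng.standard_normal(p) + 1j * rng.standard_normal(p)) / np.sqrt(2)
    H = np.empty((M, nu), dtype=complex)
    for t in range(nu):
        innov = (rng.standard_normal(p) + 1j * rng.standard_normal(p)) / np.sqrt(2)
        w = alpha * w + np.sqrt(1 - alpha ** 2) * innov
        H[:, t] = A @ w
    s = np.linalg.svd(H, compute_uv=False) ** 2
    return int(np.searchsorted(np.cumsum(s) / s.sum(), energy) + 1)

print("fast-varying   (alpha = 0.0):    q =", effective_dim(0.0))       # close to p
print("slowly-varying (alpha = 0.9999): q =", effective_dim(0.9999))    # much smaller than p
```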

Spatio-Temporal Correlations and Sparse Scattering
Observations
Fast-varying channel: the coefficient matrix Γ ∈ C^{G×ν} has p active rows, each varying arbitrarily across its ν entries.
Slowly-varying channel: Γ has the same p active rows, but each row is (nearly) constant across its entries.
Open Problem I
What is a good MMV algorithm that is indifferent to the temporal behavior?

Notes
We identified an underlying signal-subspace sparsity that can be exploited on top of the one-shot sparsity to boost the system performance.
Subspace estimation can be done with ν ≈ 50 samples for SNR ≈ 0-10 dB.
Low-complexity algorithms exist that exploit the spatial sparsity, are robust to the channel dynamics, and extract the signal subspace quite efficiently.

5 Instances of Other Interesting Problems

Antenna Configuration in Massive MIMO
Figure: Some possible configurations of a massive MIMO base station [7].
[7] Erik G. Larsson et al. "Massive MIMO for next generation wireless systems." IEEE Communications Magazine 52.2 (2014), pp. 186-195.

Antenna Configuration in Massive MIMO
Question
Q1: What is a suitable antenna arrangement?

Antenna Configuration in Massive MIMO
Two Examples
A linear array gives a Toeplitz covariance matrix: with [a(κ)]_n = e^{jκ_y d_y (n−1)}, a covariance C built from atoms a(κ) a(κ)^H is Toeplitz.

A rectangular array gives a block-Toeplitz covariance matrix: with [a(κ)]_{m,n} = e^{j(κ_y d_y (n−1) + κ_x d_x (m−1))}, a covariance C built from atoms a(κ) a(κ)^H is block-Toeplitz.
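A quick numerical check of the linear-array case (illustrative values; κ plays the role of the spatial frequency and d_y = 1 is assumed): a covariance built from ULA atoms is constant along its diagonals, i.e., Toeplitz.

```python
import numpy as np

rng = np.random.default_rng(5)
M, p = 16, 4

def ula_atom(kappa, M):
    # [a(kappa)]_n = exp(j * kappa * (n - 1)), n = 1, ..., M  (d_y = 1)
    return np.exp(1j * kappa * np.arange(M))

kappas = rng.uniform(-np.pi, np.pi, p)
powers = rng.uniform(0.5, 2.0, p)

# C = sum_i sigma_i^2 a(kappa_i) a(kappa_i)^H
C = sum(s * np.outer(ula_atom(k, M), ula_atom(k, M).conj()) for s, k in zip(powers, kappas))

# Toeplitz check: every diagonal of C is constant
is_toeplitz = all(np.allclose(np.diag(C, k), np.diag(C, k)[0]) for k in range(-M + 1, M))
print("Toeplitz:", is_toeplitz)    # True: C depends only on the antenna index difference
```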

Bias-Variance Trade-off
More structure: efficient channel estimation algorithms, less variance, but more bias.
Less structure: more degrees of freedom, less bias, but more variance.
Open Problem II
What is the best configuration?

FDD Channel Feedback Problem
The user feeds back the downlink channel to the base station.
[Figure: base-station array with elements at 0, d, 2d, 3d, 4d, ..., (M−1)d and the user, with the downlink training (1) and feedback (2) exchange.]

FDD Channel Feedback Problem
With M ≫ 1, feedback is too time-consuming.
Question
Is there a way to estimate the downlink channel from the uplink channel?
General channel model (uplink):
a_{f1}(θ) = [1, e^{j(2π/c) f1 d sin(θ)}, ..., e^{j(2π/c)(M−1) f1 d sin(θ)}]^T,
h_up = Σ_{i=1}^p w_i a_{f1}(θ_i) + ∫ ρ(dθ) a_{f1}(θ) + n,  with w_i ~ CN(0, σ_i²) and n ~ CN(0, σ_n² I_M).
h is a parametric stochastic process over frequency, with parameters {w_i, θ_i}_{i=1}^p.
Good news: M ≫ p.
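A generator for this uplink model that makes the two components explicit (the carrier frequency, antenna spacing, powers, and the Monte-Carlo approximation of the integral ∫ ρ(dθ) a_{f1}(θ) are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(6)
c = 3e8                               # speed of light [m/s]
f1 = 2.5e9                            # uplink carrier frequency [Hz] (illustrative)
M, d = 64, c / (2 * f1)               # antennas, half-wavelength spacing
p = 4                                 # specular paths with fixed angles

def a_f(theta, f):
    # [a_f(theta)]_k = exp(j * 2*pi/c * f * k * d * sin(theta)), k = 0, ..., M-1
    return np.exp(1j * 2 * np.pi / c * f * d * np.arange(M) * np.sin(theta))

# Specular part: sum_i w_i a_f1(theta_i), with w_i ~ CN(0, sigma_i^2)
thetas = rng.uniform(-np.pi / 3, np.pi / 3, p)
sigma2 = rng.uniform(0.5, 2.0, p)
w = np.sqrt(sigma2 / 2) * (rng.standard_normal(p) + 1j * rng.standard_normal(p))
h_spec = sum(w[i] * a_f(thetas[i], f1) for i in range(p))

# Diffuse part: crude Monte-Carlo approximation of the integral over rho(d theta)
n_diffuse, rho_power = 200, 0.1
diffuse_thetas = rng.uniform(-np.pi / 3, np.pi / 3, n_diffuse)
g = np.sqrt(rho_power / (2 * n_diffuse)) * (rng.standard_normal(n_diffuse) + 1j * rng.standard_normal(n_diffuse))
h_diff = sum(g[i] * a_f(diffuse_thetas[i], f1) for i in range(n_diffuse))

noise = np.sqrt(0.01 / 2) * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
h_up = h_spec + h_diff + noise
print(np.linalg.norm(h_spec) ** 2, np.linalg.norm(h_diff) ** 2)   # specular part dominates here
```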

A Possible Approach: Wold's Decomposition Theorem
Wold's Decomposition Theorem
Any zero-mean WSS process {x_f ; f ∈ Z} can be uniquely decomposed as
x_f = Σ_{i=0}^∞ b_i ε_{f−i} + d_f,
where b_0 = 1 and Σ_{i=1}^∞ |b_i|² < ∞, ε_f is a white-noise process, {d_f ; f ∈ Z} is a deterministic process, and E{d_f ε_{f′}} = 0 for all f, f′.
Definition
A stationary process {d_f : f ∈ Z} is called deterministic if its current value d_f can be predicted from its entire past (d_{f−1}, d_{f−2}, ...).

FDD Channel Feedback Problem
h_up = Σ_{i=1}^p w_i a_{f1}(θ_i) [deterministic process] + ∫ ρ(dθ) a_{f1}(θ) + n [innovation process].
The deterministic process may contain a substantial amount of the energy.
Open Problem III
Is there a way to extract the deterministic component?

Questions?