Asymptotic Capacity Bounds for Magnetic Recording
Raman Venkataramani, Seagate Technology (joint work with Dieter Arnold)


Outline
Problem statement. Signal and noise models for magnetic recording. Capacity bounds: additive-noise-dominated channels; jitter-noise-dominated channels.

Problem Statement
How does the capacity of the system scale with the bit width $T$? What is the optimum bit width from a purely information-theoretic viewpoint?

Signal and Noise Models
Signal: the magnetization in the medium is either $+1$ or $-1$. Noise: electronic noise and transition noise.

Signal and Noise Models
$x(t) = \sum_k a_k\, p(t - kT)$, where $p(t) = u(t) - u(t - T)$
$r(t) = \sum_k \frac{a_k - a_{k-1}}{2}\, h_T(t - kT - t_k) + w(t)$
$a_k \in \{+1, -1\}$; $t_k$: transition noise (jitter); $w(t)$: electronic noise, AWGN with PSD $N_0/2$; $h_T(t)$: transition ($-1 \to +1$) response.
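
To make the model concrete, here is a minimal Python sketch of the read-back waveform under these equations. The function name, the default jitter and noise levels, and the convention $a_{-1} = a_0$ are my own choices; the transition response $h_T$ is passed in as a vectorized function (concrete forms appear two slides below).

```python
import numpy as np

def simulate_readback(h_T, T=1.0, n_bits=64, sigma_j=0.05, N0=0.1,
                      oversample=16, rng=None):
    """Simulate r(t) per the slide's model (all defaults are placeholders)."""
    rng = np.random.default_rng(0) if rng is None else rng
    dt = T / oversample
    t = np.arange(-4 * T, (n_bits + 4) * T, dt)        # time grid
    a = rng.choice([-1.0, 1.0], size=n_bits)           # magnetization bits a_k
    a_prev = np.concatenate(([a[0]], a[:-1]))          # convention: a_{-1} = a_0
    t_k = rng.normal(0.0, sigma_j, size=n_bits)        # transition jitter t_k
    r = np.zeros_like(t)
    for k in range(n_bits):                            # sum over transitions
        r += 0.5 * (a[k] - a_prev[k]) * h_T(t - k * T - t_k[k])
    # electronic noise: AWGN with PSD N0/2 has variance N0/(2*dt) per sample
    r += rng.normal(0.0, np.sqrt(N0 / (2 * dt)), size=t.shape)
    return t, r
```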

Recording Modes
[Figures: magnetization profiles for longitudinal and perpendicular recording.]

Transition Response
Longitudinal recording: $h_T(t) = \dfrac{A}{1 + (2t/w)^2}$
Perpendicular recording: $h_T(t) = A\, \mathrm{erf}\!\left(\dfrac{2\sqrt{\log 2}\, t}{w}\right)$
The pulse width $w$ is sometimes also written as PW50.
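
The two responses translate directly into code. This sketch assumes the $\sqrt{\log 2}$ placement reconstructed above, which makes $w$ the half-amplitude width (PW50) of the associated pulse in both modes.

```python
import numpy as np
from scipy.special import erf

def h_T_longitudinal(t, A=1.0, w=1.0):
    """Lorentzian transition response; its half-amplitude width is w (PW50)."""
    return A / (1.0 + (2.0 * t / w) ** 2)

def h_T_perpendicular(t, A=1.0, w=1.0):
    """Error-function transition response; the derivative pulse has
    half-amplitude width w (PW50)."""
    return A * erf(2.0 * np.sqrt(np.log(2.0)) * t / w)
```

Either function can be passed as h_T to simulate_readback in the earlier sketch, e.g. `t, r = simulate_readback(h_T_perpendicular)`.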

Capacity
The fundamental quantity of interest is the areal capacity of the magnetic medium. In the Shannon sense, this is the number of bits per unit area that can be stored and recovered reliably. Assuming a fixed track width, we can focus on the linear density of bits, or linear capacity:
$C = \lim_{\tau \to \infty} \frac{1}{\tau} \max I(\{x(t), t \in [0, \tau]\};\, \{r(t), t \in [0, \tau]\})$
where the maximum is over distributions of the input $x(t)$.

Problem Statement
What is the optimal bit width $T$ for maximum capacity per unit length?
Misconceptions: very small bit widths are bad because there is higher transition noise, and there is longer ISI and lower SNR per bit.
Truth: we can overcome all these problems with clever coding.

Thought Experiment
Case 1: no code, $T = 1$, $R = 1$. Case 2: repetition code, $T = 0.5$, $R = 0.5$. If we use a better code, case 2 will outperform case 1.

[Figure: achievable region of code rate versus transition probability, bounded above by the binary entropy function, with the repetition-code line below it.]
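
A sketch of the two curves in this figure. The binary entropy curve $h(q)$ is the entropy rate of a first-order Markov source with transition probability $q$; the straight line $R = 2q$ is my reading of the repetition-code curve (a rate-$1/n$ repetition code with i.u.d. data produces transitions with probability $1/2n$).

```python
import numpy as np

def binary_entropy(q):
    """h(q) in bits, with the 0*log(0) = 0 convention."""
    q = np.clip(q, 1e-12, 1.0 - 1e-12)
    return -q * np.log2(q) - (1.0 - q) * np.log2(1.0 - q)

q = np.linspace(0.0, 0.5, 101)          # transition probability
rate_upper = binary_entropy(q)          # boundary of the achievable region
rate_repetition = 2.0 * q               # assumed repetition-code line R = 2q
```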

A Note about the SNR
We define $\mathrm{SNR} = \dfrac{E_i w^2}{N_0}$, where $E_i = \displaystyle\int \left|\frac{dh_T(t)}{dt}\right|^2 dt$.
Reasons for this definition: it is finite for both perpendicular and longitudinal modes; it is a function of the head and medium alone; it is independent of $T$ or the code we use.

Electronic Noise Dominated Channel
$x(t) = \sum_k a_k\, p(t - kT)$, $p(t) = u(t) - u(t - T)$
$r(t) = \sum_k (a_k - a_{k-1})\, h_T(t - kT) + w(t) = \sum_k a_k\, h_D(t - kT) + w(t)$
$h_D(t) = h_T(t) - h_T(t - T)$: dibit response; $w(t)$: electronic noise, AWGN with PSD $N_0/2$.

Sufficient Statistics
To be exact, we need to pass $r(t)$ through a matched filter and sample at the baud rate. If $T \ll w$, it is almost optimal to use a simple low-pass filter and sample at the baud rate.

Discrete-time Model
We obtain an ISI channel with binary inputs:
$r_n = \sum_k a_k\, \tilde h_{n-k} + w_n$, with $w_n \sim N(0, N_0/2T)$ i.i.d., where $\tilde h_n \triangleq h_D(nT)$.
Let the above ISI channel have capacity $C_B(T)$. Then the linear capacity is $K_B(T) = \frac{1}{T} C_B(T)$.
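
A short sketch of this discretization. The function names and the symmetric truncation length L are my own placeholder choices.

```python
import numpy as np

def dibit_taps(h_T, T, L=10):
    """Sampled dibit response h~_n = h_D(nT) = h_T(nT) - h_T(nT - T)."""
    n = np.arange(-L, L + 1)        # symmetric truncation (placeholder L)
    return h_T(n * T) - h_T(n * T - T)

def simulate_isi(a, taps, T, N0=0.1, rng=None):
    """r_n = sum_k a_k h~_{n-k} + w_n with w_n ~ N(0, N0/2T)."""
    rng = np.random.default_rng(0) if rng is None else rng
    r = np.convolve(a, taps, mode="same")
    return r + rng.normal(0.0, np.sqrt(N0 / (2.0 * T)), size=r.shape)
```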

Simple Upper Bound
Suppose we relax the input constraint: $\{a_k = \pm 1\} \to \{a_k \in \mathbb{R},\ E[a_k^2] \le 1\}$.
We obtain a Gaussian channel, whose capacity is computed using the water-filling method:
$C_B(T) \le C_G(T)$, where
$C_G(T) = \max_{S_x} \int_{-0.5}^{0.5} \frac{1}{2} \log\!\left(1 + \frac{S_x[\nu]\, |H[\nu]|^2}{N_0/2T}\right) d\nu$ such that $\int_{-0.5}^{0.5} S_x[\nu]\, d\nu = 1$.
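
A sketch of the water-filling computation on a frequency grid; the bisection over the water level $\theta$ enforces the unit-power constraint. Function and parameter names are my own; taps can come from dibit_taps above.

```python
import numpy as np

def gaussian_capacity(taps, T, N0, n_freq=2048):
    """Water-filling upper bound C_G(T) in bits per channel use."""
    nu = np.linspace(-0.5, 0.5, n_freq, endpoint=False)
    n = np.arange(len(taps))
    H2 = np.abs(np.exp(-2j * np.pi * np.outer(nu, n)) @ np.asarray(taps)) ** 2
    inv_snr = (N0 / (2.0 * T)) / np.maximum(H2, 1e-30)   # noise-to-gain ratio
    # bisection on the water level theta so that int S_x dnu = 1
    lo, hi = inv_snr.min(), inv_snr.min() + 1.0
    while np.mean(np.maximum(hi - inv_snr, 0.0)) < 1.0:  # grow upper bracket
        hi *= 2.0
    for _ in range(60):
        theta = 0.5 * (lo + hi)
        if np.mean(np.maximum(theta - inv_snr, 0.0)) < 1.0:
            lo = theta
        else:
            hi = theta
    S_x = np.maximum(theta - inv_snr, 0.0)               # water-filling spectrum
    return np.mean(0.5 * np.log2(1.0 + S_x / inv_snr))
```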

Lower Bounds
Several researchers have computed information rates for binary ISI channels. Monte Carlo method: compute the quantity $I(x^N; r^N)/N$ for a very long sequence of length $N$, with the input generated using an appropriate Markov model. This method becomes unreliable or computationally intensive at high recording densities.
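
A minimal sketch of this Monte Carlo estimate for i.u.d. inputs: $\log p(r^N \mid x^N)$ comes directly from the noise likelihood, and $\log p(r^N)$ from a forward sum-product recursion over the $2^m$ trellis states. The implementation details (names, defaults, the i.u.d. restriction) are mine.

```python
import numpy as np
from itertools import product

def info_rate_iud(taps, sigma2, N=50_000, rng=None):
    """Estimate I(x^N; r^N)/N in bits/use for i.u.d. binary input.
    Assumes short memory m = len(taps) - 1 >= 1 (2^m trellis states)."""
    rng = np.random.default_rng(0) if rng is None else rng
    taps = np.asarray(taps, dtype=float)
    m = len(taps) - 1
    x = rng.choice([-1.0, 1.0], size=N + m)
    clean = np.convolve(x, taps, mode="valid")      # noiseless output, length N
    r = clean + rng.normal(0.0, np.sqrt(sigma2), size=N)

    def loglik(rn, mean):                           # log N(rn; mean, sigma2)
        return -0.5 * np.log(2 * np.pi * sigma2) - (rn - mean) ** 2 / (2 * sigma2)

    log_p_r_given_x = loglik(r, clean).sum()

    # forward recursion: state = last m input bits, (oldest, ..., newest)
    states = list(product([-1.0, 1.0], repeat=m))
    idx = {s: j for j, s in enumerate(states)}
    trans = [(idx[s], idx[s[1:] + (b,)],
              taps[0] * b + sum(taps[i] * s[m - i] for i in range(1, m + 1)))
             for s in states for b in (-1.0, 1.0)]
    alpha = np.full(len(states), 1.0 / len(states))
    log_p_r = 0.0
    for rn in r:
        nxt = np.zeros(len(states))
        for j, j2, mean in trans:                   # 0.5 = i.u.d. input prob
            nxt[j2] += alpha[j] * 0.5 * np.exp(loglik(rn, mean))
        z = nxt.sum()
        log_p_r += np.log(z)                        # accumulate normalizer
        alpha = nxt / z
    return (log_p_r_given_x - log_p_r) / (N * np.log(2.0))
```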

Shamai-Laroia Conjecture
Along any horizontal line (i.e., at a fixed rate), the SNR loss due to ISI is the same for both Gaussian and binary signalling.

Shamai-Laroia Conjecture
Suppose that the input to the binary ISI channel is independent and uniformly distributed (i.u.d.). Then
$C_B(T) \ge C_{\mathrm{BPSK}}\!\left(2^{2 I_G(T)} - 1\right)$
where $I_G(T)$ is the i.i.d. Gaussian information rate. Remark: this bound is also found to be surprisingly tight for non-i.u.d. Markov sources up to memory 6, and we have
$C_B(T) \approx C_{\mathrm{BPSK}}\!\left(2^{2 C_G(T)} - 1\right)$.
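
A sketch of the scalar BPSK capacity needed to evaluate this bound, via Gauss-Hermite quadrature. Here $2^{2 I_G} - 1$ is the SNR of an ISI-free Gaussian channel whose capacity equals $I_G$; the names are mine, and $I_G$ is a placeholder argument (it could come from the water-filling or Monte Carlo sketches above).

```python
import numpy as np

def c_bpsk(snr, n_nodes=64):
    """Capacity (bits) of y = sqrt(snr)*x + n, x in {-1,+1}, n ~ N(0,1)."""
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    n = np.sqrt(2.0) * nodes                    # Gauss-Hermite -> E over N(0,1)
    integrand = np.log2(1.0 + np.exp(-2.0 * snr - 2.0 * np.sqrt(snr) * n))
    return 1.0 - (weights / np.sqrt(np.pi)) @ integrand

def shamai_laroia_bound(I_G):
    """C_BPSK(2^{2 I_G} - 1), the Shamai-Laroia style lower bound."""
    return c_bpsk(2.0 ** (2.0 * I_G) - 1.0)
```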

Optimal Input Distribution
For Gaussian inputs, the optimal input spectrum is not flat; it becomes highly non-i.i.d. as $T \to 0$. For binary inputs, we could mimic the Gaussian water-filling spectrum, but it is neither optimal nor achievable in general. For the binary case, we optimize over first-order Markov sources to get a lower bound.

Markov Source Model
The first-order model is specified by one probability parameter $p$. For each $T$ we pick the optimal probability to maximize the Shamai-Laroia bound. As $T \to 0$, we find that $p \to 0$.
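
One way to carry out this optimization, sketched under my own assumptions: a binary first-order Markov source with transition probability $p$ has power spectrum $S_x(\nu) = (1 - g^2)/(1 - 2g\cos 2\pi\nu + g^2)$ with $g = 1 - 2p$; the Gaussian information rate is evaluated with that spectrum, and $p$ is found by grid search. This reuses shamai_laroia_bound from the previous sketch, and taps can come from dibit_taps above.

```python
import numpy as np

def markov_gaussian_rate(taps, T, N0, p, n_freq=2048):
    """Gaussian information rate with the Markov input spectrum S_x(nu)."""
    nu = np.linspace(-0.5, 0.5, n_freq, endpoint=False)
    n = np.arange(len(taps))
    H2 = np.abs(np.exp(-2j * np.pi * np.outer(nu, n)) @ np.asarray(taps)) ** 2
    g = 1.0 - 2.0 * p                           # input correlation g = 1 - 2p
    S_x = (1.0 - g ** 2) / (1.0 - 2.0 * g * np.cos(2.0 * np.pi * nu) + g ** 2)
    return np.mean(0.5 * np.log2(1.0 + S_x * H2 / (N0 / (2.0 * T))))

def best_p(taps, T, N0, grid=np.linspace(0.01, 0.5, 50)):
    """Grid search for the p that maximizes the Shamai-Laroia style bound."""
    scores = [shamai_laroia_bound(markov_gaussian_rate(taps, T, N0, p))
              for p in grid]
    return grid[int(np.argmax(scores))]
```

As a sanity check, $p = 0.5$ gives $g = 0$ and $S_x \equiv 1$, recovering the i.u.d. Gaussian rate.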

[Figure: linear capacity bounds for longitudinal recording. Surface plot of bits/use/$T$ versus $T$ and SNR for three inputs: Gauss water-filling, MS1 optimized, and MS0 i.u.d.]

[Figure: linear capacity bounds for perpendicular recording. Surface plot of bits/use/$T$ versus $T$ and SNR for the same three inputs: Gauss water-filling, MS1 optimized, and MS0 i.u.d.]

Jitter Noise Dominated Channel
Most general signaling method: pulse-width modulation.
$r(t) = \sum_n (-1)^n\, h_T(t - \sigma_n - w_n)$, where $\sigma_n = \sum_{k=1}^n t_k$ and $w_n \sim N(0, \sigma_j^2)$.

Capacity
Since there is no additive noise, we can use a zero-forcing equalizer to estimate $\hat\sigma_n = \sigma_n + w_n$. Thus, $\hat t_n = \hat\sigma_n - \hat\sigma_{n-1} = t_n + w_n - w_{n-1}$. These are sufficient statistics, and the channel has ISI!
Linear capacity: $C = \max_{p(t^N)} \dfrac{I(\hat T^N; T^N)}{E\left[\sum_n T_n\right]}$
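
A tiny sketch of these sufficient statistics. The exponential spacing law is only a placeholder (the capacity expression optimizes over $p(t^N)$); the point is the MA(1) structure of the noise $w_n - w_{n-1}$, which is what makes the channel ISI.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma_j = 1000, 0.1
t_n = rng.exponential(1.0, size=n)            # placeholder spacing law p(t)
w = rng.normal(0.0, sigma_j, size=n + 1)      # position jitter w_n ~ N(0, sigma_j^2)
t_hat = t_n + w[1:] - w[:-1]                  # t_hat_n = t_n + w_n - w_{n-1}
```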

Lower Bound on Capacity
We can show that the following is a lower bound on the linear capacity:
$K \ge \dfrac{2\sqrt{2} - 1}{2}\, K_0$
where $K_0$ is the capacity of the ISI-free channel $\hat t_n = t_n + w_n$:
$K_0 = \max_{p(t)} \dfrac{I(\hat T; T)}{E[T]}$
Remark: this constant is not important. The bound behaves like $1/\sigma_j$.

Summary
From a purely information-theoretic viewpoint, there are considerable gains at higher recording densities. This ignores timing recovery and other implementation issues. In today's recording medium, transition noise is a dominant noise source, with $\sigma_j \approx 0.1 T$. We should lower $T$ by a factor of 10 to see potential gains.