Neural coding. Ecological approach to sensory coding: efficient adaptation to the natural environment


1 Neural coding. Ecological approach to sensory coding: efficient adaptation to the natural environment. Jean-Pierre Nadal, CNRS & EHESS. Laboratoire de Physique Statistique (LPS, UMR 8550 CNRS / ENS / UPMC / Univ. Paris Diderot), Ecole Normale Supérieure (ENS), and Centre d'Analyse et de Mathématique Sociales (CAMS, UMR 8557 CNRS / EHESS), Ecole des Hautes Etudes en Sciences Sociales (EHESS).
2 Neural coding. Example: photoreceptor intensities (the data) are processed by the retina; with Principal Component Analysis (PCA) as the algorithm, the neural representation (the activities of the ganglion cells) is the projection onto the principal axes. General scheme: environment → stimulus → network → neural representation, i.e. data → algorithm → neural code. A stimulus θ = {θ_1, θ_2, ..., θ_N}, drawn from the environment distribution ρ(.), is passed through a signal filter to give the representation X = {X_1, X_2, ..., X_p}.
3 Ecological approach to sensory coding: efficient adaptation to the natural environment. Horace Barlow, 1961 (H. B. Barlow, "Possible principles underlying the transformation of sensory messages", in Sensory Communication, 1961). The efficient coding hypothesis: sensory processing in the brain should be adapted to natural stimuli; e.g. neurons in the visual (or auditory) system of a given animal should be optimized for coding images (or sounds) representative of those found in the natural environment of that animal. It has been shown that optimizing filters for coding natural images yields filters which resemble the receptive fields of simple cells in V1. In the auditory domain, optimizing a network for coding natural sounds yields filters which resemble the impulse responses of the cochlear filters found in the inner ear. Formalization: tools from information theory, statistical (Bayesian) inference, and parameter estimation.
4 Neural coding. PCA maximizes the variances of the projections and is well adapted to Gaussian-like distributions. More general: "infomax", maximize the mutual information I[stimuli; neural representation]. Same scheme as before: the stimulus θ = {θ_1, θ_2, ..., θ_N}, drawn from the environment distribution ρ(.), is filtered by the network W into the neural representation X = {X_1, X_2, ..., X_p}.
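The PCA step invoked on these slides is easy to sketch numerically. A minimal illustration, assuming NumPy; the data dimensions, mixing matrix, and random seed are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(42)
# Made-up "photoreceptor intensity" data: 500 samples of correlated 5-d signals
A = rng.normal(size=(5, 5))
data = rng.normal(size=(500, 5)) @ A.T

# PCA: project onto the leading eigenvectors of the covariance matrix
C = np.cov(data, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(C)   # eigenvalues in ascending order
top2 = eigvecs[:, ::-1][:, :2]         # the two principal axes
representation = data @ top2           # the "neural representation"
print(representation.shape)
```

The first component captures the largest variance, the second the largest variance among directions orthogonal to it; infomax replaces this variance criterion by a mutual-information one.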
5 Information Theory (Shannon)
Entropy, Shannon information
Mutual information = output entropy − equivocation
o Capacity
o Infomax
Redundancy
o different types of redundancies
o minimum redundancy (H. Barlow, 1961) = Independent Component Analysis (ICA)
Parameter estimation
o Cramér-Rao inequality
o Fisher information
6 Entropy (Shannon information): basic properties. Discrete case: probabilities p_k with Σ_k p_k = 1 (k = 1, ..., K); the entropy is H = − Σ_k p_k ln p_k ≤ ln K. Maximum entropy is reached for the equiprobable distribution p_k = 1/K, giving H = ln K. Binary case: 1 bit of information = 1 binary variable with equiprobable states (a fair coin); H = ln 2, so H / ln 2 = 1 bit. With the logarithm in base 2, information is measured in bits.
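These discrete-case formulas can be checked numerically. A minimal sketch, assuming NumPy; the alphabet size K = 8 is arbitrary:

```python
import numpy as np

def entropy(p, base=np.e):
    """Shannon entropy H = -sum_k p_k log p_k, with the convention 0 log 0 = 0."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)) / np.log(base))

K = 8
print(entropy(np.ones(K) / K))       # equiprobable case attains the maximum H = ln K
print(entropy([0.5, 0.5], base=2))   # fair coin: 1 bit
```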
7 Binary case: a binary random variable takes one value with probability f and the other with probability 1 − f. Then H = − f ln f − (1 − f) ln(1 − f). With logarithms in base 2 (information in bits): H_2(f) = − f log_2 f − (1 − f) log_2(1 − f), where log_2(x) = ln(x) / ln 2. Properties: H_2 ≤ 1, symmetry H_2(f) = H_2(1 − f), H_2(f = 0) = H_2(f = 1) = 0, and the maximum is at f = 1/2, where H_2 = 1 bit.
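The binary entropy curve and its symmetry can be evaluated directly; a small sketch assuming NumPy:

```python
import numpy as np

def h2(f):
    """Binary entropy in bits: H2(f) = -f log2 f - (1 - f) log2(1 - f)."""
    if f in (0.0, 1.0):
        return 0.0   # 0 log 0 = 0 by convention
    return float(-f * np.log2(f) - (1 - f) * np.log2(1 - f))

print(h2(0.5))            # maximum: 1 bit at f = 1/2
print(h2(0.1), h2(0.9))   # symmetry H2(f) = H2(1 - f)
```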
8 Entropy (Shannon information), continuous case: for x ∈ X with probability density ρ(x), the differential entropy is H = − ∫ ρ(x) ln ρ(x) dx.
9 Entropy (Shannon information): basic properties. Which distribution maximizes the (differential) entropy H = − ∫ ρ(x) ln ρ(x) dx? Among distributions ρ with support [a, b]: the uniform distribution on [a, b], ρ = 1/(b − a), with H = ln(b − a). Among distributions ρ with support ]−∞, +∞[ under the variance constraint <x²> = σ²: the Gaussian distribution ρ(x) = exp(−x²/(2σ²)) / √(2πσ²), with H = ½ ln(2πeσ²).
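The Gaussian closed form H = ½ ln(2πeσ²) can be verified by evaluating − ∫ ρ ln ρ on a grid. A rough numerical sketch, assuming NumPy; the grid bounds and resolution are arbitrary choices:

```python
import numpy as np

def gaussian_entropy_numeric(sigma, x_max=50.0, n=200001):
    """Grid evaluation of H = -∫ rho(x) ln rho(x) dx for a zero-mean Gaussian."""
    x = np.linspace(-x_max, x_max, n)
    rho = np.exp(-x**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
    m = rho > 0   # skip points where the density underflows to 0 (far tails)
    return float(np.sum(-rho[m] * np.log(rho[m])) * (x[1] - x[0]))

sigma = 2.0
print(gaussian_entropy_numeric(sigma))
print(0.5 * np.log(2 * np.pi * np.e * sigma**2))   # closed form, for comparison
```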
10 Entropy (Shannon information): simple examples. Uniform distribution, Gaussian distribution, multidimensional Gaussian distribution.
11 Mutual information. Environment → stimulus θ, drawn from ρ(.), → neural representation X = {X_1, X_2, ..., X_p}, with conditional distribution Q(X | θ).
I[θ, X] = [entropy of X] − [entropy of X given θ (the equivocation)]
= − ∫ Q(X) ln Q(X) d^p X + ∫∫ ρ(θ) Q(X | θ) ln Q(X | θ) d^p X d^N θ
= information that X carries about θ = information that θ carries about X
= H[X] + H[θ] − H[θ, X]
= ∫∫ P(θ, X) ln [ P(θ, X) / ( ρ(θ) Q(X) ) ] d^p X d^N θ,
i.e. the Kullback-Leibler divergence between the joint distribution and the product distribution. Here the output distribution (marginal distribution of X) is Q(X) = ∫ Q(X | θ) ρ(θ) d^N θ, and the joint distribution of X and θ is P(θ, X) = Q(X | θ) ρ(θ).
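These identities can be checked on the smallest possible example. A sketch assuming NumPy, with a made-up 2×2 joint table (θ uniform over two values, X equal to θ flipped with probability 0.1):

```python
import numpy as np

def mutual_information(P):
    """I[theta, X] from a joint table P[i, j] = P(theta_i, X_j): the
    Kullback-Leibler divergence between P and the product of its marginals."""
    rho = P.sum(axis=1, keepdims=True)   # marginal of theta
    Q = P.sum(axis=0, keepdims=True)     # marginal of X (output distribution)
    m = P > 0
    return float(np.sum(P[m] * np.log(P[m] / (rho @ Q)[m])))

# Made-up example: theta uniform over two values, X = theta flipped w.p. 0.1
P = np.array([[0.45, 0.05],
              [0.05, 0.45]])
I = mutual_information(P)

# Check the identity I = H[X] + H[theta] - H[theta, X]
H = lambda p: -np.sum(p[p > 0] * np.log(p[p > 0]))
print(I, H(P.sum(0)) + H(P.sum(1)) - H(P.ravel()))
```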
12 Mutual information = difference of entropies. Consider a large number p of objects, each of one of two types τ, with f = the probability of having an object of the first type, to be sorted into boxes σ ∈ {1, 2}. The same picture applies to classification, data analysis, signal processing, and encoding. The entropy (Shannon information) before sorting is H = p [ − f ln f − (1 − f) ln(1 − f) ]. If the sorting makes no error (a Maxwell's demon), the entropy of each box after sorting is H_1 = 0 and H_2 = 0, and the information gain = decrease in entropy is I = H − H_1 − H_2 = H.
13 Mutual information = difference of entropies (continued). Same setting: a large number p of objects of two types τ, f = probability of the first type, sorted into boxes σ ∈ {1, 2}. If the sorting is noisy and makes errors (a "drunk" Maxwell's demon), the boxes keep some residual entropy, H_1 > 0 and H_2 > 0, and the information gain = decrease in entropy is I = H − H_1 − H_2 = the mutual information between the type τ and the box number σ.
14 Basic properties of the mutual information. For any random variables X and Y: I(X, Y) ≥ 0, with I(X, Y) = 0 if and only if the two random variables are statistically independent. Mutual information = relative entropy (Kullback-Leibler divergence) between the joint and the factorized distributions. If X is discrete with K values: I(X, Y) ≤ H(X) ≤ ln K (similarly, if Y is discrete with M values, I(X, Y) ≤ H(Y) ≤ ln M). Data processing theorem: for a Markov chain S → X → Y → Z, I(S, Z) ≤ I(S, Y) ≤ I(X, Y); indeed I(X, Y) = I(S, Y) + I(X, Y | S) ≥ I(S, Y).
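The data processing theorem can be illustrated by composing two discrete channels. A sketch assuming NumPy; the alphabet sizes, source distribution, and random channels are invented for the example:

```python
import numpy as np

def mi(P):
    """Mutual information of a joint probability table P."""
    a = P.sum(1, keepdims=True)
    b = P.sum(0, keepdims=True)
    m = P > 0
    return float(np.sum(P[m] * np.log(P[m] / (a @ b)[m])))

rng = np.random.default_rng(0)
p_s = np.array([0.3, 0.7])               # source distribution of S
A = rng.dirichlet(np.ones(3), size=2)    # random channel S -> Y (rows sum to 1)
B = rng.dirichlet(np.ones(4), size=3)    # random channel Y -> Z
P_sy = p_s[:, None] * A                  # joint distribution of (S, Y)
P_sz = P_sy @ B                          # joint of (S, Z): further processing of Y
print(mi(P_sy), mi(P_sz))                # mi(P_sz) <= mi(P_sy), as the theorem says
```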
15 Mutual information. Environment → stimulus θ, drawn from ρ(.), → neural representation X = {X_1, X_2, ..., X_p}, with conditional distribution Q(X | θ). I[θ, X] ≥ 0, with I = 0 if and only if θ and X are statistically independent. Capacity of a transmission channel: for Q given, C = max_ρ I[θ, X]. Principle for optimal coding, "infomax": for ρ given (the environment), max_Q I[θ, X]. Redundancy between the outputs: R = I[X_1; X_2; ...; X_p] ≥ 0. Barlow's principle: minimize R = Σ_k I[θ, X_k] − I[θ, X] (which can be < 0).
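The capacity definition can be made concrete on the simplest channel: for a binary symmetric channel with flip probability ε (an example not in the slides), a brute-force search over input distributions ρ recovers the known value C = ln 2 − [ − ε ln ε − (1 − ε) ln(1 − ε) ] in nats. A sketch assuming NumPy; ε and the search grid are arbitrary:

```python
import numpy as np

def mi(P):
    """Mutual information of a joint probability table P."""
    a = P.sum(1, keepdims=True)
    b = P.sum(0, keepdims=True)
    m = P > 0
    return float(np.sum(P[m] * np.log(P[m] / (a @ b)[m])))

eps = 0.1
Q = np.array([[1 - eps, eps],
              [eps, 1 - eps]])           # Q(X | theta): rows sum to 1
# Capacity: with Q fixed, maximize I[theta, X] over input distributions rho = (f, 1-f)
fs = np.linspace(1e-6, 1 - 1e-6, 9999)
Is = [mi(np.array([f, 1 - f])[:, None] * Q) for f in fs]
C_numeric = max(Is)
C_exact = np.log(2) + eps * np.log(eps) + (1 - eps) * np.log(1 - eps)
print(C_numeric, C_exact)                # maximum attained at the uniform input, f = 1/2
```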
16 Shannon: communication theory. A message τ is encoded into a codeword of finite length, transmitted through the channel, and then decoded. One considers the largest number of codewords that can be decoded with a vanishing fraction of errors; the capacity is the corresponding maximal rate per channel use. Memoryless channel: the channel acts independently on each transmitted symbol; ρ = probability distribution of τ.
17 Stimulus S, output V. Mutual information I(V, S) = output entropy − equivocation, where the equivocation is the entropy of the output given the stimulus, averaged over the stimulus distribution. A useful particular case is additive noise, V = f(S) + noise, with the noise independent of S. What is the equivocation then? It is the noise entropy (hence independent of the input distribution), so I(V, S) = output entropy − noise entropy.
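In the fully Gaussian special case (Gaussian stimulus plus independent Gaussian noise), "output entropy minus noise entropy" reduces to the familiar ½ ln(1 + SNR). A sketch assuming NumPy; the variances are arbitrary:

```python
import numpy as np

def gaussian_h(var):
    """Differential entropy of a Gaussian: h = 0.5 * ln(2 pi e var)."""
    return 0.5 * np.log(2 * np.pi * np.e * var)

# V = S + noise, with S and the noise independent zero-mean Gaussians,
# so the output variance is var_S + var_N
var_S, var_N = 4.0, 1.0
I = gaussian_h(var_S + var_N) - gaussian_h(var_N)   # output entropy - noise entropy
print(I, 0.5 * np.log(1 + var_S / var_N))           # equals 0.5 ln(1 + SNR)
```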