Complex Systems Methods 3. Statistical complexity of temporal sequences


1 Complex Systems Methods 3. Statistical complexity of temporal sequences
Eckehard Olbrich, MPI MiS Leipzig
Potsdam, WS 2007/08

2 Overview

1. Summary: Kolmogorov sufficient statistic
2. Intuitive notions of complexity
3. Statistical complexity
   - Entropy convergence and excess entropy
   - Predictive information
4. Entropy estimation
   - Entropy of a discrete random variable: finite sample corrections

3 Summary: Kolmogorov sufficient statistic

Kolmogorov complexity:
K_U(x \mid l(x)) = \min_{p : U(p, l(x)) = x} l(p)

Kolmogorov structure function (x^n denotes a string of length n):
K_k(x^n \mid n) = \min \{ \log |S| : l(p) \le k,\; U(p, n) = S,\; x^n \in S \subseteq \{0, 1\}^n \}

Kolmogorov sufficient statistic: the least k such that
K_k(x^n \mid n) + k \le K(x^n \mid n) + c.

Regularities in x are described by describing the set S. Given S, the string x is random: algorithmic complexity measures randomness. But intuitively, a complexity measure should quantify structure, not randomness. Thus complexity is related to an ensemble and not to a single object.

4 Dynamical Systems

all continuous: partial differential equations (PDEs)
discretized space: coupled ordinary differential equations (ODEs)
discretized time: coupled map lattices (CMLs)
discretized state: cellular automata (CAs)

All of these dynamical systems can be either deterministic or stochastic.
Digital computer: finite state automaton, i.e. a finite number of discrete states.

5 Intuitive notions of complexity: Stephen Wolfram's classification of cellular automata

Elementary cellular automata: binary states {0, 1}, nearest-neighbour interaction. The rules can be coded by the numbers 0, ..., 255.

Class 1: Evolution leads to a homogeneous state.
Class 2: Evolution leads to a set of separated simple stable or periodic structures.
Class 3: Evolution leads to a chaotic pattern.
Class 4: Evolution leads to complex localized structures, sometimes long-lived.
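Not part of the original slides: a minimal Python sketch of the rule encoding, assuming periodic boundary conditions. The 8 bits of the rule number give the successor value for each of the 2^3 neighbourhood configurations, read as a 3-bit number.

import numpy as np

def ca_step(state, rule):
    # One synchronous update of an elementary CA with periodic boundaries.
    # Bit i of rule (0..255) is the next cell value for the neighbourhood
    # (left, centre, right) read as the 3-bit number i.
    table = np.array([(rule >> i) & 1 for i in range(8)], dtype=np.uint8)
    left, right = np.roll(state, 1), np.roll(state, -1)  # periodic neighbours
    return table[4 * left + 2 * state + right]           # neighbourhood code 0..7

# Example: evolve rule 110 (class 4) from a random initial row.
rng = np.random.default_rng(0)
row = rng.integers(0, 2, size=100, dtype=np.uint8)
history = [row]
for _ in range(200):
    row = ca_step(row, 110)
    history.append(row)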

6 Intuitive notions of complexity: Stephen Wolfram's classification of cellular automata

[Figure: three space-time plots over 200 time steps, showing a class 2, a class 4 and a class 3 cellular automaton.]

7 Intuitive notions of complexity: Stephen Wolfram's classification of cellular automata

Propagation of a single perturbation:

[Figure: difference patterns over 200 time steps for rule 14 (class 2, ordered), rule 110 (class 4, complex) and a class 3 rule (chaotic).]

8 Statistical complexity: excess entropy

K(x \mid l(x)): minimal length of a program which produces exactly the string x. Most complex strings are algorithmically random.

Kolmogorov sufficient statistic: a program that describes all regularities in x by computing the set S, which contains all strings with the same regularities as x but which are otherwise random. Algorithmic complexity can thus be divided into two parts: regularities and randomness.

The same idea applies to a time series (an infinite sequence) provided with a stationary distribution:

Randomness per symbol is given by the entropy rate
h = \lim_{n \to \infty} h_n, \qquad h_n = H(X_0 \mid X_{-1}, ..., X_{-n}).

Regularities are quantified by the excess entropy (Crutchfield) or effective measure complexity (Grassberger)
E = \lim_{n \to \infty} E_n, \qquad E_n = H_n - n h_n, \qquad H_n = H(X_n, ..., X_1).

9 Entropy convergence and excess entropy

The idea which originally led to this complexity measure was to use the convergence rate of the conditional entropy to quantify the complexity of a time series: fast convergence ⇒ short memory ⇒ low complexity; slow convergence ⇒ long memory ⇒ high complexity.

The conditional entropies
h_n = H(X_0 \mid X_{-1}, ..., X_{-n}) = H(X_0, X_{-1}, ..., X_{-n}) - H(X_{-1}, ..., X_{-n})
decrease monotonically. Their decay is quantified by the conditional mutual information
\delta h_n := h_{n-1} - h_n = MI(X_0 : X_{-n} \mid X_{-1}, ..., X_{-n+1}).

10 Entropy convergence and excess entropy

Excess entropy:
E_N = H(X_1, ..., X_N) - N h_N
    = \sum_{n=0}^{N-1} (h_n - h_N)                   (using h_0 = H(X_1))
    = \sum_{n=0}^{N-1} \sum_{k=n+1}^{N} \delta h_k   (using h_{n-1} = \delta h_n + h_n)
    = \sum_{k=1}^{N} k \, \delta h_k

The slower the convergence of the entropy, the larger the excess entropy.
If the sequence is Markov of order m, then \delta h_n = 0 for n > m; thus E = E_m.
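Not part of the original slides: the finite-N formula E_N = H_N - N h_N can be turned directly into a plug-in estimator from an observed symbol sequence. A minimal sketch in Python (illustrative only; for long blocks it runs into the undersampling problems discussed at the end of this lecture):

import numpy as np
from collections import Counter

def block_entropy(seq, n):
    # Plug-in estimate of the block entropy H_n = H(X_1, ..., X_n) in nats.
    counts = Counter(tuple(seq[i:i + n]) for i in range(len(seq) - n + 1))
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return -np.sum(p * np.log(p))

def excess_entropy(seq, N):
    # E_N = H_N - N * h_N with h_N = H_N - H_{N-1}.
    H_Nm1 = block_entropy(seq, N - 1)
    H_N = block_entropy(seq, N)
    return H_N - N * (H_N - H_Nm1)

# Example: a sequence of period 3 should give E close to log 3.
seq = np.tile([0, 1, 1], 1000)
print(excess_entropy(seq, 6))   # approximately log(3) = 1.0986 nats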

11 Predictive information

Bialek et al. (2001) proposed to quantify the complexity of a time series by the amount of information which the past tells us about the future,
I_pred = MI(past : future) = \lim_{n_p, n_f \to \infty} MI(X_{-n_p}, ..., X_0 : X_1, ..., X_{n_f}),
and called this the predictive information.

The predictive information is equal to the excess entropy if the corresponding limits exist: I_pred = E.

12 Predictive information

I_pred = \lim_{n_p, n_f \to \infty} MI(X_{-n_p}, ..., X_{-1}, X_0 : X_1, X_2, ..., X_{n_f})
       = \lim_{n_p, n_f \to \infty} \{ H(X_1, ..., X_{n_f}) + H(X_{-n_p}, ..., X_0) - H(X_{-n_p}, ..., X_0, ..., X_{n_f}) \}
       = \lim_{n_p, n_f \to \infty} \{ E_{n_f} + n_f h_{n_f} + E_{n_p+1} + (n_p + 1) h_{n_p+1} - E_{n_p+n_f+1} - (n_f + n_p + 1) h_{n_f+n_p+1} \}
       = E

The excess entropy measures the amount of information which is available from the past for predicting the time series.

13 Causal states

Causal equivalence: two past sequences \overleftarrow{x} = x_0, x_{-1}, ... and \overleftarrow{x}' = x'_0, x'_{-1}, ... are causally equivalent if they predict the same future, i.e. p(x_1 \mid \overleftarrow{x}) = p(x_1 \mid \overleftarrow{x}').

The equivalence classes are called causal states. This notion was introduced by James P. Crutchfield et al., who called the transition graph between the causal states an ɛ-machine. In general it seems to be a subclass of hidden Markov models.

Crutchfield and Young (1989) defined the statistical complexity C_\mu as the entropy of the stationary distribution over the causal states; in general C_\mu \ge E. Grassberger (1986) called this quantity the set complexity in the context of regular languages.

In general: the excess entropy provides a lower bound on the amount of information necessary for an optimal prediction.

14 Excess entropy: some examples

Sequence with period n: h_m = 0 for m \ge n, in particular h = 0. The block entropy H_m = \log n remains constant for m \ge n, thus E = \log n. All periodic sequences of the same period have the same complexity according to this measure. (Alternative: the transient information introduced by Crutchfield and Feldman.)

Markov chain: h = h_1 and E = \delta h_1 = h_0 - h_1 = MI(X_{n+1} : X_n). (See the worked example below.)

Chaotic maps: usually exponential convergence of h_n ⇒ finite E.

Feigenbaum point: h = 0, but h_n \sim 1/n, thus E diverges.
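Not part of the original slides: for a binary Markov chain the Markov-chain example can be evaluated in closed form. A small Python sketch, with hypothetical transition probabilities chosen only for illustration:

import numpy as np

def H2(p):
    # Entropy of a Bernoulli(p) variable in nats.
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def binary_markov_excess(a, b):
    # h_0, h_1 and E = h_0 - h_1 for a binary Markov chain with
    # transition probabilities P(1|0) = a and P(0|1) = b.
    pi1 = a / (a + b)                      # stationary probability of state 1
    h0 = H2(pi1)                           # H(X_n) under the stationary distribution
    h1 = (1 - pi1) * H2(a) + pi1 * H2(b)   # entropy rate h = h_1
    return h0, h1, h0 - h1                 # E = MI(X_{n+1} : X_n)

print(binary_markov_excess(0.1, 0.3))      # persistent chain: E > 0
# For an i.i.d. sequence (a = 1 - b) the excess entropy vanishes.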

15 Complexity for the period doubling route to chaos

[Figure from Crutchfield and Young, "Computation at the onset of chaos".]

16 Excess entropy and attractor dimension

Deterministic system with continuous state observables: the block entropy H_m(\epsilon) = H(X_1, ..., X_m; \epsilon) scales with the resolution \epsilon as

m < D:  H_m(\epsilon) = H_m^c - m \log \epsilon + O(\epsilon)
m > D:  H_m(\epsilon) = const - D \log \epsilon + O(\epsilon)

with H_m^c the differential block entropy and D the attractor dimension. Because h = h_KS does not depend on \epsilon, we have E \approx -D \log \epsilon: the better one knows the initial conditions of the system, the better the system can be predicted.

17 References

Peter Grassberger, Toward a quantitative theory of self-generated complexity, International Journal of Theoretical Physics 25 (1986) 907-938.

James P. Crutchfield and David P. Feldman, Regularities unseen, randomness observed: Levels of entropy convergence, Chaos 13 (2003) 25-54.

William Bialek, Ilya Nemenman and Naftali Tishby, Predictability, complexity, and learning, Neural Computation 13 (2001) 2409-2463.

Remo Badii and Antonio Politi, Complexity: Hierarchical structures and scaling in physics, Cambridge University Press, 1997.

18 Entropy of a discrete random variable: finite sample corrections

Reference: P. Grassberger, Entropy estimates from insufficient samplings, arXiv:physics/0307138.

N data points are randomly and independently distributed over M boxes. The number n_i of points in box i is a random variable with expectation value z_i := E[n_i] = p_i N. Its distribution is binomial,
P(n_i; p_i, N) = \binom{N}{n_i} p_i^{n_i} (1 - p_i)^{N - n_i}.
For p_i \ll 1 for all i, the n_i can be assumed to be Poisson distributed,
P(n_i; z_i) = \frac{z_i^{n_i}}{n_i!} e^{-z_i}.
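Not part of the original slides: the quality of the Poisson approximation is easy to check numerically. A minimal sketch, assuming scipy and with N and z chosen arbitrarily for illustration:

import numpy as np
from scipy.stats import binom, poisson

N, z = 10_000, 3.0          # one box with occupation probability p_i = z / N
n = np.arange(15)
# Maximum absolute difference between binomial and Poisson probabilities:
print(np.max(np.abs(binom.pmf(n, N, z / N) - poisson.pmf(n, z))))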

19 Entropy of a discrete random variable: finite sample corrections

Entropy:
H = -\sum_{i=1}^{M} p_i \log p_i = \ln N - \frac{1}{N} \sum_{i=1}^{M} z_i \ln z_i

Naive estimator:
\hat{H}_{naive} = \ln N - \frac{1}{N} \sum_{i=1}^{M} n_i \ln n_i

In general the estimator is biased, i.e. \Delta H := E[\hat{H}] - H \neq 0; for the naive estimator, \Delta H_{naive} < 0.
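Not part of the original slides: the negative bias of the naive estimator is visible already in a short simulation. A sketch, assuming a uniform distribution p_i = 1/M purely for illustration:

import numpy as np

rng = np.random.default_rng(1)
M, N, trials = 100, 200, 2000
H_true = np.log(M)                       # entropy of the uniform distribution

estimates = []
for _ in range(trials):
    n = np.bincount(rng.integers(0, M, size=N), minlength=M)
    n = n[n > 0]                         # empty boxes contribute nothing
    estimates.append(np.log(N) - np.sum(n * np.log(n)) / N)

# The average estimate lies systematically below the true value:
print(H_true, np.mean(estimates))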

20 Entropy of a discrete random variable: finite sample corrections

In the limit of large N and M, each contribution z_i \ln z_i is statistically independent and can be estimated as a function of n_i:
\widehat{z_i \ln z_i} = n_i \phi(n_i), \qquad E[\widehat{z_i \ln z_i}] = \sum_{n_i = 1}^{\infty} n_i \phi(n_i) P(n_i; z_i).

Implicit assumption: n_i = 0 gives no information about p_i.

Estimator:
\hat{H}_\phi = \ln N - \frac{1}{N} \sum_{i=1}^{M} n_i \phi(n_i)

21 Finite sample corrections: Grassberger's result

Grassberger's result:
E[n \psi(n)] = z \ln z + z E_1(z)
with the digamma function
\psi(x) = \frac{d \ln \Gamma(x)}{dx}
and the exponential integral
E_1(x) = \Gamma(0, x) = \int_1^{\infty} \frac{e^{-xt}}{t} \, dt.

For large z, z E_1(z) \approx e^{-z}; neglecting this term gives the estimator
\hat{H}_\psi = \ln N - \frac{1}{N} \sum_{i=1}^{M} n_i \psi(n_i).

22 Finite sample corrections: comparison with other results

Approximations of the digamma function lead to known estimators:
1. \psi(x) \approx \ln x: naive estimator
2. \psi(x) \approx \ln x - 1/(2x): Miller correction
3. \psi(x) < \ln x - 1/(2x) < \ln x leads to \hat{H}_\psi > \hat{H}_{Miller} > \hat{H}_{naive}

Grassberger's best estimator:
\hat{H}_G = \ln N - \frac{1}{N} \sum_{i=1}^{M} n_i G_{n_i}
with
G_n = \psi(n) + (-1)^n \sum_{k=0}^{\infty} \frac{1}{(n + 2k)(n + 2k + 1)}
or, equivalently,
G_1 = -\gamma - \ln 2, \quad G_2 = 2 - \gamma - \ln 2, \quad G_{2n+1} = G_{2n}, \quad G_{2n+2} = G_{2n} + \frac{2}{2n + 1} \quad (n \ge 1).
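Not part of the original slides: the recursion for G_n makes the estimators straightforward to implement. A sketch in Python, assuming scipy is available for the digamma function:

import numpy as np
from scipy.special import digamma

def G_values(n_max):
    # G_n via the recursion G_1 = -gamma - ln 2, G_2 = 2 - gamma - ln 2,
    # G_{2n+1} = G_{2n}, G_{2n+2} = G_{2n} + 2/(2n+1).
    g = np.empty(n_max + 1)
    g[1] = -np.euler_gamma - np.log(2)
    if n_max >= 2:
        g[2] = 2 - np.euler_gamma - np.log(2)
    for n in range(3, n_max + 1):
        g[n] = g[n - 1] if n % 2 == 1 else g[n - 2] + 2.0 / (n - 1)
    return g

def entropy_estimates(counts):
    # Naive, Miller, psi and Grassberger estimates (in nats) from box counts n_i.
    n = np.asarray([c for c in counts if c > 0], dtype=float)
    N = n.sum()
    est = lambda phi: np.log(N) - np.sum(n * phi) / N
    G = G_values(int(n.max()))
    return {
        "naive":       est(np.log(n)),
        "Miller":      est(np.log(n) - 1.0 / (2 * n)),
        "psi":         est(digamma(n)),
        "Grassberger": est(G[n.astype(int)]),
    }

# Example: 100 boxes, 200 samples from a uniform distribution (true H = log 100).
rng = np.random.default_rng(2)
print(entropy_estimates(np.bincount(rng.integers(0, 100, size=200))))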

23 Finite sample corrections: comparison between different corrections

[Figure: \phi(n) as a function of n for the Grassberger G_n, \psi(n) + (-1)^n/(n(n+1)), \psi(n), \log(n) - 1/(2n) and \log(n).]

Note that
\phi(n) = \psi(n) + \frac{(-1)^n}{n(n + 1)}
corresponds to an approximation of
G_n = \psi(n) + (-1)^n \sum_{k=0}^{\infty} \frac{1}{(n + 2k)(n + 2k + 1)}
in which only the k = 0 term of the sum is kept.

24 Test: entropy of a Gaussian distribution

[Figure: H(\epsilon) + \log(\epsilon) as a function of the bin size \epsilon for the naive estimator (N = 10^4 and N = 10^6), the Miller correction, \psi(n) and \psi(n) + (-1)^n/(n(n+1)), compared with the limit value \frac{1}{2}(1 + \log(2\pi)).]

Differential entropy of a Gaussian distribution:
H_C^{Gauss} = \frac{1}{2} (1 + \log(2\pi\sigma^2)) = H(\epsilon) + \log(\epsilon) + O(\epsilon)
