Vertex Routing Models and Polyhomeostatic Optimization. Claudius Gros, Institute for Theoretical Physics, Goethe University Frankfurt, Germany

1 Vertex Routing Models and Polyhomeostatic Optimization. Claudius Gros, Institute for Theoretical Physics, Goethe University Frankfurt, Germany

2 Topics in complex system theory. Vertex routing models: modelling conserved information flow [Marković & Gros, NJP 2009]. Polyhomeostatic optimization: a new paradigm for adaptive dynamical systems [Marković & Gros, PRL 2010].

3 Vertex routing models: motivations. Criticality in dynamical systems; information routing in networks; cognitive processing via transient-state dynamics. Criticality in dynamical systems: conserved quantity? polynomial scaling? (cf. the K = 2 Kauffman network)

4 Random Boolean networks (Kauffman networks / NK-networks). N: network size; K: in-connectivity; random Boolean functions; the dynamics settles into fixpoints and cycles. [Figure: example network with N = 3, K = 2 and vertices carrying random AND/OR functions.] [Luque & Solé]
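
As an illustration of this quenched dynamics, here is a minimal Python sketch (not part of the original slides) that builds a random N = 3, K = 2 Kauffman network and enumerates its fixpoints and cycles by exhaustive search; the function names and the random construction are my own choices.

```python
# Minimal sketch (not from the slides): a quenched random Boolean (Kauffman NK)
# network with synchronous updates; attractors found by exhaustive enumeration.
import itertools
import random

def random_nk_network(N, K, seed=0):
    rng = random.Random(seed)
    inputs = [rng.sample(range(N), K) for _ in range(N)]                     # K inputs per node
    tables = [[rng.randint(0, 1) for _ in range(2 ** K)] for _ in range(N)]  # random Boolean functions
    return inputs, tables

def step(state, inputs, tables):
    new = []
    for inp, tab in zip(inputs, tables):
        idx = sum(state[j] << k for k, j in enumerate(inp))   # index into the truth table
        new.append(tab[idx])
    return tuple(new)

def cycle_states(state, inputs, tables, length):
    states = []
    for _ in range(length):                                   # collect the states on the cycle
        states.append(state)
        state = step(state, inputs, tables)
    return frozenset(states)

def attractors(N, K, seed=0):
    inputs, tables = random_nk_network(N, K, seed)
    found = set()
    for start in itertools.product([0, 1], repeat=N):         # all 2^N initial states
        seen, state = {}, start
        while state not in seen:
            seen[state] = len(seen)
            state = step(state, inputs, tables)
        length = len(seen) - seen[state]                      # length of the attractor reached
        found.add(cycle_states(state, inputs, tables, length))
    return found

print(len(attractors(N=3, K=2)), "attractors (fixpoints and cycles) for N=3, K=2")
```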

5 Phase transitions in Boolean networks. [Figure: phase diagram in the (p, K) plane showing an ORDER region at small K and a CHAOS region at large K; labels p = 0.6, 0.79, 0.9.] K: in-connectivity; p: magnetization. [Luque & Solé]
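
The boundary in this diagram is not spelled out on the slide; a common closed form is the annealed-approximation criterion 2 K p (1 − p) = 1 (Derrida & Pomeau), which reproduces the critical K = 2 Kauffman network at p = 0.5. A minimal sketch, assuming that criterion:

```python
# Hedged sketch: the order/chaos boundary of the (p, K) diagram is commonly written as
# the annealed-approximation criterion 2 K p (1 - p) = 1 (Derrida & Pomeau); the slide
# does not spell this formula out, so treating the plotted boundary as this curve is
# an assumption of mine.
def critical_K(p):
    """Critical in-connectivity K_c for Boolean functions with bias p."""
    return 1.0 / (2.0 * p * (1.0 - p))

for p in (0.5, 0.6, 0.79, 0.9):
    print(f"p = {p:.2f}  ->  K_c = {critical_K(p):.2f}")
# p = 0.5 gives K_c = 2, the critical K = 2 Kauffman network of the previous slide.
```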

6 Life at the edge of chaos. Competition: daily survival vs. evolutionary fitness; gene regulation networks, the basis of all living organisms. [Figure: the same (p, K) phase diagram with ORDER and CHAOS regions.] Frozen (regular) phase: deterministic dynamics, good for daily survival, bad for evolutionary adaption. Chaotic phase: irregular dynamics, bad for daily survival, good for evolutionary adaption. [Kauffman 1969]

7 Critical Boolean networks: scaling at criticality (K = 2)? Number of attractors/cycles in gene regulation networks. Kauffman 1969: ~ √N (cell differentiation). Samuelsson & Troein 2003: grows faster than O(N^p) for any p. ... and for other critical dynamical systems (diffusive, conserved)?

8 Information routing vs. transmission. Information transmission: vertex → vertex; phase space: the number of vertices. Information routing: incoming link → outgoing link; phase space: the number of directed links. [Figure: schematic contrasting transmission and routing.]

9 Routing dynamics. Random routing tables at every vertex; quenched dynamics (fixed routing tables). [Figure: four-vertex example with its routing tables.] Information centrality: the number of (possibly overlapping) cyclic attractors passing through a given vertex.
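
A possible way to play with this quenched routing dynamics is sketched below; the precise construction of the routing tables (in particular the exclusion of immediate backtracking) is my reading of the slide, not necessarily the model of Marković & Gros.

```python
# Minimal sketch of a quenched vertex routing model on a fully connected graph. My
# reading of the slide: the dynamical state is the currently active directed link
# (i -> j); a fixed random routing table at vertex j sends each incoming link to one
# outgoing link j -> k. Excluding immediate backtracking (k != i) is my assumption.
import random

def routing_model(N, seed=0):
    links = [(i, j) for i in range(N) for j in range(N) if i != j]   # N(N-1) directed links
    rng = random.Random(seed)
    table = {}
    for (i, j) in links:                                             # quenched routing tables
        table[(i, j)] = (j, rng.choice([k for k in range(N) if k not in (i, j)]))
    return links, table

def cyclic_attractors(links, table):
    found = set()
    for start in links:                                              # every link as initial condition
        seen, state = set(), start
        while state not in seen:
            seen.add(state)
            state = table[state]
        cycle, s = [], state                                         # 'state' lies on the cycle
        while True:
            cycle.append(s)
            s = table[s]
            if s == state:
                break
        found.add(frozenset(cycle))
    return found

N = 20
links, table = routing_model(N)
cycles = cyclic_attractors(links, table)
centrality = [sum(any(v in (a, b) for (a, b) in cyc) for cyc in cycles) for v in range(N)]
print(len(cycles), "cyclic attractors; information centrality per vertex:", centrality)
```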

10 Information centrality: numerical simulations, fully connected graphs. [Figure: distribution I(c, N) of the information centrality c for several network sizes N.] Democratic distribution of the information centrality. [Marković & Gros, NJP 2009]

11 Cycle-length distribution: numerical simulations, fully connected graphs. [Figure: distribution N_l(L, N) of cycle lengths L for several network sizes N, together with the median µ_a(N) (half of the cycles above, half below); µ_a(N) grows with N.]

12 Scaling of the mean cycle length: analytical & numerical, fully connected graphs. [Figure: scaling with N of the standard deviation σ_qm ~ N^0.95, the mean ~ N^0.81 and the median ~ N^0.51.] Non-trivial exponent: ⟨L⟩ ~ N^0.81, where ⟨L⟩ is the average cycle length. [Schuelein, Marković & Gros, in preparation]

13 Criticality in vertex routing models: non-trivial distribution of (cyclic) attractors. Mean ~ N^0.81, median ~ N^0.51, so mean/median = N^0.81 / N^0.51 = N^0.3; σ/mean = N^0.95 / N^0.81 = N^0.14. [Figure: cycle-length distributions N_l(L, N) with fat tails.] Fat tails, scale invariance. Fully connected graphs: scale-invariant. Erdős-Rényi graphs: work in progress.

14 Criticality in complex systems. Thermodynamic systems: [Figure: 2D Ising configurations in the frozen, critical and chaotic regimes.] Critical thermodynamic systems are scale invariant. And critical dynamical systems? NK-networks: no? (phase space Ω ~ 2^N). Vertex routing models: yes? (phase space Ω = N(N−1)).

15 Polyhomeostatic optimization. Homeostasis: »keep in balance« a single scalar quantity, e.g. blood-sugar level, hormonal levels, body temperature, airplane velocity, furnace temperature. Polyhomeostasis: »keep in relative balance« multiple scalar quantities.

16 Allocation problems. Time allocation: individual target distribution functions, e.g. 80% working, 20% socializing. Polyhomeostatic games: the goal is to achieve the target distribution function.

17 Firing-rate distributions: allocation of neural activities. [Figure: firing rate vs. time, with a target distribution; »maximal information transmission«.] Shannon (information) entropy: H[p] = −∫ dy p(y) log p(y), for a firing-rate distribution p(y) with ∫ dy p(y) = 1.
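
A small numerical illustration (mine, not the slides'): estimating H[p] from samples of a firing-rate distribution with a simple histogram.

```python
# Small numerical illustration (not from the slides): estimating the Shannon entropy
# H[p] = -∫ dy p(y) log p(y) from samples of a firing-rate distribution via a histogram.
import numpy as np

def entropy_from_samples(samples, bins=50):
    p, edges = np.histogram(samples, bins=bins, density=True)   # p(y) on a grid
    dy = np.diff(edges)
    mask = p > 0
    return -np.sum(p[mask] * np.log(p[mask]) * dy[mask])        # Riemann sum of -p log p

rng = np.random.default_rng(0)
print("uniform on [0,1]:", entropy_from_samples(rng.uniform(0.0, 1.0, 100_000)))   # close to 0
print("exponential, µ = 0.28:", entropy_from_samples(rng.exponential(0.28, 100_000)))
```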

18 Maximal information distribution: maximizing the Shannon entropy H[p]. No constraints: p(y) = const. Given mean (energy constraint): p_µ(y) ∝ exp(−y/µ), with µ = ∫ y p(y) dy. This exponential is the target firing-rate distribution p_µ(y) (polyhomeostasis). Kullback-Leibler divergence: D(p, p_µ) = ∫ p(y) log[p(y)/p_µ(y)] dy, an asymmetric measure for the distance between two probability distribution functions.
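
And a matching sketch for the Kullback-Leibler divergence to the exponential target p_µ(y) = exp(−y/µ)/µ; the histogram estimator and the bin count are arbitrary choices of mine.

```python
# Matching sketch for the Kullback-Leibler divergence D(p, p_µ) between an empirical
# firing-rate distribution and the exponential target p_µ(y) = exp(-y/µ)/µ; the
# histogram estimator and bin count are arbitrary choices of mine.
import numpy as np

def kl_to_exponential(samples, mu, bins=50):
    p, edges = np.histogram(samples, bins=bins, density=True)
    y = 0.5 * (edges[:-1] + edges[1:])                   # bin centres
    dy = np.diff(edges)
    q = np.exp(-y / mu) / mu                             # target p_µ(y)
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]) * dy[mask])

rng = np.random.default_rng(1)
print(kl_to_exponential(rng.exponential(0.28, 200_000), mu=0.28))   # close to zero
print(kl_to_exponential(rng.uniform(0.0, 1.0, 200_000), mu=0.28))   # clearly positive
```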

19 Intrinsic plasticity: distribution adaption of internal neural parameters. [Figure: a given input-strength distribution is mapped by the non-linear neural transfer function to the output firing-rate distribution p(y), which should approach the target p_µ(y).]

20 Stochastic adaption: minimization of the Kullback-Leibler divergence D_{a,b}(p, p_µ) = ∫ p(y) log[p(y)/p_µ(y)] dy with respect to the parameters of the sigmoidal transfer function y(t) = 1/(1 + e^(−a x(t−1) − b)) of a rate-encoding neuron (gain a, threshold b/a). Stochastic adaption rules: Δa ∝ 1/a + x(1 − (2+λ)y + λy²) and Δb ∝ 1 − (2+λ)y + λy². [Triesch 2005]
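
A minimal sketch of these adaption rules for a single neuron driven by Gaussian input (this also illustrates the feed-forward setting of the next slide); the learning rate, the input statistics and the identification λ = 1/µ for the exponential target are assumptions on my part, not taken from the slides.

```python
# Sketch of the stochastic adaption (intrinsic plasticity) rules for one rate-encoding
# neuron driven by Gaussian input. The learning rate eps, the input statistics and the
# choice λ = 1/µ for the exponential target are my assumptions, not given on the slide.
import numpy as np

def adapt(x_samples, mu=0.28, eps=0.01, a=1.0, b=-1.0):
    lam = 1.0 / mu                                       # assumed λ for target mean µ
    ys = []
    for x in x_samples:
        y = 1.0 / (1.0 + np.exp(-a * x - b))             # sigmoidal transfer function
        common = 1.0 - (2.0 + lam) * y + lam * y * y     # factor shared by both rules
        a += eps * (1.0 / a + x * common)                # Δa ∝ 1/a + x(1 - (2+λ)y + λy²)
        b += eps * common                                # Δb ∝ 1 - (2+λ)y + λy²
        ys.append(y)
    return a, b, np.array(ys)

rng = np.random.default_rng(2)
a, b, ys = adapt(rng.normal(0.0, 1.0, 50_000))
print(f"gain a = {a:.2f}, b/a = {b/a:.2f}, time-averaged output = {ys[-10_000:].mean():.3f}")
```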

21 Feed-forward polyhomeostasis: one neuron with adapting parameters a, b. [Figure: given input distribution p(x) and resulting output distribution p(y), compared with the target distribution for µ = 0.28.]

22 Autapse: self-coupled neuron. [Figure: time evolution of the internal parameters a(t), b(t) and of the output y(t).] Polyhomeostatic optimization induces continuous, self-contained neural activity: a limiting cycle. [Marković & Gros, PRL 2010]

23 Network of polyhomeostatic neurons: x_i(t) = Σ_{j≠i} w_ij y_j(t), with couplings w_ij = ±1/√(N−1) chosen randomly. [Figure: (A) average activities compared with the target µ = ⟨y(t)⟩; (B) neural activities y(t) over time.] Self-organized chaos; spontaneous intermittent bursting. [Marković & Gros, PRL 2010]
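
A sketch of the corresponding network simulation; the coupling normalization ±1/√(N−1), the synchronous update order, the learning rate and the network size are details I am filling in, not specified on the slide.

```python
# Sketch of a network of polyhomeostatic neurons: x_i = Σ_j w_ij y_j (previous outputs),
# sigmoidal transfer, and every neuron adapting its own (a_i, b_i) by the rules of the
# previous slide. Coupling signs ±1/sqrt(N-1), synchronous updates, learning rate and
# network size are details I am filling in; they are not specified on the slide.
import numpy as np

def simulate(N=100, T=20_000, mu=0.1, eps=0.01, seed=3):
    rng = np.random.default_rng(seed)
    lam = 1.0 / mu
    W = rng.choice([-1.0, 1.0], size=(N, N)) / np.sqrt(N - 1)   # random +/- couplings
    np.fill_diagonal(W, 0.0)                                    # no self-coupling
    a, b = np.ones(N), -np.ones(N)
    y = rng.uniform(0.0, 1.0, N)
    trace = np.empty((T, N))
    for t in range(T):
        x = W @ y                                               # membrane potentials
        y = 1.0 / (1.0 + np.exp(-a * x - b))                    # synchronous update
        common = 1.0 - (2.0 + lam) * y + lam * y * y
        a += eps * (1.0 / a + x * common)                       # polyhomeostatic adaption
        b += eps * common
        trace[t] = y
    return trace

trace = simulate()
print("time-averaged activities (target µ = 0.1):", np.round(trace[10_000:].mean(axis=0)[:5], 3), "...")
```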

24 Polyhomeostatic optimization: distribution of averaged neural activities. [Figure: network firing-rate distributions p(y) compared with the exponential targets for µ = 0.5 and µ = 0.28.] Polyhomeostatic adaption: a dynamical system with local adaption rules, adapting the time-averaged statistics of local activities; non-trivial phase diagram. [Marković & Gros, PRL 2010]

25 Phase diagram: magnitude of the average Kullback-Leibler divergence D as a function of the fraction of excitatory links and the target mean activity. D(intermittent bursting) > D(chaotic phase). [Figure: phase diagram from network simulations.]

26 Intermittent route to chaos (fully connected network, µ = 0.1). [Figure: time traces of the activities y_i(t), of the internal parameters a_i(t) and of the maximal Lyapunov exponent λ(t).] Activity: transient attractors, intermittent bursting. Internal parameters: polyhomeostatic adaption towards the threshold. Lyapunov exponent (global, maximal). Intermittency ≙ punctuated equilibrium (evolution).
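
The maximal Lyapunov exponent shown here can be estimated, for instance, with the standard two-trajectory (Benettin-style) renormalization method; the sketch below applies it to a frozen (non-adapting) version of the network map with arbitrary parameters and is not necessarily the procedure behind the λ(t) trace on the slide.

```python
# Generic two-trajectory (Benettin-style) estimate of the maximal Lyapunov exponent,
# applied here to a frozen (non-adapting) version of the network map with arbitrary
# parameters; this is a standard technique, not necessarily the one behind λ(t) above.
import numpy as np

def max_lyapunov(step, y0, T=5_000, d0=1e-8, seed=4):
    rng = np.random.default_rng(seed)
    pert = rng.standard_normal(y0.size)
    y, y_pert = y0.copy(), y0 + d0 * pert / np.linalg.norm(pert)
    lyap_sum = 0.0
    for _ in range(T):
        y, y_pert = step(y), step(y_pert)
        d = np.linalg.norm(y_pert - y)
        lyap_sum += np.log(d / d0)                 # local expansion rate
        y_pert = y + (d0 / d) * (y_pert - y)       # rescale the separation back to d0
    return lyap_sum / T

rng = np.random.default_rng(5)
N = 100
W = rng.choice([-1.0, 1.0], size=(N, N)) / np.sqrt(N - 1)
np.fill_diagonal(W, 0.0)

def frozen_map(y):                                 # fixed gain a = 4, threshold term b = -2
    return 1.0 / (1.0 + np.exp(-4.0 * (W @ y) + 2.0))

print("estimated maximal Lyapunov exponent:", max_lyapunov(frozen_map, rng.uniform(0.0, 1.0, N)))
```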

27 Concepts and models in complex system theory. Complex system theory is still an emerging field; many models and paradigms are yet to be formulated. Vertex routing models: critical dynamical networks, democratic information centrality, ... Polyhomeostatic optimization: neural networks / game theory / allocation problems, non-trivial autonomous dynamics, ...

28 Graduate-level textbook: Information theory and complexity; Phase transitions and self-organized criticality; Life at the edge of chaos and punctuated equilibrium; Cognitive system theory and diffusive emotional control. Second edition, fall 2010.
