Vertex Routing Models and Polyhomeostatic Optimization. Claudius Gros, Institute for Theoretical Physics, Goethe University Frankfurt, Germany


topics in complex system theory
- vertex routing models: modelling conserved information flow [Markovic & Gros, NJP '09]
- polyhomeostatic optimization: a new paradigm for adaptive dynamical systems [Markovic & Gros, PRL '10]

vertex routing models: motivations
- criticality in dynamical systems
- information routing in networks
- cognitive processing via transient-state dynamics

criticality in dynamical systems: conserved quantity, polynomial scaling? (compare the K = 2 Kauffman network)

random boolean networks
- Kauffman networks / NK-networks
- N: network size, K: in-connectivity
- random boolean functions
- fixpoints and cycles

[Figure: truth tables of random Boolean functions (AND/OR) and the resulting state-transition graph for an example network with N = 3, K = 2]

[Luque & Solé]
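As an illustration of the update dynamics just described, here is a minimal, self-contained sketch (not code from the talk) of a random N-K Boolean network with synchronous updates. The helper names (random_nk_network, step, attractors) are my own, and the brute-force enumeration of all 2^N states is only feasible for small N, such as the N = 3, K = 2 example above.

```python
# Minimal sketch: random N-K Boolean (Kauffman) network, synchronous updates,
# brute-force enumeration of its attractors (fixpoints and cycles).
import itertools
import random

def random_nk_network(N, K, seed=0):
    """Choose K random inputs and a random Boolean function for each node."""
    rng = random.Random(seed)
    inputs = [rng.sample(range(N), K) for _ in range(N)]         # K input nodes per node
    tables = [[rng.randint(0, 1) for _ in range(2 ** K)] for _ in range(N)]
    return inputs, tables

def step(state, inputs, tables):
    """One synchronous update of the whole network state (tuple of 0/1)."""
    def output(i):
        idx = sum(state[j] << k for k, j in enumerate(inputs[i]))  # truth-table index
        return tables[i][idx]
    return tuple(output(i) for i in range(len(state)))

def attractors(N=3, K=2, seed=0):
    """Enumerate all fixpoints and cycles by following every initial state."""
    inputs, tables = random_nk_network(N, K, seed)
    cycles = set()
    for s in itertools.product((0, 1), repeat=N):                # all 2**N initial states
        trajectory = set()
        while s not in trajectory:
            trajectory.add(s)
            s = step(s, inputs, tables)
        cycle, t = [s], step(s, inputs, tables)                  # s lies on the reached cycle
        while t != s:
            cycle.append(t)
            t = step(t, inputs, tables)
        cycles.add(tuple(sorted(cycle)))                         # canonical representation
    return cycles

if __name__ == "__main__":
    for cyc in attractors(N=3, K=2):
        print(f"attractor of length {len(cyc)}: {cyc}")
```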

phase transitions in boolean networks

[Figure: phase diagram in the (p, K) plane, separating an ORDER region at low K from a CHAOS region at high K; curves labelled p = 0.9, 0.79, 0.6]

- K: in-connectivity
- p: magnetization

[Luque & Solé]
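The slide shows the order-chaos boundary only graphically. For orientation, the standard annealed-approximation result for random Boolean networks (the Derrida-Pomeau argument, used also in the Luque & Solé analysis) relates the critical in-connectivity to the bias p of the random Boolean functions; this formula is quoted from the general literature, not from the slide itself:

```latex
% Annealed approximation: critical in-connectivity K_c of a random Boolean
% network whose Boolean functions output 1 with probability p (the bias).
K_c \;=\; \frac{1}{2\,p\,(1-p)}\,,
\qquad K < K_c:\ \text{ordered (frozen) phase},
\qquad K > K_c:\ \text{chaotic phase}.
```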

life at the edge of chaos
- competition: daily survival vs. evolutionary fitness
- gene regulation networks: the basis of all living organisms

[Figure: same (p, K) phase diagram as before, with ORDER and CHAOS regions]

- frozen (regular) phase: deterministic dynamics; good for daily survival, bad for evolutionary adaptation
- chaotic phase: irregular dynamics; bad for daily survival, good for evolutionary adaptation

[Kauffman 69]

critical boolean networks
- scaling at criticality (K = 2)?
- number of attractors/cycles in gene regulation networks:
  - Kauffman 69: ~ √N (cell differentiation)
  - Samuelsson & Troein '03: grows faster than O(N^p), for any p
- ... and for other critical dynamical systems?

[Figure: state-transition graphs contrasting diffusive and conserved dynamics]

information routing vs. transmission

information transmission
- vertex → vertex
- phase space: number of vertices

information routing
- incoming link → outgoing link
- phase space: number of directed links

[Figure: network diagrams contrasting transmission (vertex-based) and routing (link-based) dynamics]

routing dynamics
- random routing tables at every vertex
- quenched dynamics (fixed routing tables)

[Figure: four-vertex example with a routing table attached to each vertex]

information centrality
- number of attractors passing through a given vertex
- overlapping cyclic attractors
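A minimal sketch of these routing dynamics, under assumptions of my own: a fully connected graph, with each vertex mapping every incoming link independently to a uniformly random outgoing link (backtracking not explicitly excluded). The function names are illustrative and details may differ from the model of the talk; the dynamics is a deterministic map on the N(N-1) directed links, so every orbit ends in a cyclic attractor.

```python
# Sketch: vertex routing model on a fully connected directed graph.
import random
from collections import defaultdict

def routing_tables(N, seed=0):
    """Quenched routing table: incoming link (j, i) -> outgoing link (i, k)."""
    rng = random.Random(seed)
    table = {}
    for i in range(N):
        out_links = [(i, k) for k in range(N) if k != i]
        for j in range(N):
            if j != i:
                table[(j, i)] = rng.choice(out_links)
    return table

def cyclic_attractors(table):
    """Follow every directed link until its orbit closes into a cycle."""
    cycles = set()
    for start in table:
        seen, link = set(), start
        while link not in seen:
            seen.add(link)
            link = table[link]
        cycle, l = [link], table[link]          # first revisited link lies on the cycle
        while l != link:
            cycle.append(l)
            l = table[l]
        cycles.add(tuple(sorted(cycle)))
    return cycles

def information_centrality(cycles, N):
    """Number of cyclic attractors passing through each vertex."""
    c = defaultdict(int)
    for cyc in cycles:
        for v in {i for (i, j) in cyc} | {j for (i, j) in cyc}:
            c[v] += 1
    return [c[v] for v in range(N)]

if __name__ == "__main__":
    N = 6
    cycles = cyclic_attractors(routing_tables(N))
    print("number of cyclic attractors:", len(cycles))
    print("cycle lengths:", sorted(len(c) for c in cycles))
    print("information centrality per vertex:", information_centrality(cycles, N))
```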

information centrality: numerical simulations
- fully connected graphs

[Figure: distribution I(c, N) of the information centrality c for several network sizes N; inset: scaling with 1/N]

[Markovic & Gros, NJP '09]

- democratic distribution of the information centrality

cycle-length distribution: numerical simulations
- fully connected graphs

[Figure: cycle-length distribution N_l(L, N) vs. L for several network sizes N; inset: median cycle length µ_a(N) vs. N]

- median µ_a(N) (half of the cycles above/below)
- µ_a(N) ~ √N

scaling of mean cycle length: analytical & numerical
- fully connected graphs

[Figure: log-log plot of cycle-length statistics vs. N; σ ~ N^0.95, mean ~ N^0.81, median ~ N^0.51]

[Schuelein, Markovic & Gros, in prep]

- non-trivial exponent: ⟨L⟩ ~ N^0.81
- ⟨L⟩: average cycle length

criticality in vertex routing models
- non-trivial distribution of (cyclic) attractors
- mean / median ~ N^0.81 / N^0.51 = N^0.30
- σ / mean ~ N^0.95 / N^0.81 = N^0.14

[Figure: cycle-length distribution N_l(L, N) for several network sizes N; fat tails]

scale invariance
- fully connected graphs: scale-invariant
- Erdős-Rényi graphs: work in progress

criticality in complex systems

thermodynamic systems
[Figure: 2D Ising model configurations in the frozen, critical, and chaotic regimes]
- thermodynamic systems are scale invariant and critical

and dynamical systems?
- NK-networks: no? (phase space Ω ~ 2^N)
- vertex routing models: yes? (phase space Ω ~ N(N−1))

polyhomeostatic optimization

homeostasis: »keep in balance«
- a single scalar quantity...
- blood-sugar level, hormonal levels, body temperature, ...
- airplane velocity, furnace temperature, ...

polyhomeostasis: »keep in relative balance«
- multiple scalar quantities

allocation problems

time allocation
- individual target distribution functions
- e.g. 80% working, 20% socializing

polyhomeostatic games
- goal: achieve the target distribution function

firing-rate distributions
- allocation of neural activities

[Figure: firing rate vs. time, together with the target firing-rate distribution]

- »maximal information transmission«
- Shannon (information) entropy: H[p] = − ∫ dy p(y) log p(y)
- firing-rate distribution p(y), with ∫ dy p(y) = 1

maximal information distribution

maximal Shannon entropy H[p]
- no constraints: p(y) = const.
- given mean (energy constraint): p_µ(y) ∝ exp(−y/µ), with µ = ∫ y p(y) dy

target firing-rate distribution p_µ(y) (polyhomeostasis)

Kullback-Leibler divergence
- D(p, p_µ) = ∫ dy p(y) log[ p(y) / p_µ(y) ]
- asymmetric measure for the distance between two probability distribution functions
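The step from the entropy-maximization principle to the exponential target can be made explicit with Lagrange multipliers for normalization and for the mean; this is a standard textbook derivation, sketched here for completeness and not spelled out on the slide:

```latex
% Maximize H[p] for y >= 0 under normalization and a fixed mean \mu:
\frac{\delta}{\delta p(y)}\Big[-\!\int\! dy\, p\ln p
   \;-\;\lambda_0\!\int\! dy\, p\;-\;\lambda_1\!\int\! dy\, y\, p\Big]=0
\;\Longrightarrow\;
p(y) = e^{-1-\lambda_0-\lambda_1 y}
\;\Longrightarrow\;
p_\mu(y)=\frac{1}{\mu}\,e^{-y/\mu},\qquad y\ge 0,
```

where the two multipliers are fixed by ∫ dy p_µ(y) = 1 and ∫ dy y p_µ(y) = µ.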

intrinsic plasticity
- adaptation of internal neural parameters

[Figure: input with a given input-strength distribution is mapped to the neural firing-rate output via a non-linear transfer function; the output distribution p(y) is compared to the target p_µ(y)]

stochastic adaptation
- minimization of the Kullback-Leibler divergence: D_{a,b}(p, p_µ) = ∫ dy p(y) log[ p(y) / p_µ(y) ]

rate-encoding neurons
- sigmoidal transfer function: y(t) = 1 / (1 + exp(−a x(t−1) − b))
- gain a, threshold b/a; x: input, y: output

stochastic adaptation rules
- Δa ∝ 1/a + x (1 − (2+λ) y + λ y²)
- Δb ∝ 1 − (2+λ) y + λ y²

[Triesch '05]
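A minimal numerical sketch of these stochastic adaptation rules for a single rate neuron. Assumptions of my own, not stated on the slide: λ = 1/µ (exponential target with mean µ, as in the Triesch-type derivation), i.i.d. Gaussian input x, and illustrative values for the learning rate and the number of updates.

```python
# Sketch: single rate neuron with stochastic intrinsic plasticity of gain and offset.
import math
import random

MU  = 0.28        # target mean activity (value used in the talk's examples)
LAM = 1.0 / MU    # assumption: lambda = 1/mu for an exponential target
EPS = 0.01        # learning rate (illustrative)

def transfer(x, a, b):
    return 1.0 / (1.0 + math.exp(-a * x - b))      # sigmoidal transfer, gain a, offset b

def adapt(x, y, a, b):
    common = 1.0 - (2.0 + LAM) * y + LAM * y * y   # factor shared by both rules
    a += EPS * (1.0 / a + x * common)              # gain update
    b += EPS * common                              # offset update
    return a, b

rng = random.Random(1)
a, b = 1.0, 0.0
ys = []
for t in range(200_000):
    x = rng.gauss(0.0, 1.0)                        # i.i.d. Gaussian input (assumption)
    y = transfer(x, a, b)
    a, b = adapt(x, y, a, b)
    ys.append(y)

print(f"adapted gain a = {a:.2f}, offset b = {b:.2f}")
print(f"mean firing rate = {sum(ys[-50_000:]) / 50_000:.3f}  (target mu = {MU})")
```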

feed-forward polyhomeostasis
- one neuron
- p(x): given input distribution
- p(y): output distribution

[Figure: input distribution p(x) and the resulting output distribution p(y), compared to the target with µ = 0.28]

autapse: self-coupled neuron

[Figure: time evolution of the internal parameters a(t), b(t) and of the output y(t)]

[Markovic & Gros, PRL '10]

- polyhomeostatic optimization induces continuous, self-contained neural activity
- limit cycle

network of polyhomeostatic neurons
- x_i(t) = Σ_{j≠i} w_ij y_j(t)
- w_ij = ±1/√(N−1), chosen randomly

[Figure: activity traces y(t) for average target activities µ = 0.28 and µ = 0.15]

- self-organized chaos
- spontaneous intermittent bursting

[Markovic & Gros, PRL '10]
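A compact sketch of the full network, combining the coupling x_i = Σ_{j≠i} w_ij y_j with per-neuron intrinsic adaptation as in the single-neuron sketch above. Network size, learning rate, initial conditions and number of steps are illustrative choices of mine, not the parameters used in the talk.

```python
# Sketch: recurrent network of polyhomeostatic neurons with random +/- couplings.
import numpy as np

N, MU, EPS, STEPS = 100, 0.28, 0.01, 20_000
LAM = 1.0 / MU                                    # assumption: exponential target, lambda = 1/mu

rng = np.random.default_rng(2)
W = rng.choice([-1.0, 1.0], size=(N, N)) / np.sqrt(N - 1)
np.fill_diagonal(W, 0.0)                          # no self-couplings

a = np.ones(N)
b = np.zeros(N)
y = rng.uniform(0.0, 1.0, N)                      # random initial activities

trace = []
for t in range(STEPS):
    x = W @ y                                     # inputs x_i = sum_j w_ij y_j
    y = 1.0 / (1.0 + np.exp(-a * x - b))          # sigmoidal transfer, per neuron
    common = 1.0 - (2.0 + LAM) * y + LAM * y**2   # factor shared by the adaptation rules
    a += EPS * (1.0 / a + x * common)             # intrinsic adaptation of gains ...
    b += EPS * common                             # ... and offsets
    trace.append(y.mean())

print("time-averaged network activity (last quarter):",
      round(float(np.mean(trace[-STEPS // 4:])), 3), " target mu =", MU)
```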

polyhomeostatic optimization
- distribution of averaged neural activities

[Figure: network firing-rate distributions p(y) compared to the targets with µ = 0.5 and µ = 0.28]

[Markovic & Gros, PRL '10]

polyhomeostatic adaptation
- dynamical system with local adaptation rules
- adapting the time-averaged statistics of local activities
- non-trivial phase diagram

phase diagram
- magnitude of the average Kullback-Leibler divergence D, as a function of the fraction of excitatory links and of the target mean activity
- D(intermittent bursting) > D(chaotic phase)

intermittent route to chaos

[Figure: time traces of the activities y_i(t), the gains a_i(t), and the maximal Lyapunov exponent λ(t); µ = 0.1, fully connected network]

- activity: transient attractors, intermittent bursting
- internal parameters: polyhomeostatic adaptation towards the threshold
- Lyapunov exponent (global, maximal)
- intermittency ≙ punctuated equilibrium (evolution)
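For readers who want to reproduce such a trace, here is a sketch of how the global maximal Lyapunov exponent can be estimated for the polyhomeostatic network of the earlier sketch, using the standard two-trajectory method with periodic renormalization of the separation (Benettin-style). This is my own construction, not code from the talk, and all parameters are illustrative.

```python
# Sketch: maximal Lyapunov exponent of the polyhomeostatic network via two
# nearby trajectories whose separation is renormalized after every step.
import numpy as np

N, MU, EPS, STEPS, D0 = 100, 0.1, 0.01, 20_000, 1e-8
LAM = 1.0 / MU

rng = np.random.default_rng(3)
W = rng.choice([-1.0, 1.0], size=(N, N)) / np.sqrt(N - 1)
np.fill_diagonal(W, 0.0)

def update(y, a, b):
    """One time step of the polyhomeostatic network (same rules as above)."""
    x = W @ y
    y = 1.0 / (1.0 + np.exp(-a * x - b))
    common = 1.0 - (2.0 + LAM) * y + LAM * y**2
    return y, a + EPS * (1.0 / a + x * common), b + EPS * common

def state_vec(y, a, b):
    return np.concatenate([y, a, b])

# reference trajectory and a slightly perturbed clone
y1, a1, b1 = rng.uniform(0, 1, N), np.ones(N), np.zeros(N)
y2, a2, b2 = y1 + D0 / np.sqrt(N), a1.copy(), b1.copy()

log_growth = 0.0
for t in range(STEPS):
    y1, a1, b1 = update(y1, a1, b1)
    y2, a2, b2 = update(y2, a2, b2)
    diff = state_vec(y2, a2, b2) - state_vec(y1, a1, b1)
    d = np.linalg.norm(diff)
    log_growth += np.log(d / D0)
    # renormalize the clone back to distance D0 along the current direction
    rescaled = state_vec(y1, a1, b1) + diff * (D0 / d)
    y2, a2, b2 = rescaled[:N], rescaled[N:2*N], rescaled[2*N:]

print("estimated maximal Lyapunov exponent:", log_growth / STEPS)
```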

concepts and models in complex system theory
- complex system theory: still an emerging field
- many models and paradigms yet to be formulated

vertex routing models
- critical dynamical networks
- democratic information centrality
- ...

polyhomeostatic optimization
- neural networks / game theory / allocation problems
- non-trivial autonomous dynamics
- ...

graduate-level textbook
- Information theory and complexity
- Phase transitions and self-organized criticality
- Life at the edge of chaos and punctuated equilibrium
- Cognitive system theory and diffusive emotional control

second edition: fall 2010