Basic Tools and Techniques

As discussed, the project is based on mental physics, which in turn is the application of Kantian metaphysics to the mind-brain. The approach follows Bacon's investigative method (Mental Physics, p. 41) and employs set-membership theory (SMT) [Combettes, 1992]. The essence of SMT is that a problem may have more than one mathematical solution (more than one satisfactory point), such that whichever one is picked makes no practical difference. The Slepian two-world model of facet-a and facet-b for principal quantities, discussed earlier, is an example of SMT. In addition to these, the tools for constructing the neural network are embedding field theory and the method of minimal anatomies. They are interlinked techniques and are discussed separately below.

Embedding Field Theory (EFT)

The theory of embedding fields, developed by Stephen Grossberg [Grossberg, 1969, 1971], is a general theory of neural networks. It is derived systematically as follows. Simple psychological postulates (familiar behavioral facts) are used to derive neural networks, which are then analyzed for their psychological consequences. The psychological predictions are in turn used to derive a learning theory within the theory of embedding fields. Mathematically, this means that EFT describes the neural network, or machine, as cross-correlated flows on signed (+ or excitatory, − or inhibitory) directed networks. The flows obey systems of non-linear differential equations (NLDE). Thus, deriving EFT from simple psychological postulates means deriving NLDE.
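Before turning to the derivation, the set-membership idea can be made concrete with a small numerical sketch. The sketch below is illustrative only and is not taken from the thesis or from [Combettes, 1992]: two convex constraint sets stand in for the "satisfactory" conditions, and alternating projections return some point of their intersection; by the SMT premise, which feasible point is returned makes no practical difference.

```python
import numpy as np

# Illustrative sketch (not from the thesis): in set-membership estimation,
# every point in the intersection of the constraint sets is an equally
# acceptable solution.  Two convex sets in R^2 are used here:
#   S1 = {x : ||x|| <= 1}          (unit disc)
#   S2 = {x : x[0] + x[1] >= 1}    (half-plane)

def project_disc(x, radius=1.0):
    """Project onto the unit disc S1."""
    n = np.linalg.norm(x)
    return x if n <= radius else x * (radius / n)

def project_halfplane(x, a=np.array([1.0, 1.0]), b=1.0):
    """Project onto the half-plane S2 = {x : a.x >= b}."""
    slack = a @ x - b
    return x if slack >= 0 else x - slack * a / (a @ a)

x = np.array([3.0, -2.0])            # arbitrary starting guess
for _ in range(200):                 # alternating projections (POCS)
    x = project_halfplane(project_disc(x))

print("a feasible point:", x)        # any point of S1 ∩ S2 would do
```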

Postulate-1: a letter is never decomposed into two parts.

Postulate-2: a large time spacing w between letter presentations yields behavior different from that of a small time spacing; that is, the times at which letters are presented to the network must be represented within it. A single state corresponds to the i-th letter, and the one-ness of the i-th letter is associated with one function. The network need not know the list of letters in advance, but as learning of the i-th letter begins the network knows the i-th letter. The functions x_i(t) and x_j(t) describe the relevance of the i-th and j-th letters at time t in the network.

Postulate-3: daily experiences manifest continuous aspects; hence all macroscopic theories are cast in a continuous setting, and the functions of time vary gradually in time.

Postulate-4: presentation of a letter evokes a perturbation within the system. If the time spacing w is large, the influence of the i-th letter on predicting the j-th letter wears off. If the i-th letter is not presented, nothing occurs in x_i(t), i.e., x_i(t) = 0; if the i-th letter is presented, x_i(t) is perturbed from 0; and when the i-th letter is no longer presented, x_i(t) returns to 0. Presentation of the i-th letter at its respective times t_1, t_2, ... thus corresponds to a sequence of perturbations of x_i(t).

Figure 4.1. Postulates 1 to 4 and the resulting derivations for the simple network. With each additional postulate the derived model acquires more capabilities. The meaning implications noted in this figure have objectively valid expression in mental physics.
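The equations in the original figure are not recoverable from the extracted text, but Postulates 1 to 4 already pin down the simplest linear, continuous law for a single state. A plausible reconstruction, following the form of Grossberg's published derivation (the decay rate α and the input term I_i are assumptions of this sketch), is:

```latex
% Simplest linear, continuous law consistent with Postulates 1-4:
% x_i decays back to its resting value 0 and is perturbed only by the
% input I_i(t) representing presentation of the i-th letter.
\[
  \dot{x}_i(t) \;=\; -\,\alpha\, x_i(t) \;+\; I_i(t), \qquad \alpha > 0 .
\]
```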

Postulate-5: the system adapts with as little bias as possible to its environment; the system of functions x_i(t) is therefore kept as linear as possible.

Postulate-6: after practicing the i-th and then the j-th letter, the system, when given the i-th letter, predicts the j-th letter. That is, once the i-th and j-th letters have been learned, presentation of the i-th letter causes an internal mechanism of the network to perturb x_j(t) with lag w; the figure expresses this in the simplest linear and continuous way.

Postulate-7: the dynamics of predicting the j-th letter is localized. The influence of x_i on x_j travels along a particular directed pathway, and the learning process is akin to eliminating incorrect paths.

Postulate-8: since the i-th letter alone does not yield learning, the system must have a process for distinguishing the i-th from the j-th letter. Practicing the i-th and then the j-th letter with spacing w is represented in a process z_ij that cross-correlates the past value x_i(t-w) with the present value x_j(t), again written in the simplest linear and continuous way.

Figure 4.2. Postulates 5 to 8 and the resulting derivations for the simple network (a continuation of Figure 4.1).
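The "simplest linear and continuous" equations referred to in the figure can likewise be reconstructed only up to their general form. A sketch consistent with Postulates 5 to 8, following Grossberg's published embedding field equations (the coefficients β, γ, δ are assumptions here), is:

```latex
% Plausible reconstruction of the laws implied by Postulates 5-8:
% the activity x_j is driven by the delayed activity x_i(t-w), and the
% memory trace z_{ij} cross-correlates past x_i with present x_j.
\[
  \dot{x}_j(t) \;=\; -\,\alpha\, x_j(t) \;+\; \beta\, x_i(t-w) \;+\; I_j(t),
\]
\[
  \dot{z}_{ij}(t) \;=\; -\,\gamma\, z_{ij}(t) \;+\; \delta\, x_i(t-w)\, x_j(t).
\]
```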

Postulate-9: the act of practicing and the distinguishing process must work together for the system to predict the j-th letter; x_j(t) becomes larger only if both x_i(t-w) and z_ij are large.

Postulate-10: the system can predict the j-th letter regardless of which letter (i or k) is given, provided its combination with the j-th letter has been practiced; the i-th and j-th letters can thus be learned without interference from the k-th letter.

Postulate-11: the system can predict the j-th letter in complex learning situations with background noise caused by excitatory and inhibitory interactions; a signal threshold is introduced in z_ij(t) and x_j(t) to keep out the background noise.

Figure 4.3. Postulates 9 to 11 and the resulting derivations for the simple network (a continuation of Figure 4.1).

The non-linear differential equations (NLDE) of the finally derived simple network are the two equations resulting after Postulate-11. Figures 4.1, 4.2 and 4.3 show a step-by-step derivation of a simple network with the capability to learn and therefore to exhibit learned behaviors. The specific example illustrated is Grossberg's letter recognition problem.
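The two final equations themselves cannot be read off the extracted figure, but a plausible form, following Grossberg's embedding field equations with the signal threshold required by Postulate-11 (the threshold Γ and the rectification [s]^+ = max(s, 0) are assumptions of this sketch), is:

```latex
% Reconstructed form of the two equations after Postulate-11: thresholded,
% delayed signals from x_i are gated by the trace z_{ij} before exciting
% x_j, and the same thresholded signal drives learning in z_{ij}.
\[
  \dot{x}_j(t) \;=\; -\,\alpha\, x_j(t)
      \;+\; \beta \sum_{i} \bigl[x_i(t-w) - \Gamma\bigr]^{+} z_{ij}(t)
      \;+\; I_j(t),
\]
\[
  \dot{z}_{ij}(t) \;=\; -\,\gamma\, z_{ij}(t)
      \;+\; \delta\, \bigl[x_i(t-w) - \Gamma\bigr]^{+} x_j(t).
\]
```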

Note that although simple behavioral facts, and hence empirical terminology, are used to derive the network (NLDE), the NLDE variables are secondary quantities. The mathematical variables are therefore not necessarily connected to empirical interpretations. However, if the output variable is defined as representing principal quantities, empirical connections can be made.

Figure 4.4. Four possible paths after deriving the non-linear differential equation (NLDE): (1) perturb the system; (2) generalize the system; (3) observe the system for neurophysiological interpretation; (4) refine the system with respect to spatial and temporal scales.

After deriving the NLDE (i.e., the neural network) there are at least four possible paths for analyzing it (Fig. 4.4). Let us consider the system (neural network) input and output as a representation of system experience, and the perturbation cases as a representation of mathematical behavior that is qualitatively the same as the psychological interpretation. Perturbing the system (Fig. 4.4(1)) then complicates the experiences and provokes various perturbation cases. The system may also be generalized (Fig. 4.4(2)); this means associating the system with the farthest possible psychological experience, i.e., investigating the system as a purely mathematical problem.
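Path (1), perturbing the system, can be illustrated numerically. The sketch below integrates the reconstructed two-unit equations given above with forward Euler; all parameter values, pulse shapes, and presentation times are assumptions chosen for demonstration and are not taken from the thesis.

```python
import numpy as np

# Perturbation experiment (Fig. 4.4, path (1)) on the reconstructed two-unit
# embedding field equations.  Letter i is "practiced" followed by letter j a
# lag w later; afterwards letter i alone is presented and x_j is observed.

alpha, beta, gamma, delta = 1.0, 1.0, 0.1, 0.5   # assumed rate constants
Gamma = 0.05        # signal threshold (Postulate-11)
w     = 0.5         # time spacing between letter presentations (s)
dt, T = 0.01, 30.0
steps = int(T / dt)
lag   = int(w / dt)

x = np.zeros((steps, 2))    # x[:, 0] = x_i,  x[:, 1] = x_j
z = np.zeros(steps)         # z_ij, the cross-correlational trace

def pulse(t, onset, width=0.3, height=1.0):
    """Rectangular input pulse representing presentation of a letter."""
    return height if onset <= t < onset + width else 0.0

def inputs(t):
    """Training: i then j with lag w, four times.  Probe: i alone at t = 25 s."""
    I_i = sum(pulse(t, onset) for onset in (2.0, 6.0, 10.0, 14.0, 25.0))
    I_j = sum(pulse(t, onset + w) for onset in (2.0, 6.0, 10.0, 14.0))
    return I_i, I_j

for n in range(steps - 1):
    t = n * dt
    I_i, I_j = inputs(t)
    xi_delayed = x[n - lag, 0] if n >= lag else 0.0
    s = max(xi_delayed - Gamma, 0.0)                 # thresholded delayed signal
    dxi = -alpha * x[n, 0] + I_i
    dxj = -alpha * x[n, 1] + beta * s * z[n] + I_j
    dz  = -gamma * z[n] + delta * s * x[n, 1]
    x[n + 1] = x[n] + dt * np.array([dxi, dxj])
    z[n + 1] = z[n] + dt * dz

print("trace z_ij after training :", round(z[int(24.0 / dt)], 4))
print("peak x_j during the probe :", round(x[int(25.0 / dt):, 1].max(), 4))
```

After the four paired presentations the trace z_ij is positive, so presenting letter i alone during the probe perturbs x_j away from zero, which is the qualitative "prediction" behavior Postulate-6 asks for.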

Another path is to observe the system for neurophysiological interpretation (Fig. 4.4(3)). One must be careful taking this path and must avoid leaping interpretations, because the system is derived from psychological experiences and does not incorporate microscopic neurophysiology. Finally, the system may be refined with respect to the spatial and temporal scales of the NLDE (Fig. 4.4(4)); according to Grossberg, this refinement allows the system to generate hypotheses about possible transients [Grossberg, 1971]. The approaches discussed are summarized in Figure 4.5. In this project the derived system will be perturbed, because the qualitative interpretation of the mathematics, and hence the psychology of complex (related and unrelated) cases, are manifestations of the simple behavioral facts used for the derivation.

Figure 4.5. Accomplishments of embedding field theory (EFT) depending on the path taken (Fig. 4.4). (1) Perturbing the system shows that complex psychological phenomena are manifestations of a few simpler behavioral facts. (2) Generalizing the system introduces a large class of NLDE for mathematical analysis and exposes relations among alternative mathematical possibilities. (3) Observing for neurophysiological interpretation sometimes yields qualitative and quantitative agreement with neurophysiology and at other times makes new predictions. (4) Refining the spatial and temporal scales of the NLDE clarifies the model scale.

Method of minimal anatomies (MMA)

Stephen Grossberg introduced MMA as a method of successive approximations [Grossberg, 1982]. Its principle has analogies in other scientific fields, such as moving from the 1-body to the 2-body to the 3-body problem in physics, or from thermodynamics to statistical mechanics. With each successive analysis the complexity increases and the model becomes more realistic; this is consistent with Bacon's investigative method. Grossberg's MMA answers the question of how to approach the several levels of description relevant to understanding behavior. The embedding-field-type neural component is first derived from psychological postulates (Fig. 4.6). This forms the neural unit, which is then interconnected to form a specific anatomy. The specific anatomy, or system, receives input with a psychological interpretation and produces psychologically interpretable output.

Figure 4.6. Basic description of applying embedding field theory (EFT) to derive a neural network: psychological postulates are used to derive the laws of the neural components (the neural unit); interconnected neural units form specific anatomies; the resulting system maps input to output.

It should be highlighted that although the derived components are called neural units, they are not the same as a biologist's neural components. The derivation is from psychological postulates, so the modelling scale (the neural network) is much larger than that of individual neurons. Moreover, regardless of how precise the knowledge of laws and postulates may be, accounting for and analyzing every connection among some 10^12 nerve cells would be impractical. The description above (Fig. 4.6) is basically an application of MMA. A model (network) may be considered the minimal network; for instance, the derived network in figure 5.6 can be a minimal network. Subsequent steps build upon this minimal network, producing a network which may in turn be considered the minimal network for a larger model, and the cycle continues to beget ever larger models. The general MMA approach may be broken down into four basic steps (Fig. 4.7). First, a model is picked to serve as the minimal network. This is followed by the analysis step (Step 2), which helps in understanding the mechanism of the particular minimal anatomy; at this step the advantages and disadvantages of its variations can also be considered, further enhancing the understanding. With every additional postulate (Step 4) the corresponding network is the previous minimal network with components added for the additional capabilities. The resultant network is thus a hierarchy of networks answering increasingly sophisticated postulates, and the mechanisms of each successive minimal network give the resultant network a catalog of mechanisms for use in various situations.

Step 1 (Derivation Step): for a specific psychological postulate, derive the minimal network.

Step 2 (Analysis Step): analyze the psychological and neural capabilities of the minimal network.

Step 3 (Naming Step): for the formal properties (observed capabilities) of the minimal network, name the minimal network by familiar experimental names.

Step 4 (New Postulate Step): introduce a new psychological postulate to account for what the network cannot yet do.

Figure 4.7. A scheme of the method of minimal anatomies (MMA) approach to designing a neural network with embedding-field-type minimal networks.
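The cycle of Figure 4.7 can be written out as a control-flow skeleton. The sketch below is purely illustrative; the class and function names are hypothetical and stand in for work that, in the thesis, is done analytically rather than in code.

```python
from dataclasses import dataclass, field

# Hypothetical skeleton of the MMA cycle (Fig. 4.7).  Each pass derives a
# larger minimal network from the previous one (successive approximations);
# the bodies of the steps are stubs, since in practice they are analytical.

@dataclass
class MinimalNetwork:
    postulates: list = field(default_factory=list)
    mechanisms: dict = field(default_factory=dict)   # name -> formal property

def derive(net, postulate):
    """Step 1: extend the previous minimal network to satisfy the postulate."""
    net.postulates.append(postulate)
    return net

def analyze(net):
    """Step 2: enumerate psychological and neural capabilities (stubbed)."""
    return {f"capability<{p}>": True for p in net.postulates}

def name_mechanisms(net, capabilities):
    """Step 3: label formal properties with familiar experimental names."""
    net.mechanisms.update(capabilities)

# Step 4 is represented by the postulate list itself: each new postulate
# addresses something the previous minimal network could not do.
postulates = ["letters are atomic", "presentation times matter",
              "continuous setting", "presentation perturbs the system"]

net = MinimalNetwork()
for p in postulates:
    net = derive(net, p)
    name_mechanisms(net, analyze(net))

print(len(net.mechanisms), "named mechanisms in the resulting hierarchy")
```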

The naming step (Step 3) answers the question of when a particular phenomenon first appears. This step, regardless of the name chosen, does not compromise the formal correctness of the overall theory embodied in the resultant network, and it is a very useful tool for gaining deeper insight into that network. Finally, one must be careful to distinguish among postulates, mathematical properties, factual data, and interpretations of the network variables.

Figure 4.8. Two different but equivalent descriptions of neuroscience as an interdisciplinary science. Panel (a) is the neuroscience roadmap view linking biology and psychology; the highlighted region indicates the location of this thesis with regard to the roadmap. A map model is a network of neural networks, while a network of maps comprises a network system [Wells, Ch. 7, 2010]. Panel (b) is another view depicting the neuroscience ladder, adapted from [Wells, 2011a], showing several rungs, each representing a scientific construct at a different level. Moving upward toward increasing abstraction, from mechanism to behavior, is model-order reduction (MOR); migrating scientific study from the level of observable phenomena down to increasingly refined scientific constructs is scientific reduction (SR).

Modelling Scale

The above description of MMA involved using EFT to build the network. However, the general idea of MMA may also be used to build networks at smaller scales, such as pulse-coded neural networks [Sharma, 2011]. Modelling scales for neural networks are briefly discussed here. If we consider the respective disciplines as rungs of a ladder (Fig. 4.8), the objective of a neuroscientist should be to join the rungs in pursuit of the common goal of understanding how the mind-brain works. Borrowing a concept from systems theory, this can be done in two ways: model-order reduction (MOR) and scientific reduction (SR). MOR is the simplification of the amount of detail needed to obtain computationally tractable models of ever more complex systems; with MOR one moves toward increasing abstraction, from mechanism to behavior. SR, on the other hand, is the migration of scientific study and theory from one level of phenomena to other levels, more directly observable by our senses, that use increasingly refined scientific constructs. For this thesis the modelling direction is SR, using MMA to build embedding-field-type minimal-anatomy neural networks and, from them, map models.