The World According to Wolfram

Basic Summary of NKS

A New Kind of Science is Stephen Wolfram's attempt to revolutionize the theoretical and methodological underpinnings of science. Though this endeavor is incredibly ambitious and should therefore be approached critically, Wolfram's book is not all talk: it contains many startling and thought-provoking results and conjectures.
- enormous complexity can be generated from simple rules
- every system can be viewed as a computation, with implications for philosophy and AI (visual evidence and the universality of rule 110)
- a simple classification of randomness and complexity
- a threshold beyond which one gets no new fundamental complexity

Notion of Computation
- underlies all of NKS
- a computation is an operation that begins with some initial conditions and produces an output that follows from a definite set of rules. The most common examples are computations performed by computers, in which the fixed set of rules may be the functions provided by a particular programming language.
- classify systems according to what types of computations they can perform

Classification of computation (output)
1. almost all initial conditions lead to one homogeneous state
2. almost all initial conditions lead to repetitive or nested structures
3. almost all initial conditions lead to output that looks random
4. almost all initial conditions lead to output that has random aspects but also definite localized structures that interact in complicated ways
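The first claim, that enormous complexity can be generated from simple rules, is easy to check for oneself. The sketch below (my code, not Wolfram's) implements an elementary cellular automaton generically; running it with rule 30 from a single black cell produces the random-looking class 3 pattern that NKS builds on, and rule=110 gives the class 4 behavior mentioned above.

```python
# Elementary cellular automaton, a minimal sketch (not Wolfram's own code).
# A rule is an 8-bit number: bit k gives the new cell value for the
# 3-cell neighborhood whose cells spell k in binary (left, center, right).

def step(cells, rule):
    """Apply one update of an elementary CA rule to a row of 0/1 cells (wrapping)."""
    n = len(cells)
    return [
        (rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def evolve(width, steps, rule=30):
    """Evolve from a single black cell and return every row."""
    row = [0] * width
    row[width // 2] = 1
    rows = [row]
    for _ in range(steps):
        row = step(row, rule)
        rows.append(row)
    return rows

if __name__ == "__main__":
    # rule 30: class 3, random-looking; try rule=110 for class 4 structures
    for row in evolve(width=63, steps=30, rule=30):
        print("".join("#" if c else "." for c in row))
```

The entire rule fits in one byte, yet the output of rule 30 passes ordinary statistical randomness tests, which is exactly the phenomenon the book is about.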

- Analogy: the initial conditions are the inputs to the system; the state of the system after n steps is the output (637)
- KEY: think purely abstractly about the computation being performed (the output), without necessarily worrying about how the system actually carries out that computation
- this allows one to compare systems that have very different internal structures but nevertheless produce very similar outputs, which makes it a good classification scheme
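The point about comparing internally different systems purely by their outputs can be made concrete with a deliberately small example (mine, not the book's): an arithmetic system and a substitution system that perform the same computation and are therefore identical under this classification.

```python
# Two "systems" with very different internal structures that perform the
# same computation (doubling), so they coincide when classified by output.

def double_by_arithmetic(n):
    """Doubling as a single arithmetic operation."""
    return 2 * n

def double_by_substitution(n):
    """Doubling as a substitution system: rewrite every "x" as "xx"."""
    tape = "x" * n
    tape = tape.replace("x", "xx")
    return len(tape)

if __name__ == "__main__":
    print(all(double_by_arithmetic(n) == double_by_substitution(n)
              for n in range(50)))
```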

Prevalence of Universality
- a universal system is one whose underlying construction is fixed but which can be made to perform different tasks by being programmed in different ways
- main implication: a universal system is capable of emulating any other system, and thus must be able to produce behavior as complex as that of any other system
- historically, universality was thought to be rare and complicated to construct, until rule 110 was shown to be universal
- this implies that universality may be much more common than expected, and indeed Wolfram demonstrates that essentially every simple system he discusses in the book is universal; he believes that almost all systems with class 3 or 4 behavior will turn out to be universal

Implications
- universality is associated with all instances of general complex behavior
- a wide variety of systems (cellular automata, mobile CAs, substitution systems, Turing machines, ...), though internally different, are all computationally equivalent
- thus studying computation at an abstract level allows one to say meaningful things about many different systems at once
- there is a threshold of complexity, and it is relatively low; once this threshold is passed, nothing more is gained in a computational sense
- because the threshold is so low, almost any system can generate an arbitrary amount of complexity; Wolfram believes the threshold lies between class 2 and class 3

Principle of Computational Equivalence
- the major assumption of the book: all processes, whether they are produced by human effort or occur spontaneously in nature, can be viewed as computations (715); "it is possible to think of any process that follows definite rules as being a computation, regardless of the kinds of elements involved" (716)
- thus we can view processes in nature as computations; therefore "almost all processes that are not obviously simple can be viewed as computations of equivalent sophistication" (717)
- this follows because Wolfram believes that all systems with class 3 and 4 behavior are universal, and therefore computationally equivalent
- implies that there is an upper limit to complexity, and that it is relatively low
- more specifically, the principle says that systems found in the natural world can perform computations up to a maximal ("universal") level of computational power, and that most systems do in fact attain this maximal level; consequently, most systems are computationally equivalent
- for example, the workings of the human brain or the evolution of weather systems can, in principle, compute the same things as a computer; computation is then simply a question of translating inputs and outputs from one system to another
- if some system seems complicated, then it is most likely universal
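What "a fixed construction programmed in different ways" means can be made concrete with a toy Turing-machine interpreter. This is only a sketch of the general idea; both the interpreter and the little unary-increment program below are illustrative inventions, not from NKS.

```python
# A minimal Turing-machine interpreter (illustrative sketch).
# The interpreter itself is fixed; supplying different transition tables
# "programs" it to perform different tasks: the essence of universality.

def run(rules, tape, state="A", halt="H", max_steps=10_000):
    """rules maps (state, symbol) -> (new_symbol, move, new_state), move in {-1, +1}.

    Returns the tape contents between the leftmost and rightmost visited cells.
    If the machine has not halted after max_steps, the current tape is returned.
    """
    cells = dict(enumerate(tape))  # sparse tape; unwritten cells are blank (0)
    pos = 0
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = cells.get(pos, 0)
        write, move, state = rules[(state, symbol)]
        cells[pos] = write
        pos += move
    lo, hi = min(cells), max(cells)
    return [cells.get(i, 0) for i in range(lo, hi + 1)]

# Example program: append a 1 to a block of 1s (a unary "increment").
INCREMENT = {
    ("A", 1): (1, +1, "A"),  # skip over the existing 1s
    ("A", 0): (1, +1, "H"),  # write one more 1, then halt
}

if __name__ == "__main__":
    print(run(INCREMENT, [1, 1, 1]))
```

Swapping in a different rule table makes the same interpreter compute something else; a universal cellular automaton such as rule 110 plays the role of the interpreter, with the program encoded in its initial conditions.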

PoCE allows for an explanation of complexity: systems seem complex when they are of equivalent sophistication to the systems we use for perception and analysis. Thus "observers will tend to be computationally equivalent to the systems they observe, with the inevitable consequence that they will consider the behavior of such systems complex." (737) Our processes of perception and analysis are of equivalent sophistication, and even the most powerful computers, and our minds, must obey the PoCE. Can these (computer and brain) be more sophisticated? Presumably not, at least if we want actual results and not just generalities: producing results requires definite physical processes, which are therefore subject to the same limitations as any other process.

Generation of Randomness
1. externally imposed randomness
2. random initial conditions
3. intrinsically generated randomness
- Wolfram believes that the third option underlies most of the generation of complexity in the physical world, because of the ease with which nature seems to produce it. Now that we know that enormous complexity can be generated from very simple rules, it seems plausible that this is what nature does as well. This fits with the idea of natural selection: nature simply generates all possible outcomes, and the feasible ones survive over time.

Problem of Randomness
- how can one have a deterministic emergence of randomness? Does the answer lie in interactions between localized structures? No, because there are no localized structures to speak of.
- randomness is colloquially defined as lots of complicated activity going on without any patterns. Though the complexity in rule 30 fits this definition, as well as the statistical, mathematical, and cryptographic definitions of randomness, it seems that there is something more to be explained.
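Intrinsic randomness generation is easy to demonstrate: rule 30 is fully deterministic, yet the center column of its evolution from a single black cell looks statistically random (Wolfram used this column as a pseudorandom source in Mathematica). A minimal sketch:

```python
# Deterministic randomness from rule 30 (illustrative sketch).
# Rule 30 can be written as: new_cell = left XOR (center OR right).

def rule30_center_bits(n):
    """Return the first n bits of the center column of rule 30."""
    width = 2 * n + 3          # wide enough that edge effects never reach center
    row = [0] * width
    center = width // 2
    row[center] = 1
    bits = []
    for _ in range(n):
        bits.append(row[center])
        row = [
            row[(i - 1) % width] ^ (row[i] | row[(i + 1) % width])
            for i in range(width)
        ]
    return bits

if __name__ == "__main__":
    bits = rule30_center_bits(256)
    print("".join(map(str, bits)))
    print("fraction of 1s:", sum(bits) / len(bits))
```

There is no external noise and no random seed here: the "randomness" is generated intrinsically by the rule itself, which is exactly Wolfram's third option.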
Computational Irreducibility and the Problem of Prediction
- most major triumphs of theoretical science have involved finding a shortcut that allows one to determine the outcome of a system's evolution without explicitly following that evolution
- but for many phenomena in nature, no such shortcut has ever been found. This could be a temporary situation, but Wolfram believes it is a consequence of the PoCE, which he calls computational irreducibility.

If one views the evolution of a system as a computation, where each step in the evolution takes a certain amount of computational effort, then traditional mathematics suggests that there should be a way to find a shortcut and know the outcome with less computational effort. Wolfram's pictures of systems like rule 30 suggest that this is not always true: such behavior is computationally irreducible.
- this implies that even if one knows the underlying rules of a system, predicting what the system will do can still take an irreducible amount of computational work
- to make meaningful predictions, "it must at some level be the case that the systems making the predictions outrun the systems it is trying to predict. But for this to happen the system making the predictions must be able to perform more sophisticated computations than the system it is trying to predict." (741) The PoCE says this is not possible; thus there are many systems for which systematic prediction cannot be done, because their behavior is computationally irreducible.
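The contrast with reducible behavior is instructive. For the additive rule 90 there is a shortcut: starting from a single black cell, row n is Pascal's triangle mod 2, so any cell can be computed directly from a binomial coefficient without simulating the intermediate rows. No analogous closed form is known for rule 30. A sketch of the reducible case:

```python
from math import comb

# Rule 90 (new_cell = left XOR right) is computationally reducible:
# its evolution from a single black cell is Pascal's triangle mod 2.

def rule90_row_by_simulation(n):
    """Evolve rule 90 step by step from a single black cell; O(n^2) work."""
    width = 2 * n + 1
    row = [0] * width
    row[n] = 1
    for _ in range(n):
        row = [row[(i - 1) % width] ^ row[(i + 1) % width] for i in range(width)]
    return row

def rule90_row_by_shortcut(n):
    """Jump straight to row n: nonzero cells sit at even indices 2j with value C(n, j) mod 2."""
    row = [0] * (2 * n + 1)
    for j in range(n + 1):
        row[2 * j] = comb(n, j) % 2
    return row

if __name__ == "__main__":
    print(rule90_row_by_simulation(8) == rule90_row_by_shortcut(8))
```

For rule 30, by contrast, the only known way to obtain row n is to compute all n rows, which is exactly what computational irreducibility predicts.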

- implies that explicit simulation may be the only way to attack a large class of problems, rather than merely a convenient way to evaluate mathematical expressions
- helps explain how one can have a deterministic emergence of randomness: even though the rules are simple, it is irreducibly hard to predict the system's evolution
- the general question of what the system will do can be formally undecidable; "undecidability is common" (755)

Implications
- though one can know the underlying rules for a system's behavior, there will almost never be an easy theory for any behavior that seems complex (748)
- essential features of complex behavior can be captured with models that have simple underlying structures; thus one should use models whose underlying rules are simple
- even though a system follows definite underlying rules, its overall behavior can still have aspects that cannot be described by reasonable laws
- free will? "For even though all the components of our brains presumably follow definite laws, I strongly suspect that their overall behavior corresponds to an irreducible computation whose outcome can never in effect be found by reasonable laws." (750)
- strong A.I. is possible