Solomonoff Induction Violates Nicod's Criterion
1 Solomonoff Induction Violates Nicod's Criterion
Jan Leike and Marcus Hutter
ALT'15, 6 October 2015
2 Outline
- The Paradox of Confirmation
- Solomonoff Induction
- Results
- Resolving the Paradox of Confirmation
- References
3 Motivation What does this green apple tell you about black ravens?
4–8 The Paradox of Confirmation
Proposed by [Hempel, 1945].
H = "all ravens are black"
H′ = "all nonblack objects are nonravens"
Nicod's criterion: something that is F and G confirms "all Fs are Gs"
⟹ a nonblack nonraven confirms H′
Equivalence condition: logically equivalent hypotheses are confirmed by the same evidence
⟹ a nonblack nonraven confirms H
Paradox?
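The equivalence condition leans on the fact that H and its contrapositive H′ are logically equivalent, and that equivalence can be checked mechanically by brute force over small finite worlds. A minimal Python sketch (the world encoding is an illustrative assumption, not from the talk):

```python
from itertools import product

# A finite "world" is a collection of objects, each a pair (raven, black).
def H(world):  # all ravens are black
    return all(black for (raven, black) in world if raven)

def H_prime(world):  # all nonblack objects are nonravens
    return all(not raven for (raven, black) in world if not black)

# Exhaustively check the equivalence on every world with up to 3 objects.
objects = list(product([False, True], repeat=2))
for n in range(4):
    for world in product(objects, repeat=n):
        assert H(world) == H_prime(world)

print("H and H' agree on every world checked")
```

Nothing here resolves the paradox; it only confirms that the two formulations pick out the same worlds, so any confirmation theory satisfying the equivalence condition must treat them alike.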
10–11 Solomonoff Induction
Let U be a universal monotone Turing machine.
Solomonoff's universal prior [Solomonoff, 1964]:
    M(x) := Σ_{p : U(p) = x∗} 2^{−|p|}
(the sum ranges over minimal programs p whose output starts with x)
M is a probability distribution on X♯ := X∗ ∪ X∞.
Solomonoff normalization: M_norm(ε) := 1 and
    M_norm(xa) := M_norm(x) · M(xa) / Σ_{b∈X} M(xb)
M_norm is a probability distribution on X∞.
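The universal prior M is incomputable, but the normalization recursion itself is easy to see in action on a hand-picked computable semimeasure. A sketch, where the toy M(x) = 4^(−|x|) (an assumption chosen only because it visibly leaks probability mass) stands in for Solomonoff's M:

```python
from fractions import Fraction

# Toy computable semimeasure on binary strings: M(x) = 4^(-|x|).
# It leaks mass, since M(x0) + M(x1) = M(x)/2 < M(x).
def M(x):
    return Fraction(1, 4 ** len(x))

# Solomonoff normalization: M_norm(eps) := 1 and
#   M_norm(xa) := M_norm(x) * M(xa) / sum_b M(xb)
def M_norm(x):
    if x == "":
        return Fraction(1)
    parent = x[:-1]
    return M_norm(parent) * M(x) / sum(M(parent + b) for b in "01")

# The leak is gone: conditionals now sum to 1 at every node, and this
# particular M_norm is the uniform measure 2^(-|x|).
assert M_norm("0") + M_norm("1") == 1
assert M_norm("00") + M_norm("01") == M_norm("0")
assert M_norm("0110") == Fraction(1, 2) ** 4
```

For the real M the same recursion is incomputable, since the denominator Σ_b M(xb) is only lower semicomputable.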
12–16 Properties of Solomonoff Induction
Observe (non-i.i.d.) data x_{<t} := x₁x₂…x_{t−1} ∈ X∗, predict argmax_{a∈X} M(a | x_{<t}).
At most E + O(√E) errors when observing data from a computable measure µ (E = number of errors of the informed predictor that knows µ) [Hutter, 2001].
M merges with any computable measure µ [Blackwell and Dubins, 1962]:
    sup_H |M(H | x_{<t}) − µ(H | x_{<t})| → 0 µ-a.s. as t → ∞
M is lower semicomputable, but the conditional M(xy | x) is incomputable.
⟹ M is really good at learning.
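Merging can be previewed with a computable stand-in for M: a Bayes mixture ξ over a small class of i.i.d. coin environments. The grid of biases, the uniform prior, and the true bias 0.7 are illustrative assumptions, not part of the talk:

```python
import random

# Bayes mixture xi over nine i.i.d. coin environments Bernoulli(theta),
# theta in {0.1, ..., 0.9}, with a uniform prior.
random.seed(0)
thetas = [i / 10 for i in range(1, 10)]
weights = {t: 1.0 / len(thetas) for t in thetas}

true_theta = 0.7  # the computable measure mu generating the data
for _ in range(2000):
    x = 1 if random.random() < true_theta else 0
    # Bayes update: multiply each weight by its likelihood of x, renormalize.
    for t in thetas:
        weights[t] *= t if x == 1 else 1 - t
    z = sum(weights.values())
    for t in thetas:
        weights[t] /= z

# The posterior predictive xi(1 | x_<t) merges with mu(1 | x_<t) = 0.7.
xi_next = sum(w * t for t, w in weights.items())
print(f"xi(1 | x_<t) after 2000 steps: {xi_next:.3f}")
```

The same mechanism drives the Blackwell–Dubins result: whenever the true measure lies in the class the mixture dominates (here it does), the posterior predictive converges to the truth almost surely.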
18–23 Setup
Alphabet = observations: X := {BR, B̄R, BR̄, B̄R̄} (black/nonblack crossed with raven/nonraven; the bar marks negation)
Hypothesis H = "all ravens are black": H := {x ∈ X∞ | x does not contain B̄R}
Data x_{<t} is drawn from a computable measure µ for t = 1, 2, …
M(H | x_{<t}) is the subjective belief in H at time step t.
Confirmation and disconfirmation:
    µ(H) = 0 ⟹ ∃t. M(H | x_{<t}) = 0 µ-a.s.
    µ(H) = 1 ⟹ M(H | x_{<t}) → 1 µ-a.s.
The equivalence condition is satisfied.
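The confirmation/disconfirmation dichotomy above can be imitated by replacing M with a tiny computable mixture. The two i.i.d. environments below and their equal priors are hypothetical choices for illustration:

```python
from fractions import Fraction

# Symbols: black/nonblack x raven/nonraven; "~" marks negation,
# so "~BR" is a nonblack raven and "~B~R" a nonblack nonraven.
# Two hypothetical i.i.d. environments stand in for the mixture M:
#   mu_all: uniform over all four symbols          -> assigns H probability 0
#   mu_H  : uniform over the three symbols != ~BR  -> H holds surely
def likelihood(env, seq):
    p = Fraction(1)
    for s in seq:
        if env == "mu_all":
            p *= Fraction(1, 4)
        else:  # mu_H gives probability 0 to the nonblack raven
            p *= Fraction(0) if s == "~BR" else Fraction(1, 3)
    return p

def belief_in_H(seq):
    # With prior 1/2 each, the posterior weight of mu_H is exactly the
    # belief in H here, because mu_H(H) = 1 and mu_all(H) = 0.
    num = likelihood("mu_H", seq)
    den = num + likelihood("mu_all", seq)
    return num / den

print(belief_in_H([]))                    # prior belief: 1/2
print(belief_in_H(["BR", "~B~R", "BR"]))  # H-consistent data raises it
print(belief_in_H(["BR", "~BR"]))         # a nonblack raven drops it to 0
```

A single nonblack raven refutes every H-consistent component at once, which is why the belief hits 0 at a finite time, while H-consistent data only drives it to 1 in the limit.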
24–26 Nicod's Criterion
Question: does a black raven confirm H, i.e., is M(H | x_{<t}) < M(H | x_{<t}BR)?
Question: does a nonblack nonraven confirm H, i.e., is M(H | x_{<t}) < M(H | x_{<t}B̄R̄)?
Answer: not always.
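That the answer is "not always" is visible even in small Bayesian mixtures, along the lines of Good [1967]: if an H-consistent "there are hardly any ravens" hypothesis carries most of the weight, a black raven can lower the belief in H. A sketch with three hypothetical i.i.d. environments (all names and numbers are illustrative):

```python
from fractions import Fraction

# Three hypothetical i.i.d. environments ("~" marks negation; ~B~R is a
# nonblack nonraven, e.g. a green apple). Uniform prior 1/3 each.
ENVS = {
    "mu_all":   {"BR": Fraction(1, 4), "~BR": Fraction(1, 4),
                 "B~R": Fraction(1, 4), "~B~R": Fraction(1, 4)},  # H fails
    "mu_H":     {"BR": Fraction(1, 3), "~BR": Fraction(0),
                 "B~R": Fraction(1, 3), "~B~R": Fraction(1, 3)},  # H holds
    "mu_green": {"BR": Fraction(0), "~BR": Fraction(0),
                 "B~R": Fraction(0), "~B~R": Fraction(1)},        # H holds
}
H_HOLDS = {"mu_all": False, "mu_H": True, "mu_green": True}

def belief_in_H(seq):
    weights = {}
    for env, probs in ENVS.items():
        w = Fraction(1, 3)
        for s in seq:
            w *= probs[s]
        weights[env] = w
    total = sum(weights.values())
    return sum(w for env, w in weights.items() if H_HOLDS[env]) / total

# After a run of green apples, belief in H is dominated by mu_green.
# A black raven then refutes mu_green and *lowers* the belief in H.
x = ["~B~R"] * 3
before = belief_in_H(x)
after = belief_in_H(x + ["BR"])
print(before > after)  # True: here the black raven disconfirms H
```

The theorems below show that Solomonoff's M exhibits this non-monotonicity too, even on computable data consistent with H.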
27–30 Solomonoff Induction and Nicod's Criterion
Theorem (Counterfactual Black Raven Disconfirms H). Let x_{1:∞} ∈ H be computable with x_t ≠ BR infinitely often. Then there is a t ∈ ℕ with x_t ≠ BR such that M(H | x_{<t}BR) < M(H | x_{<t}).
Theorem (Disconfirmation Infinitely Often for M). Let x_{1:∞} ∈ H be computable. Then M(H | x_{1:t}) < M(H | x_{<t}) infinitely often.
Theorem (Disconfirmation Finitely Often for M_norm). Let x_{1:∞} ∈ H be computable. Then ∃t₀ ∀t > t₀. M_norm(H | x_{1:t}) > M_norm(H | x_{<t}).
Theorem (Disconfirmation Infinitely Often for M_norm). There is an (incomputable) x_{1:∞} ∈ H such that M_norm(H | x_{1:t}) < M_norm(H | x_{<t}) infinitely often.
32 Resolving the Paradox of Confirmation I
Solution: reject Nicod's criterion! [Good, 1967; Jaynes, 2003; Vranas, 2004]
Not all black ravens confirm H.
33 Resolving the Paradox of Confirmation II
"In the literature there are perhaps 100 paradoxes and controversies which are like this, in that they arise from faulty intuition rather than faulty mathematics. Someone asserts a general principle that seems to him intuitively right. Then, when probability analysis reveals the error, instead of taking this opportunity to educate his intuition, he reacts by rejecting the probability analysis." [Jaynes, 2003, p. 144]
35 References I
Blackwell, D. and Dubins, L. (1962). Merging of opinions with increasing information. The Annals of Mathematical Statistics.
Good, I. J. (1967). The white shoe is a red herring. The British Journal for the Philosophy of Science, 17(4).
Hempel, C. G. (1945). Studies in the logic of confirmation (I.). Mind.
Hutter, M. (2001). New error bounds for Solomonoff prediction. Journal of Computer and System Sciences, 62(4).
Jaynes, E. T. (2003). Probability Theory: The Logic of Science. Cambridge University Press.
36 References II
Solomonoff, R. (1964). A formal theory of inductive inference. Parts 1 and 2. Information and Control, 7(1):1–22 and 7(2).
Vranas, P. B. (2004). Hempel's raven paradox: A lacuna in the standard Bayesian solution. The British Journal for the Philosophy of Science, 55(3).
More informationClassical and Bayesian inference
Classical and Bayesian inference AMS 132 Claudia Wehrhahn (UCSC) Classical and Bayesian inference January 8 1 / 8 Probability and Statistical Models Motivating ideas AMS 131: Suppose that the random variable
More informationModels. Models of Computation, Turing Machines, and the Limits of Turing Computation. Effective Calculability. Motivation for Models of Computation
Turing Computation /0/ Models of Computation, Turing Machines, and the Limits of Turing Computation Bruce MacLennan Models A model is a tool intended to address a class of questions about some domain of
More informationAssociation studies and regression
Association studies and regression CM226: Machine Learning for Bioinformatics. Fall 2016 Sriram Sankararaman Acknowledgments: Fei Sha, Ameet Talwalkar Association studies and regression 1 / 104 Administration
More informationSTAT 499/962 Topics in Statistics Bayesian Inference and Decision Theory Jan 2018, Handout 01
STAT 499/962 Topics in Statistics Bayesian Inference and Decision Theory Jan 2018, Handout 01 Nasser Sadeghkhani a.sadeghkhani@queensu.ca There are two main schools to statistical inference: 1-frequentist
More informationInduction, confirmation, and underdetermination
Induction, confirmation, and underdetermination Christian Wüthrich http://philosophy.ucsd.edu/faculty/wuthrich/ 145 Philosophy of Science The Mother of All Problems... Hume s problem of induction A brief
More informationHuman-level concept learning through probabilistic program induction
B.M Lake, R. Salakhutdinov, J.B. Tenenbaum Human-level concept learning through probabilistic program induction journal club at two aspects in which machine learning spectacularly lags behind human learning
More informationCreative Objectivism, a powerful alternative to Constructivism
Creative Objectivism, a powerful alternative to Constructivism Copyright c 2002 Paul P. Budnik Jr. Mountain Math Software All rights reserved Abstract It is problematic to allow reasoning about infinite
More informationMihai Boicu, Gheorghe Tecuci, Dorin Marcu Learning Agent Center, George Mason University
Mihai Boicu, Gheorghe Tecuci, Dorin Marcu Learning Agent Center, George Mason University The Seventh International Conference on Semantic Technologies for Intelligence, Defense, and Security STIDS Fairfax,
More information