Building an Automatic Statistician

Building an Automatic Statistician
Zoubin Ghahramani
Department of Engineering, University of Cambridge
zoubin@eng.cam.ac.uk
http://learning.eng.cam.ac.uk/zoubin/
ALT-Discovery Science Conference, October 2014
Collaborators: James Robert Lloyd (Cambridge), David Duvenaud (Cambridge → Harvard), Roger Grosse (MIT → Toronto), Josh Tenenbaum (MIT)

THERE IS A GROWING NEED FOR DATA ANALYSIS
- We live in an era of abundant data
- The McKinsey Global Institute claims: "The United States alone faces a shortage of 140,000 to 190,000 people with analytical expertise and 1.5 million managers and analysts with the skills to understand and make decisions based on the analysis of big data."
- Diverse fields increasingly rely on expert statisticians, machine learning researchers and data scientists, e.g.:
  - Computational sciences (e.g. biology, astronomy, ...)
  - Online advertising
  - Quantitative finance
  - ...

GOALS OF THE AUTOMATIC STATISTICIAN PROJECT
- Provide a set of tools for understanding data that require minimal expert input
- Uncover challenging research problems in, e.g.:
  - Automated inference
  - Model construction and comparison
  - Data visualisation and interpretation
- Advance the field of machine learning in general

Background
- Probabilistic modelling
- Model selection and marginal likelihoods
- Bayesian nonparametrics
- Gaussian processes

Probabilistic Modelling
A model describes data that one could observe from a system. If we use the mathematics of probability theory to express all forms of uncertainty and noise associated with our model, then inverse probability (i.e. Bayes' rule) allows us to infer unknown quantities, adapt our models, make predictions and learn from data.

Bayesian Machine Learning
Everything follows from two simple rules:
- Sum rule: $P(x) = \sum_y P(x, y)$
- Product rule: $P(x, y) = P(x)\,P(y \mid x)$

Learning:
$$P(\theta \mid D, m) = \frac{P(D \mid \theta, m)\, P(\theta \mid m)}{P(D \mid m)}$$
where $P(D \mid \theta, m)$ is the likelihood of parameters $\theta$ in model $m$, $P(\theta \mid m)$ is the prior probability of $\theta$, and $P(\theta \mid D, m)$ is the posterior of $\theta$ given data $D$.

Prediction:
$$P(x \mid D, m) = \int P(x \mid \theta, D, m)\, P(\theta \mid D, m)\, d\theta$$

Model comparison:
$$P(m \mid D) = \frac{P(D \mid m)\, P(m)}{P(D)}, \qquad P(D \mid m) = \int P(D \mid \theta, m)\, P(\theta \mid m)\, d\theta$$

Model Comparison
[figure: the same data set fit with models of increasing order M = 0, 1, ..., 7]

Bayesian Occam's Razor and Model Comparison
Compare model classes, e.g. $m$ and $m'$, using posterior probabilities given $D$:
$$p(m \mid D) = \frac{p(D \mid m)\, p(m)}{p(D)}, \qquad p(D \mid m) = \int p(D \mid \theta, m)\, p(\theta \mid m)\, d\theta$$
Interpretations of the marginal likelihood ("model evidence"):
- The probability that randomly selected parameters from the prior would generate $D$.
- Probability of the data under the model, averaging over all possible parameter values.
- $\log_2 \frac{1}{p(D \mid m)}$ is the number of bits of surprise at observing data $D$ under model $m$.
Model classes that are too simple are unlikely to generate the data set. Model classes that are too complex can generate many possible data sets, so again, they are unlikely to generate that particular data set at random.
[figure: P(D | m) plotted against all possible data sets of size n, for models that are too simple, "just right", and too complex]
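
To make the first interpretation concrete, here is a small illustrative sketch (my own, not from the talk): the marginal likelihood of polynomial models of different orders is estimated by averaging the likelihood over parameters drawn from the prior, on an assumed toy dataset. Overly simple and overly complex model classes both end up with low evidence.

```python
# Monte Carlo estimate of the marginal likelihood p(D|m) for polynomial
# models, averaging the likelihood over prior draws of the parameters.
import numpy as np

rng = np.random.default_rng(0)

# toy data: a noisy quadratic (an assumed example, not the talk's dataset)
x = np.linspace(-1, 1, 20)
y = 0.5 - 1.0 * x + 2.0 * x**2 + rng.normal(0.0, 0.2, size=x.shape)

def log_marginal_likelihood(order, n_samples=20000, noise_std=0.2):
    """log p(D | m) for a polynomial of given order, with a N(0, 1) prior
    on each coefficient and known Gaussian observation noise."""
    X = np.vander(x, order + 1, increasing=True)      # design matrix
    theta = rng.normal(size=(n_samples, order + 1))   # draws from the prior
    preds = theta @ X.T                               # (n_samples, n_points)
    log_lik = (-0.5 * np.sum(((y - preds) / noise_std) ** 2, axis=1)
               - 0.5 * len(x) * np.log(2 * np.pi * noise_std**2))
    m = log_lik.max()                                 # log-mean-exp for stability
    return m + np.log(np.mean(np.exp(log_lik - m)))

for order in range(6):
    print(order, round(log_marginal_likelihood(order), 2))
```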

Bayesian Model Comparison: Occam's Razor at Work
[figures: polynomial fits of order M = 0, ..., 7 to the data, alongside the corresponding model evidence P(Y | M)]
For example, for quadratic polynomials (m = 2): $y = a_0 + a_1 x + a_2 x^2 + \epsilon$, where $\epsilon \sim N(0, \sigma^2)$ and parameters $\theta = (a_0, a_1, a_2)$.
demo: polybayes

Parametric vs Nonparametric Models
Parametric models assume some finite set of parameters $\theta$. Given the parameters, future predictions $x$ are independent of the observed data $D$: $P(x \mid \theta, D) = P(x \mid \theta)$, so $\theta$ captures everything there is to know about the data. The complexity of the model is therefore bounded even if the amount of data is unbounded. This makes them not very flexible.
Non-parametric models assume that the data distribution cannot be defined in terms of such a finite set of parameters. But they can often be defined by assuming an infinite-dimensional $\theta$. Usually we think of $\theta$ as a function. The amount of information that $\theta$ can capture about the data $D$ can grow as the amount of data grows. This makes them more flexible.

Bayesian nonparametrics
A simple framework for modelling complex data. Nonparametric models can be viewed as having infinitely many parameters.
Examples of non-parametric models:
Parametric | Non-parametric | Application
polynomial regression | Gaussian processes | function approx.
logistic regression | Gaussian process classifiers | classification
mixture models, k-means | Dirichlet process mixtures | clustering
hidden Markov models | infinite HMMs | time series
factor analysis / pPCA / PMF | infinite latent factor models | feature discovery
...

Nonlinear regression and Gaussian processes
Consider the problem of nonlinear regression: you want to learn a function $f$ with error bars from data $D = \{X, y\}$.
[figure: noisy observations y plotted against x]
A Gaussian process defines a distribution over functions $p(f)$ which can be used for Bayesian regression:
$$p(f \mid D) = \frac{p(f)\, p(D \mid f)}{p(D)}$$
Let $f = (f(x_1), f(x_2), \ldots, f(x_n))$ be an $n$-dimensional vector of function values evaluated at $n$ points $x_i \in X$. Note, $f$ is a random variable.
Definition: $p(f)$ is a Gaussian process if for any finite subset $\{x_1, \ldots, x_n\} \subset X$, the marginal distribution over that subset $p(f)$ is multivariate Gaussian.

A picture
[diagram relating models along "Bayesian", "Kernel" and "Classification" axes: Linear Regression, Logistic Regression, Bayesian Linear Regression, Bayesian Logistic Regression, Kernel Regression, Kernel Classification, GP Regression, GP Classification]

Gaussian process covariance functions (kernels)
$p(f)$ is a Gaussian process if for any finite subset $\{x_1, \ldots, x_n\} \subset X$, the marginal distribution over that finite subset $p(f)$ has a multivariate Gaussian distribution.
Gaussian processes (GPs) are parameterized by a mean function, $\mu(x)$, and a covariance function, or kernel, $K(x, x')$.
$$p(f(x), f(x')) = N(\mu, \Sigma), \quad \text{where} \quad \mu = \begin{bmatrix} \mu(x) \\ \mu(x') \end{bmatrix}, \quad \Sigma = \begin{bmatrix} K(x, x) & K(x, x') \\ K(x', x) & K(x', x') \end{bmatrix}$$
and similarly for $p(f(x_1), \ldots, f(x_n))$, where now $\mu$ is an $n \times 1$ vector and $\Sigma$ is an $n \times n$ matrix.

Gaussian process covariance functions
Gaussian processes (GPs) are parameterized by a mean function, $\mu(x)$, and a covariance function, $K(x, x')$, where $\mu(x) = E(f(x))$ and $K(x, x') = \mathrm{Cov}(f(x), f(x'))$.
An example covariance function:
$$K(x, x') = v_0 \exp\!\left(-\left(\frac{|x - x'|}{r}\right)^{\alpha}\right) + v_1 + v_2\, \delta_{ij}$$
with parameters $(v_0, v_1, v_2, r, \alpha)$. These kernel parameters are interpretable and can be learned from data:
- $v_0$: signal variance
- $v_1$: variance of bias
- $v_2$: noise variance
- $r$: lengthscale
- $\alpha$: roughness
Once the mean and covariance functions are defined, everything else about GPs follows from the basic rules of probability applied to multivariate Gaussians.
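
A minimal sketch of this example covariance function (an assumed implementation, not code from the talk; the parameter defaults are placeholders):

```python
import numpy as np

def example_kernel(x1, x2, v0=1.0, v1=0.1, v2=0.01, r=1.0, alpha=2.0):
    """Covariance matrix K(x, x') = v0*exp(-(|x-x'|/r)**alpha) + v1 + v2*delta_ij
    between two sets of 1-D inputs."""
    x1 = np.asarray(x1, dtype=float).reshape(-1, 1)
    x2 = np.asarray(x2, dtype=float).reshape(-1, 1)
    dists = np.abs(x1 - x2.T)
    K = v0 * np.exp(-(dists / r) ** alpha) + v1
    if x1.shape[0] == x2.shape[0] and np.allclose(x1, x2):
        K = K + v2 * np.eye(x1.shape[0])  # delta_ij: noise term on the diagonal only
    return K
```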

Samples from GPs with different K(x, x')
[figures: sample functions f(x) drawn from GP priors with several different covariance functions]
demo: gpdemogen

Prediction using GPs with different K(x, x')
A sample from the prior for each covariance function:
[figures: one prior draw per covariance function]
Corresponding predictions, mean with two standard deviations:
[figures: posterior mean with two-standard-deviation bands for each covariance function]
demo: gpdemo

The Automatic Statistician

WHAT WOULD AN AUTOMATIC STATISTICIAN DO?
[diagram showing the components of an automatic statistician applied to data: a language of models, search, the resulting model, evaluation, prediction, checking, and translation of the model into a report]

INGREDIENTS OF AN AUTOMATIC STATISTICIAN
- An open-ended language of models
  - Expressive enough to capture real-world phenomena...
  - ... and the techniques used by human statisticians
- A search procedure
  - To efficiently explore the language of models
- A principled method of evaluating models
  - Trading off complexity and fit to data
- A procedure to automatically explain the models
  - Making the assumptions of the models explicit...
  - ... in a way that is intelligible to non-experts

PREVIEW: AN ENTIRELY AUTOMATIC ANALYSIS
[figures: raw data (1950-1962) and the full model posterior with extrapolations]
Four additive components have been identified in the data:
- A linearly increasing function.
- An approximately periodic function with a period of 1.0 years and with linearly increasing amplitude.
- A smooth function.
- Uncorrelated noise with linearly increasing standard deviation.

DEFINING A LANGUAGE OF MODELS
[pipeline diagram, as above]

DEFINING A LANGUAGE OF REGRESSION MODELS
- Regression consists of learning a function f : X → Y from example input/output pairs
- The language should include simple parametric forms...
  - e.g. linear functions, polynomials, exponential functions
- ... as well as functions specified by high-level properties
  - e.g. smoothness, periodicity
- Inference should be tractable for all models in the language

WE CAN BUILD REGRESSION MODELS WITH GAUSSIAN PROCESSES
- GPs are distributions over functions such that any finite subset of function evaluations, (f(x_1), f(x_2), ..., f(x_N)), have a joint Gaussian distribution
- A GP is completely specified by
  - a mean function, µ(x) = E(f(x))
  - a covariance / kernel function, k(x, x') = Cov(f(x), f(x'))
- Denoted f ∼ GP(µ, k)
[figures: draws from a GP prior, then the GP posterior mean and posterior uncertainty as data are observed]
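
The posterior mean and uncertainty in the figure come from standard Gaussian conditioning. A minimal sketch, assuming a squared-exponential kernel and Gaussian observation noise (illustrative, not the system's code):

```python
import numpy as np

def se_kernel(x1, x2, lengthscale=1.0, signal_var=1.0):
    # squared-exponential covariance between two sets of 1-D inputs
    d = np.subtract.outer(np.ravel(x1), np.ravel(x2))
    return signal_var * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(x_train, y_train, x_test, noise_var=0.1):
    """Posterior mean and covariance at x_test under a GP prior with SE kernel."""
    K = se_kernel(x_train, x_train) + noise_var * np.eye(len(x_train))
    K_s = se_kernel(x_train, x_test)
    K_ss = se_kernel(x_test, x_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha                  # posterior mean
    v = np.linalg.solve(L, K_s)
    cov = K_ss - v.T @ v                  # posterior covariance
    return mean, cov
```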

A LANGUAGE OF GAUSSIAN PROCESS KERNELS
- It is common practice to use a zero mean function, since the mean can be marginalised out
  - Suppose f(x) ∼ GP(a · µ(x), k(x, x')) where a ∼ N(0, 1)
  - Then equivalently, f(x) ∼ GP(0, µ(x)µ(x') + k(x, x'))
- We therefore define a language of GP regression models by specifying a language of kernels

THE ATOMS OF OUR LANGUAGE
Five base kernels, encoding the following types of functions:
- Squared exp. (SE): smooth functions
- Periodic (PER): periodic functions
- Linear (LIN): linear functions
- Constant (C): constant functions
- White noise (WN): Gaussian noise
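
Below is a hedged sketch of the five base kernels as covariance functions of 1-D inputs; the parameterisations (lengthscales, variances) are assumptions, since the slide only names the kernel types.

```python
import numpy as np

def _d(x1, x2):
    # matrix of pairwise differences between two 1-D input vectors
    return np.subtract.outer(np.ravel(x1), np.ravel(x2))

def SE(x1, x2, ell=1.0, sf2=1.0):               # smooth functions
    return sf2 * np.exp(-0.5 * (_d(x1, x2) / ell) ** 2)

def PER(x1, x2, period=1.0, ell=1.0, sf2=1.0):  # periodic functions
    return sf2 * np.exp(-2.0 * np.sin(np.pi * _d(x1, x2) / period) ** 2 / ell ** 2)

def LIN(x1, x2, sf2=1.0):                       # linear functions
    return sf2 * np.outer(np.ravel(x1), np.ravel(x2))

def C(x1, x2, sf2=1.0):                         # constant functions
    return sf2 * np.ones((np.size(x1), np.size(x2)))

def WN(x1, x2, sn2=1.0):                        # Gaussian noise
    return sn2 * (np.abs(_d(x1, x2)) < 1e-12).astype(float)
```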

THE COMPOSITION RULES OF OUR LANGUAGE
Two main operations: addition and multiplication.
- LIN × LIN: quadratic functions
- SE × PER: locally periodic
- LIN + PER: periodic plus linear trend
- SE + PER: periodic plus smooth trend
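
Because kernels are closed under addition and multiplication, composite models can be built pointwise from the base kernels. An illustrative sketch (using the base-kernel functions from the block above):

```python
# kernel combinators: add and multiply covariance functions pointwise
def add(k1, k2):
    return lambda x1, x2: k1(x1, x2) + k2(x1, x2)

def mul(k1, k2):
    return lambda x1, x2: k1(x1, x2) * k2(x1, x2)

quadratic        = mul(LIN, LIN)   # LIN x LIN
locally_periodic = mul(SE, PER)    # SE x PER
periodic_trend   = add(LIN, PER)   # LIN + PER
smooth_periodic  = add(SE, PER)    # SE + PER
```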

MODELING CHANGEPOINTS
Assume $f_1(x) \sim GP(0, k_1)$ and $f_2(x) \sim GP(0, k_2)$. Define:
$$f(x) = (1 - \sigma(x))\, f_1(x) + \sigma(x)\, f_2(x)$$
where $\sigma$ is a sigmoid function between 0 and 1. Then $f \sim GP(0, k)$, where
$$k(x, x') = (1 - \sigma(x))\, k_1(x, x')\, (1 - \sigma(x')) + \sigma(x)\, k_2(x, x')\, \sigma(x')$$
We define the changepoint operator $k = CP(k_1, k_2)$.
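
A sketch of the CP operator exactly as defined above, with an assumed logistic parameterisation of the sigmoid (the location and steepness parameters are illustrative):

```python
import numpy as np

def CP(k1, k2, location=0.0, steepness=1.0):
    """Changepoint kernel switching from k1 to k2 around `location`."""
    def kernel(x1, x2):
        s1 = 1.0 / (1.0 + np.exp(-steepness * (np.ravel(x1) - location)))
        s2 = 1.0 / (1.0 + np.exp(-steepness * (np.ravel(x2) - location)))
        a = np.outer(1.0 - s1, 1.0 - s2)   # (1 - sigma(x)) k1 (1 - sigma(x'))
        b = np.outer(s1, s2)               # sigma(x) k2 sigma(x')
        return a * k1(x1, x2) + b * k2(x1, x2)
    return kernel
```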

AN EXPRESSIVE LANGUAGE OF MODELS
Regression model | Kernel
GP smoothing | SE + WN
Linear regression | C + LIN + WN
Multiple kernel learning | Σ SE + WN
Trend, cyclical, irregular | Σ SE + Σ PER + WN
Fourier decomposition | C + Σ cos + WN
Sparse spectrum GPs | Σ cos + WN
Spectral mixture | Σ SE × cos + WN
Changepoints | e.g. CP(SE, SE) + WN
Heteroscedasticity | e.g. SE + LIN × WN
Note: cos is a special case of our version of PER

DISCOVERING A GOOD MODEL VIA SEARCH
[pipeline diagram, as above]

DISCOVERING A GOOD MODEL VIA SEARCH
- The language is defined as the arbitrary composition of five base kernels (WN, C, LIN, SE, PER) via three operators (+, ×, CP)
- The space spanned by this language is open-ended and can have a high branching factor, requiring a judicious search
- We propose a greedy search for its simplicity and similarity to human model-building (a sketch follows below)
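
The sketch below illustrates the greedy procedure: at each depth, every candidate expression is scored, the best one is kept, and it is expanded with each operator and base kernel. The `score` function is an assumed callback that fits hyperparameters and returns the BIC-penalised marginal likelihood described on the evaluation slide; the real system manipulates kernel expressions symbolically rather than as strings.

```python
def greedy_kernel_search(data, base_kernels, score, max_depth=10):
    """base_kernels: kernel names, e.g. ["WN", "C", "LIN", "SE", "PER"].
    score(expr, data): assumed to fit hyperparameters and return the
    BIC-penalised log marginal likelihood of kernel expression `expr`."""
    best, best_score = None, -float("inf")
    frontier = list(base_kernels)                      # depth 1: the base kernels
    for _ in range(max_depth):
        top_score, top_expr = max((score(e, data), e) for e in frontier)
        if top_score <= best_score:
            break                                      # no candidate improves the model
        best, best_score = top_expr, top_score
        # expand the current best expression with every operator and base kernel
        frontier = ([f"({best} + {b})" for b in base_kernels]
                    + [f"({best} * {b})" for b in base_kernels]
                    + [f"CP({best}, {b})" for b in base_kernels])
    return best, best_score
```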

EXAMPLE: MAUNA LOA KEELING CURVE
Current model: RQ
[figure: data and model fit, extrapolating over 2000-2010]
Search tree: Start → SE, RQ, LIN, PER → SE + RQ, ..., PER + RQ, ..., PER × RQ → SE + PER + RQ, ..., SE × (PER + RQ) → ...

EXAMPLE: MAUNA LOA KEELING CURVE
Current model: (PER + RQ)
[figure: data and model fit, extrapolating over 2000-2010]
Search tree: Start → SE, RQ, LIN, PER → SE + RQ, ..., PER + RQ, ..., PER × RQ → SE + PER + RQ, ..., SE × (PER + RQ) → ...

EXAMPLE: MAUNA LOA KEELING CURVE
Current model: SE × (PER + RQ)
[figure: data and model fit, extrapolating over 2000-2010]
Search tree: Start → SE, RQ, LIN, PER → SE + RQ, ..., PER + RQ, ..., PER × RQ → SE + PER + RQ, ..., SE × (PER + RQ) → ...

EXAMPLE: MAUNA LOA KEELING CURVE
Current model: (SE + SE × (PER + RQ))
[figure: data and model fit, extrapolating over 2000-2010]
Search tree: Start → SE, RQ, LIN, PER → SE + RQ, ..., PER + RQ, ..., PER × RQ → SE + PER + RQ, ..., SE × (PER + RQ) → ...

MODEL EVALUATION
[pipeline diagram, as above]

MODEL EVALUATION
- After proposing a new model, its kernel parameters are optimised by conjugate gradients
- We evaluate each optimised model, M, using the model evidence (marginal likelihood), which can be computed analytically for GPs
- We penalise the marginal likelihood for the optimised kernel parameters using the Bayesian Information Criterion (BIC):
$$\mathrm{BIC}(M) = \log p(D \mid M) - \frac{p}{2} \log n$$
where p is the number of kernel parameters, D represents the data, and n is the number of data points.
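
As a formula, the criterion is straightforward to compute once the optimised log marginal likelihood is known (a one-line sketch; the argument names are mine):

```python
import numpy as np

def bic_score(log_marginal_likelihood, num_kernel_params, num_data):
    # BIC(M) = log p(D | M) - (p / 2) * log n
    return log_marginal_likelihood - 0.5 * num_kernel_params * np.log(num_data)
```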

AUTOMATIC TRANSLATION OF MODELS
[pipeline diagram, as above]

AUTOMATIC TRANSLATION OF MODELS
- Search can produce arbitrarily complicated models from the open-ended language, but two main properties allow description to be automated:
  - Kernels can be decomposed into a sum of products
    - A sum of kernels corresponds to a sum of functions
    - Therefore, we can describe each product of kernels separately
  - Each kernel in a product modifies a model in a consistent way
    - Each kernel roughly corresponds to an adjective

SUM OF PRODUCTS NORMAL FORM
Suppose the search finds the following kernel:
SE × (WN × LIN + CP(C, PER))
The changepoint can be converted into a sum of products (writing σ for the sigmoid factor and σ̄ for 1 − σ):
SE × (WN × LIN + C × σ̄ + PER × σ)
Multiplication can be distributed over addition:
SE × WN × LIN + SE × C × σ̄ + SE × PER × σ
Simplification rules are applied:
WN × LIN + SE × σ̄ + SE × PER × σ
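
The distribution step can be checked mechanically by treating kernels as commuting symbols, e.g. with sympy (an illustration of the normal-form idea, not the system's own code; `sigma` and `sigmabar` stand for the sigmoid factors):

```python
import sympy as sp

SE, WN, LIN, C, PER, sigma, sigmabar = sp.symbols("SE WN LIN C PER sigma sigmabar")
expr = SE * (WN * LIN + C * sigmabar + PER * sigma)   # changepoint already expanded
print(sp.expand(expr))   # -> SE*WN*LIN + SE*C*sigmabar + SE*PER*sigma (up to term ordering)
```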

SUMS OF KERNELS ARE SUMS OF FUNCTIONS
If $f_1 \sim GP(0, k_1)$ and independently $f_2 \sim GP(0, k_2)$, then $f_1 + f_2 \sim GP(0, k_1 + k_2)$.
[figures: example posteriors decomposed into sums of additive components]
We can therefore describe each component separately.

PRODUCTS OF KERNELS
PER: "periodic function"
On their own, each kernel is described by a standard noun phrase.

PRODUCTS OF KERNELS - SE
SE × PER: "approximately periodic function"
Multiplication by SE removes long-range correlations from a model, since SE(x, x') decreases monotonically to 0 as |x − x'| increases.

PRODUCTS OF KERNELS - LIN
SE × PER × LIN: "approximately periodic function with linearly growing amplitude"
Multiplication by LIN is equivalent to multiplying the function being modeled by a linear function. If f(x) ∼ GP(0, k), then x f(x) ∼ GP(0, k × LIN). This causes the standard deviation of the model to vary linearly without affecting the correlation.
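
A one-line check of this claim using the bilinearity of covariance (my derivation, assuming LIN(x, x') = x x' as in the base-kernel sketch above):
$$\mathrm{Cov}\big(x\,f(x),\; x'\,f(x')\big) = x\,x'\,\mathrm{Cov}\big(f(x), f(x')\big) = \mathrm{LIN}(x, x')\, k(x, x') = (k \times \mathrm{LIN})(x, x')$$
so multiplying the latent function by x is the same as multiplying the kernel by LIN.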

PRODUCTS OF KERNELS - CHANGEPOINTS
SE × PER × LIN × σ: "approximately periodic function with linearly growing amplitude until 1700"
Multiplication by σ is equivalent to multiplying the function being modeled by a sigmoid.

AUTOMATICALLY GENERATED REPORTS
[pipeline diagram, as above]

EXAMPLE: AIRLINE PASSENGER VOLUME
[figures: raw data (1950-1962) and the full model posterior with extrapolations]
Four additive components have been identified in the data:
- A linearly increasing function.
- An approximately periodic function with a period of 1.0 years and with linearly increasing amplitude.
- A smooth function.
- Uncorrelated noise with linearly increasing standard deviation.

EXAMPLE: AIRLINE PASSENGER VOLUME
This component is linearly increasing.
[figures: posterior of component 1; sum of components up to component 1 (1950-1960)]

EXAMPLE: AIRLINE PASSENGER VOLUME
This component is approximately periodic with a period of 1.0 years and varying amplitude. Across periods the shape of this function varies very smoothly. The amplitude of the function increases linearly. The shape of this function within each period has a typical lengthscale of 6.0 weeks.
[figures: posterior of component 2; sum of components up to component 2 (1950-1960)]

EXAMPLE: AIRLINE PASSENGER VOLUME
This component is a smooth function with a typical lengthscale of 8.1 months.
[figures: posterior of component 3; sum of components up to component 3 (1950-1960)]

EXAMPLE: AIRLINE PASSENGER VOLUME
This component models uncorrelated noise. The standard deviation of the noise increases linearly.
[figures: posterior of component 4; sum of components up to component 4 (1950-1960)]

EXAMPLE: SOLAR IRRADIANCE
This component is constant.
[figures: posterior of component 1; sum of components up to component 1 (1650-2000)]

EXAMPLE: SOLAR IRRADIANCE
This component is constant. This component applies from 1643 until 1716.
[figures: posterior of component 2; sum of components up to component 2 (1650-2000)]

EXAMPLE: SOLAR IRRADIANCE
This component is a smooth function with a typical lengthscale of 23.1 years. This component applies until 1643 and from 1716 onwards.
[figures: posterior of component 3; sum of components up to component 3 (1650-2000)]

EXAMPLE: SOLAR IRRADIANCE
This component is approximately periodic with a period of 10.8 years. Across periods the shape of this function varies smoothly with a typical lengthscale of 36.9 years. The shape of this function within each period is very smooth and resembles a sinusoid. This component applies until 1643 and from 1716 onwards.
[figures: posterior of component 4; sum of components up to component 4 (1650-2000)]

OTHER EXAMPLES
See http://www.automaticstatistician.com

GOOD PREDICTIVE PERFORMANCE AS WELL
[figure: standardised RMSE over 13 data sets for ABCD accuracy, ABCD interpretability, spectral kernels, trend/cyclical/irregular, Bayesian MKL, Eureqa, changepoints, squared exponential, and linear regression]
- Tweaks can be made to the algorithm to improve accuracy or interpretability of the models produced...
- ... but both methods are highly competitive at extrapolation (shown above) and interpolation

MODEL CHECKING AND CRITICISM
- Good statistical modelling should include model criticism:
  - Does the data match the assumptions of the model?
  - For example, if the model assumed Gaussian noise, does a Q-Q plot reveal non-Gaussian residuals?
- Our automatic statistician does posterior predictive checks, dependence tests and residual tests
- We have also been developing more systematic nonparametric approaches to model criticism using kernel two-sample testing with MMD (a sketch follows below)
Lloyd, J. R., and Ghahramani, Z. (2014) Statistical Model Criticism using Kernel Two Sample Tests. http://mlg.eng.cam.ac.uk/lloyd/papers/kernel-model-checking.pdf
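
For reference, a hedged sketch of an unbiased MMD² estimate with an RBF kernel, the kind of two-sample statistic the cited paper builds on (illustrative only; the kernel bandwidth `gamma` and the inputs, e.g. observed data versus posterior-predictive samples, are assumptions):

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    # RBF kernel matrix between two sample sets of shape (n, d)
    d2 = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2 * a @ b.T
    return np.exp(-gamma * d2)

def mmd2_unbiased(x, y, gamma=1.0):
    """Unbiased estimate of MMD^2 between samples x (data) and y (model)."""
    Kxx, Kyy, Kxy = rbf(x, x, gamma), rbf(y, y, gamma), rbf(x, y, gamma)
    m, n = len(x), len(y)
    return ((Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
            + (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
            - 2 * Kxy.mean())
```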

CHALLENGES
- Interpretability / accuracy
- Increasing the expressivity of the language
  - e.g. monotonicity, positive functions, symmetries
- Computational complexity of searching through a huge space of models
- Extending the automatic reports to multidimensional datasets
  - Search and descriptions naturally extend to multiple dimensions, but automatically generating relevant visual summaries is harder

CURRENT AND FUTURE DIRECTIONS
- Automatic statistician for:
  - One-dimensional time series
  - Linear regression (classical)
  - Multivariate nonlinear regression (cf. Duvenaud, Lloyd et al., ICML 2013)
  - Multivariate classification (w/ Nikola Mrksic)
  - Completing and interpreting tables and databases (w/ Kee Chong Tan)
- Probabilistic programming
  - Probabilistic models are expressed in a general (Turing-complete) programming language
  - A universal inference engine can then be used to infer unobserved variables given observed data
  - This can be used to implement search over the model space in an automatic statistician

SUMMARY
- We have presented the beginnings of an automatic statistician
- Our system:
  - Defines an open-ended language of models
  - Searches greedily through this space
  - Produces detailed reports describing patterns in data
  - Performs automatic model criticism
- Extrapolation and interpolation performance is highly competitive
- We believe this line of research has the potential to make powerful statistical model-building techniques accessible to non-experts

REFERENCES
Website: http://www.automaticstatistician.com
Duvenaud, D., Lloyd, J. R., Grosse, R., Tenenbaum, J. B. and Ghahramani, Z. (2013) Structure Discovery in Nonparametric Regression through Compositional Kernel Search. ICML 2013.
Lloyd, J. R., Duvenaud, D., Grosse, R., Tenenbaum, J. B. and Ghahramani, Z. (2014) Automatic Construction and Natural-language Description of Nonparametric Regression Models. AAAI 2014. http://arxiv.org/pdf/1402.4304v2.pdf
Lloyd, J. R., and Ghahramani, Z. (2014) Statistical Model Criticism using Kernel Two Sample Tests. http://mlg.eng.cam.ac.uk/lloyd/papers/kernel-model-checking.pdf
Ghahramani, Z. (2013) Bayesian nonparametrics and the probabilistic approach to modelling. Philosophical Trans. Royal Society A 371: 20110553.
Looking for postdocs!