Causal Modeling with Generative Neural Networks
1 Causal Modeling with Generative Neural Networks. Michele Sebag, TAO, CNRS, INRIA, LRI, Université Paris-Sud. Joint work: D. Kalainathan, O. Goudet, I. Guyon, M. Hajaiej, A. Decelle, C. Furtlehner. Credit for slides: Yann LeCun. Leiden, Sept.
2 Outline: Motivation · State of the art · Causal Generative Neural Nets · Naive ML Approach to SW (space weather)
3 ML: discriminative or generative modelling. Given a training set E = {(x_i, y_i), x_i ∈ IR^d, i ∈ [[1, n]]}, usually i.i.d. samples of P(X, Y): supervised learning finds h : X → Y, or P̂(Y | X); a generative model finds P̂(X, Y). Predictive modelling might be based on correlations: if umbrellas in the street, then it rains.
4 The big data promise: ML models will expectedly support interventions in health and nutrition, education, economics/management, climate. Intervention (Pearl 2009): do(X = x) forces the variable X to the value x. Direct cause: X_i → X_j iff P(X_j | do(X_i = x, X_{∖ij} = c)) ≠ P(X_j | do(X_i = x′, X_{∖ij} = c)) for some values x ≠ x′, where X_{∖ij} denotes all variables other than X_i, X_j. Example (C: cancer, S: smoking, G: genetic factors): P(C | do{S = 0, G = 0}) ≠ P(C | do{S = 1, G = 0}).
5 Correlations do not support interventions. Causal models are needed to support interventions.
6 Why is this relevant to space weather? Causal models support understanding, and causal models are more robust, e.g. to concept drift: given observations drawn from P(X), P(Y | X), find the P̂(Y | X) that minimizes IE_{x ∼ P(X)} [ 1{ argmax_y P̂(y | x) ≠ argmax_y P(y | x) } ], knowing that the P(X) met in production might differ from the P(X) seen in training.
7 Causal modelling, how? Historically, based on interventions. However, interventions are often impossible (climate), unethical (making people smoke), or too expensive (e.g., in economics). Machine Learning alternatives: observational data; statistical tests; learned models; prior knowledge / assumptions / constraints.
8 Outline: Motivation · State of the art · Causal Generative Neural Nets · Naive ML Approach to SW
9 Functional Causal Models, a.k.a. Structural Equation Models: X_i = f_i(Pa(X_i), E_i), where Pa(X_i) are the direct causes of X_i and the noise variable E_i captures all unobserved influences. Example: X_1 = f_1(E_1); X_2 = f_2(X_1, E_2); X_3 = f_3(X_1, E_3); X_4 = f_4(E_4); X_5 = f_5(X_3, X_4, E_5). Tasks: find the structure of the graph (no cycles); find the functions f_i.
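To make the FCM concrete, here is a minimal Python sketch that samples from the five-variable graph above. The mechanisms f_i and the noise scales are illustrative placeholders (the slide only fixes the graph structure), and the do_x1 argument shows how an intervention replaces a mechanism:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_fcm(n=1000, do_x1=None):
    """Sample n points from the 5-variable FCM; mechanisms are placeholders."""
    e = rng.normal(size=(5, n))                        # noise variables E_1 .. E_5
    # do(X1 = v) replaces the mechanism of X1 by the constant v
    x1 = e[0] if do_x1 is None else np.full(n, do_x1)  # X1 = f1(E1)
    x2 = np.tanh(x1) + 0.1 * e[1]                      # X2 = f2(X1, E2)
    x3 = x1 ** 2 + 0.1 * e[2]                          # X3 = f3(X1, E3)
    x4 = e[3]                                          # X4 = f4(E4)
    x5 = x3 - x4 + 0.1 * e[4]                          # X5 = f5(X3, X4, E5)
    return np.column_stack([x1, x2, x3, x4, x5])

obs = sample_fcm()               # observational distribution
intv = sample_fcm(do_x1=2.0)     # interventional distribution under do(X1 = 2)
print(obs.mean(axis=0))
print(intv.mean(axis=0))         # descendants X2, X3, X5 shift; X4 does not
```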
10 Conducting a causal modelling study. Milestones: test bivariate independence (statistical tests) to find the edges; use conditional independence to prune the edges; model the full causal graph to orient the edges (e.g. X ⊥⊥ Z | Y is consistent with the chain X → Y → Z but not with X → Y ← Z). Challenges: computational complexity, calling for tractable approximations; conditional-independence tests are data-hungry; the causal sufficiency assumption (can be relaxed).
11 X ⊥⊥ Y? Testing independence. Categorical variables: test P(X, Y) = P(X)·P(Y). Entropy: H(X) = −∑_x p(x) log p(x), with x a value taken by X and p(x) its frequency. Mutual information: M(X, Y) = H(X) + H(Y) − H(X, Y). Others: χ², G-test. Continuous variables: t-test, z-test; Hilbert-Schmidt Independence Criterion (HSIC) (Gretton et al. 05): given f : X → IR and g : Y → IR, Cov(f, g) = IE_{x,y}[f(x) g(y)] − IE_x[f(x)] IE_y[g(y)], and Cov(f, g) = 0 for all f, g iff X and Y are independent.
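As an illustration of the HSIC idea, here is a sketch of the standard biased empirical estimator tr(KHLH)/n² with a Gaussian kernel; the bandwidth and the test data are arbitrary choices, not taken from the slides:

```python
import numpy as np

def rbf_gram(v, gamma=1.0):
    """Gram matrix of k(a, b) = exp(-gamma (a - b)^2) for a 1-D sample v."""
    return np.exp(-gamma * (v[:, None] - v[None, :]) ** 2)

def hsic(x, y, gamma=1.0):
    """Biased empirical HSIC, tr(K H L H) / n^2: near 0 under independence."""
    n = len(x)
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    return np.trace(rbf_gram(x, gamma) @ H @ rbf_gram(y, gamma) @ H) / n ** 2

rng = np.random.default_rng(0)
x = rng.normal(size=500)
print(hsic(x, rng.normal(size=500)))   # independent: tiny value
print(hsic(x, x ** 2))                 # dependent yet uncorrelated: larger
```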
12 An ML approach (Guyon et al.): learn causal orientation from data. Training set E = {(A_i, B_i, l_i)}, where each example is a variable pair with a label l_i ∈ {→, ←, ⊥} giving its causal relation; a classifier is trained to predict the label from the pair's joint distribution.
13 Exploiting the distribution asymmetry (Hoyer et al. 09; Mooij et al.). True model, with noise ε independent of X: Y = X + ε. Learn Y = f(X) and plot the residual Y − f(X); learn X = g(Y) and plot the residual X − g(Y): only in the causal direction is the residual independent of the input variable.
14–16 Exploiting the asymmetry, 2. Given A, B: learn A = f(B) and B = g(A), and retain the model with the best fit, here A → B. Example: A is the altitude of a city, B its temperature.
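A sketch combining the last few slides under stated assumptions: fit both directions with a polynomial regressor and keep the direction whose residual looks more independent of its input (the Hoyer et al. criterion); the data-generating process and the regressor are illustrative choices:

```python
import numpy as np

def std(v):
    return (v - v.mean()) / v.std()

def rbf_gram(v, gamma=1.0):
    return np.exp(-gamma * (v[:, None] - v[None, :]) ** 2)

def hsic(x, y, gamma=1.0):
    """Biased empirical HSIC, as in the earlier sketch."""
    n = len(x)
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(rbf_gram(x, gamma) @ H @ rbf_gram(y, gamma) @ H) / n ** 2

def direction(a, b, deg=4):
    """Fit both directions; keep the one with the more independent residual."""
    res_b = b - np.polyval(np.polyfit(a, b, deg), a)   # try B = g(A) + noise
    res_a = a - np.polyval(np.polyfit(b, a, deg), b)   # try A = f(B) + noise
    return ("A -> B" if hsic(std(a), std(res_b)) < hsic(std(b), std(res_a))
            else "B -> A")

rng = np.random.default_rng(0)
a = rng.normal(size=500)
b = a + a ** 3 + rng.normal(size=500)   # ground truth: A causes B
print(direction(a, b))                  # expected: A -> B
```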
17 Find v-structures: A → B ← C, where A ⊥⊥ C but A and C become dependent given B (explaining away causes).
18 Outline: Motivation · State of the art · Causal Generative Neural Nets · Naive ML Approach to SW
19 Auto-Encoders. Training set E = {x_i, x_i ∈ IR^d, i = 1 … n}. Structure of the auto-encoder: an encoder maps the input x to z, a compressed representation of x, and a decoder maps z back to a reconstruction x̂. Training minimizes the Mean Squared Error (MSE): ∑_i ‖x_i − x̂_i‖².
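A minimal sketch of such an auto-encoder, assuming PyTorch; the layer sizes, optimizer, and random stand-in batch are illustrative choices, not from the slides:

```python
import torch
import torch.nn as nn

d, k = 784, 32                                        # input / bottleneck sizes (assumed)
encoder = nn.Sequential(nn.Linear(d, k), nn.ReLU())   # x -> z
decoder = nn.Linear(k, d)                             # z -> x_hat
opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)

x = torch.rand(256, d)                  # placeholder batch; real data goes here
for step in range(200):
    x_hat = decoder(encoder(x))
    loss = ((x - x_hat) ** 2).sum(dim=1).mean()   # the slide's MSE objective
    opt.zero_grad()
    loss.backward()
    opt.step()

z = encoder(x)                          # the compressed representation z
print(z.shape, loss.item())
```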
20 Stacked Auto-Encoders. E = {x_i, x_i ∈ IR^d, i = 1 … n}. Differences: several hidden layers; minimize the MSE or the cross-entropy loss −∑_{i,j} [x_{i,j} log x̂_{i,j} + (1 − x_{i,j}) log(1 − x̂_{i,j})].
21–22 Variational Auto-Encoders (Kingma et al. 13). E = {x_i, x_i ∈ IR^d, i = 1 … n}. Difference: the hidden layer outputs the parameters of a distribution N(μ, σ²); this distribution is used to generate the latent values, z = μ + σ · ε with ε ∼ N(0, 1).
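A sketch of the reparameterization described above, with the standard ELBO terms (reconstruction cross-entropy plus the closed-form KL to N(0, I)); the layer sizes are assumptions, not taken from the slide:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

d, h, k = 784, 128, 16                 # sizes are assumptions
enc = nn.Sequential(nn.Linear(d, h), nn.ReLU())
to_mu, to_logvar = nn.Linear(h, k), nn.Linear(h, k)  # distribution parameters
dec = nn.Sequential(nn.Linear(k, h), nn.ReLU(), nn.Linear(h, d), nn.Sigmoid())

def elbo_loss(x):
    hid = enc(x)
    mu, logvar = to_mu(hid), to_logvar(hid)
    z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # z = mu + sigma * N(0,1)
    x_hat = dec(z)
    rec = F.binary_cross_entropy(x_hat, x, reduction="sum")  # reconstruction term
    kl = -0.5 * torch.sum(1 + logvar - mu ** 2 - logvar.exp())  # KL(N(mu,sigma^2) || N(0,1))
    return rec + kl

x = torch.rand(64, d)                  # placeholder batch of inputs in [0, 1]
print(elbo_loss(x).item())
```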
23 Causal Generative Neural Nets (Goudet et al. 17). Observed data E = {x_i, x_i ∈ IR^d, i = 1 … n} and generated data Ê = {x̂_i, x̂_i ∈ IR^d, i = 1 … n̂}. Train the generator to minimize the distance between original and generated data in IR^d, measured by the Maximum Mean Discrepancy:

MMD(G) = (1/n²) ∑_{i,j} k(x_i, x_j) + (1/n̂²) ∑_{i,j} k(x̂_i, x̂_j) − (2/(n·n̂)) ∑_{i,j} k(x_i, x̂_j),

with the multi-bandwidth Gaussian kernel k(x, z) = ∑_i exp(−γ_i ‖x − z‖²), γ_i ranging over a fixed set of bandwidths (the values did not survive transcription).
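The MMD criterion can be estimated directly from samples; a sketch with a multi-bandwidth Gaussian kernel, where the bandwidth set is an assumption since the slide's values are lost:

```python
import numpy as np

def mmd(x, xg, gammas=(0.1, 1.0, 10.0)):
    """Biased empirical MMD^2 between observed x and generated xg (rows = samples)."""
    def k(a, b):
        # k(a, b) = sum_i exp(-gamma_i ||a - b||^2), as on the slide
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return sum(np.exp(-g * d2) for g in gammas)
    n, m = len(x), len(xg)
    return (k(x, x).sum() / n ** 2 + k(xg, xg).sum() / m ** 2
            - 2 * k(x, xg).sum() / (n * m))

rng = np.random.default_rng(0)
x = rng.normal(size=(300, 2))
print(mmd(x, rng.normal(size=(300, 2))))        # same law: near 0
print(mmd(x, rng.normal(1.0, 1.0, (300, 2))))   # shifted law: clearly larger
```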
24 Relaxing the causal sufficiency assumption: noise variables E_{i,j} shared between two mechanisms model possible hidden common causes. Example: X_2 = f_2(E_2, E_{2,3}); X_3 = f_3(E_3, E_{2,3}, E_{3,5}); X_4 = f_4(E_4, E_{4,5}); X_5 = f_5(X_3, X_4, E_5, E_{3,5}, E_{4,5}).
25 Graph inference. Results: Area under the precision/recall curve.

                          G2               G3               G4
  Constraint-based
    PC-Gaussian     82.3 ±4 (87.8)   80.0 ±7 (89.2)   88.1 ±10 (95.7)
    PC-HSIC         93.4 ±3 (78.5)   93.0 ±4 (77.9)   98.9 ±2  (88.0)
  Score-based
    GES             75.3 ±7 (81.2)   73.6 ±7 (77.7)   69.3 ±11 (78.6)
  Pairwise orientation
    LiNGAM          64.4 ±4 (100)    71.1 ±1 (100)    71.6 ±7  (100)
    ANM             72.9 ±9 (100)    72.5 ±4 (100)    79.9 ±5  (100)
    Jarfo           69.9 ±9 (100)    87.3 ±3 (100)    88.5 ±5  (100)
  CGNN
    CGNN-Fourier    94.5 ±2 (100)    84.9 ±9 (100)    93.6 ±3  (100)
    CGNN-MMD        96.9 ±1 (100)    96.5 ±3 (100)    97.2 ±3  (100)

Python framework available at: … Caveat: up to 50 variables.
26 Outline: Motivation · State of the art · Causal Generative Neural Nets · Naive ML Approach to SW
27 Compact solar state representations
28 Principle
29 Image preprocessing
30–35 Autoencoders: dimensionality reduction; input and output similarity; bottleneck. [Figure-only builds showing architectures and reconstructions at 512x and 256x; the images did not survive transcription.]
36 Variational Autoencoder: an assumption on the latent-space distribution.
37–38 Autoencoder training. Intermediate image size. Custom loss:

loss = (y_true − y_pred)² / (y_true + ε)^α + (y_true − y_pred)² / (1 − y_true + ε)^α
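Read as a weighting scheme, this loss amplifies reconstruction errors on pixels whose true intensity is near 0 or 1. A sketch, with ε and α values assumed since the slides do not give them:

```python
import numpy as np

def custom_loss(y_true, y_pred, eps=1e-3, alpha=1.0):
    """The slide's custom reconstruction loss; eps and alpha are assumptions.

    Dividing by (y_true + eps)^alpha up-weights errors on dark pixels,
    and by (1 - y_true + eps)^alpha up-weights errors on bright ones.
    """
    sq = (y_true - y_pred) ** 2
    return np.mean(sq / (y_true + eps) ** alpha
                   + sq / (1 - y_true + eps) ** alpha)

y = np.random.default_rng(0).random((8, 64, 64))   # placeholder image batch in [0, 1]
print(custom_loss(y, y + 0.01))
```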
39–41 Results.

  Autoencoder       Conv    Conv + Dense   Conv + PCA   Variational
  Reduction rate    1/128   1/1024         1/524        1/728

Evaluation: visual similarity; smoothness over time; classification for verification.
42–46 Results. Event classification scores (precision, recall, accuracy, F1-score) for coronal hole, Lepping, pseudo streamer, and strahl events. [The numeric values did not survive transcription.]
* Random predictor performance is … for accuracy and 0.25 for the rest.
Caveats: only 8000 labeled images; time distribution; prediction at L1; low performances. Let's extract more information.
47 Going further. Classification of solar events: more data (caveat: the train/test split). Predicting data at L1, and the propagation time from the Sun to L1: help needed!
48 Thanks. Olivier Goudet, Diviyan Kalainathan, Isabelle Guyon, Aris Tritas, Mhamed Hajaiej, Cyril Furtlehner, Aurélien Decelle.