MMD GAN and Fisher GAN
1 MMD GAN [1], Fisher GAN [2]
[1] Chun-Liang Li, Wei-Cheng Chang, Yu Cheng, Yiming Yang, and Barnabás Póczos (CMU, IBM Research)
[2] Youssef Mroueh and Tom Sercu (IBM Research)
Presented by Rui-Yi (Roy) Zhang, December 1, 2017
2 Outline
1. Overview
2. MMD GAN
3. Fisher GAN
4. Conclusion
3 Preliminaries: Generative Adversarial Nets (GANs)
Real data distribution P_X; fake data distribution P_θ, realized by a generator as x = G(z), z ∼ P(z), z ∈ Z.
Objective: \min_G \max_D V(D, G).
Discriminator D : X → [0, 1]:
L_d = \mathbb{E}_{x \sim P_X}[\log D(x)] + \mathbb{E}_{x \sim P_\theta}[\log(1 - D(x))]   (1)
where D(x) is the probability that x comes from the real data rather than from the generator.
Generator G : Z → X:
L_g = \mathbb{E}_{x \sim P_\theta}[\log(1 - D(x))] = \mathbb{E}_{z \sim P(z)}[\log(1 - D(G(z)))]   (2)
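To make the two losses concrete, here is a minimal PyTorch sketch (an illustration under assumptions, not the presented implementation): `D` and `G` are assumed to be any modules with D : X → [0, 1] and G : Z → X, and `eps` is a numerical-stability constant we add.

```python
import torch

def gan_losses(D, G, x_real, z, eps=1e-8):
    """Monte-Carlo estimates of Eqs. (1)-(2); D outputs probabilities in [0, 1]."""
    d_real = D(x_real)          # D(x) for x ~ P_X
    d_fake = D(G(z))            # D(G(z)) for z ~ P(z)
    # Discriminator ascends L_d = E[log D(x)] + E[log(1 - D(G(z)))]
    L_d = torch.log(d_real + eps).mean() + torch.log(1.0 - d_fake + eps).mean()
    # Generator descends L_g = E[log(1 - D(G(z)))]
    L_g = torch.log(1.0 - d_fake + eps).mean()
    return L_d, L_g
```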
4 Maximum Mean Discrepancy (MMD)
Given two distributions P and Q and a kernel κ, the square of the MMD distance is defined as
M_\kappa(P, Q) = \|\mu_P - \mu_Q\|_{\mathcal{H}}^2   (3)
= \mathbb{E}_{P}[\kappa(x, x')] - 2\,\mathbb{E}_{P,Q}[\kappa(x, y)] + \mathbb{E}_{Q}[\kappa(y, y')]   (4)
If κ is a characteristic kernel, then M_κ(P, Q) = 0 iff P = Q (one kind of two-sample test).
In practice we use finite samples from the distributions to estimate the MMD distance. Given X = {x_1, …, x_n} ∼ P and Y = {y_1, …, y_n} ∼ Q, one estimator of M_κ(P, Q) is
\hat{M}_\kappa(X, Y) = \frac{1}{\binom{n}{2}} \sum_{i \neq i'} \kappa(x_i, x_{i'}) - \frac{2}{n^2} \sum_{i} \sum_{j} \kappa(x_i, y_j) + \frac{1}{\binom{n}{2}} \sum_{j \neq j'} \kappa(y_j, y_{j'})   (5)
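A small NumPy sketch of the estimator in Eq. (5); the Gaussian kernel and its bandwidth σ are assumed choices here, and the function names are ours.

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    """kappa(a, b) = exp(-||a - b||^2 / (2 sigma^2)) for all pairs of rows."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * sigma ** 2))

def mmd2_unbiased(X, Y, sigma=1.0):
    """Estimator of Eq. (5): off-diagonal means for the within-sample terms."""
    n, m = len(X), len(Y)
    Kxx = gaussian_kernel(X, X, sigma)
    Kyy = gaussian_kernel(Y, Y, sigma)
    Kxy = gaussian_kernel(X, Y, sigma)
    term_xx = (Kxx.sum() - np.trace(Kxx)) / (n * (n - 1))  # off-diagonal mean
    term_yy = (Kyy.sum() - np.trace(Kyy)) / (m * (m - 1))
    return term_xx - 2.0 * Kxy.mean() + term_yy            # mean = (1/n^2) sum_{i,j}
```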
5 MMD with Kernel Learning
GMMN [Li et al., 2015] trains g_θ with a pre-specified kernel κ:
\min_\theta M_\kappa(P_X, P_\theta)   (6)
MMD GAN trains g_θ against every possible characteristic kernel in a candidate set K (more difficult to optimize):
\min_\theta \max_{\kappa \in \mathcal{K}} M_\kappa(P_X, P_\theta)   (7)
If f_φ is an injective function and κ is characteristic, the resulting composed kernel \tilde{\kappa} = \kappa \circ f_\phi is also characteristic. In practice MMD GAN chooses Gaussian kernels:
\tilde{\kappa}(x, x') = \exp(-\|f_\phi(x) - f_\phi(x')\|^2)
[Li et al., 2015] Li, Yujia, Kevin Swersky, and Richard Zemel. "Generative Moment Matching Networks." ICML 2015.
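Continuing the sketch above, the learned kernel in Eq. (7) is just the fixed Gaussian kernel composed with a feature map. Here `f_phi` is a purely illustrative stand-in (a random projection); in MMD GAN it is the trained encoder network.

```python
def mmd2_learned(X, Y, f_phi, sigma=1.0):
    """MMD with the composed kernel kappa~(x, x') = kappa(f_phi(x), f_phi(x'))."""
    return mmd2_unbiased(f_phi(X), f_phi(Y), sigma)

# Illustrative stand-in for f_phi (not the learned network):
rng = np.random.default_rng(0)
W = rng.normal(size=(784, 64))
f_phi = lambda X: np.tanh(X @ W)
```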
6 MMD GAN
Assume g_θ is locally Lipschitz; the gradient ∇_θ (max_φ f_φ ∘ g_θ) has to be bounded, so weight clipping is used as a Lipschitz approximation.
Approximate the injective function f_φ by an autoencoder (encoder f_{φ_e}, decoder f_{φ_d}). The objective is relaxed to
\min_\theta \max_\phi M_{f_{\phi_e}}(P(X), P(g_\theta(Z))) - \lambda\, \mathbb{E}_{y \in X \cup g(Z)} \|y - f_{\phi_d}(f_{\phi_e}(y))\|^2
E1: kernel selection via learning.
E2: f_{φ_e} acts as a feature transformation; the kernel two-sample test is performed in the code space.
7 MMD GAN
Algorithm 1: MMD GAN
input: α the learning rate, c the clipping parameter, B the batch size, n_c the number of discriminator iterations per generator update.
Initialize generator parameters θ and discriminator parameters φ.
while θ has not converged do
    for t = 1, …, n_c do
        Sample minibatches {x_i}_{i=1}^B ∼ P(X) and {z_j}_{j=1}^B ∼ P(Z)
        g_φ ← ∇_φ [ M_{f_{φ_e}}(P(X), P(g_θ(Z))) − λ E_{y ∈ X ∪ g(Z)} ||y − f_{φ_d}(f_{φ_e}(y))||² ]
        φ ← φ + α · RMSProp(φ, g_φ)
        φ ← clip(φ, −c, c)
    end for
    Sample minibatches {x_i}_{i=1}^B ∼ P(X) and {z_j}_{j=1}^B ∼ P(Z)
    g_θ ← ∇_θ M_{f_{φ_e}}(P(X), P(g_θ(Z)))
    θ ← θ − α · RMSProp(θ, g_θ)
end while
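Below is a hedged PyTorch sketch of one pass of Algorithm 1, under assumptions: `enc`/`dec` stand for f_{φ_e}/f_{φ_d}, `batches` is an iterator over real minibatches, the two RMSProp optimizers are built by the caller, the reconstruction penalty uses a squared error, and the hyper-parameter defaults are placeholders rather than the presented settings.

```python
import torch

def mmd2_biased(fx, fy, sigma=1.0):
    """Simplified (biased) Gaussian-kernel MMD^2 between encoded batches."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2.0 * sigma ** 2))
    return k(fx, fx).mean() - 2.0 * k(fx, fy).mean() + k(fy, fy).mean()

def mmd_gan_step(G, enc, dec, opt_d, opt_g, batches, z_dim,
                 lam=8.0, c=0.01, n_c=5):
    """One generator update preceded by n_c critic updates, as in Algorithm 1."""
    for _ in range(n_c):
        x = next(batches)
        z = torch.randn(len(x), z_dim)
        gz = G(z).detach()                       # freeze G during critic updates
        y = torch.cat([x, gz])                   # y in X union g(Z)
        recon = ((y - dec(enc(y))) ** 2).mean()  # autoencoder penalty
        loss_d = -(mmd2_biased(enc(x), enc(gz)) - lam * recon)  # ascend critic
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()
        for p in list(enc.parameters()) + list(dec.parameters()):
            p.data.clamp_(-c, c)                 # weight clipping: phi <- clip(phi)
    x = next(batches)
    z = torch.randn(len(x), z_dim)
    loss_g = mmd2_biased(enc(x), enc(G(z)))      # generator descends the MMD
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```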
8 MMD GAN Experiments I
Figure: Generated samples: (a) WGAN MNIST, (b) WGAN CelebA, (c) WGAN LSUN, (d) MMD GAN MNIST, (e) MMD GAN CelebA, (f) MMD GAN LSUN.
9 MMD GAN Experiments II
Table: Inception scores (score ± std) comparing Real data, DFM, ALI, Improved GANs, MMD GAN, WGAN, GMMN-C, and GMMN-D.
Table: Computation time. Figure panels: (a) MNIST, (b) CelebA, (c) LSUN Bedrooms.
10 MMD GAN: Gradient Penalty and Without Reconstruction Loss
Figure: MMD GAN results using a gradient penalty and without the autoencoder reconstruction loss during training: (a) CIFAR-10, (b) CelebA.
11 Fisher's Linear Discriminant Analysis (LDA)
Utilize the label information (fake or real, in the GAN setting) to find informative projections. Two-class Fisher LDA maximizes the objective
J(v) = \frac{v^\top S_B v}{v^\top S_W v}   (8)
where S_B is the between-class scatter matrix and S_W is the within-class scatter matrix:
S_B = (\mu_1 - \mu_2)(\mu_1 - \mu_2)^\top   (9)
S_W = \sum_{i \in \{1,2\}} \sum_{x \in C_i} (x - \mu_i)(x - \mu_i)^\top   (10)
Equivalent constrained optimization:
\max_v\; v^\top S_B v   (11)
\text{s.t. } v^\top S_W v = 1   (12)
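Because S_B has rank one, the maximizer of J(v) has the well-known closed form v* ∝ S_W^{-1}(μ_1 − μ_2). A short NumPy sketch (the ridge term `reg` is an assumed regularizer for invertibility):

```python
import numpy as np

def fisher_lda_direction(X1, X2, reg=1e-6):
    """Closed-form LDA direction: v* proportional to S_W^{-1} (mu_1 - mu_2)."""
    mu1, mu2 = X1.mean(axis=0), X2.mean(axis=0)
    # Within-class scatter, Eq. (10): sum over samples of (x - mu_i)(x - mu_i)^T
    Sw = (X1 - mu1).T @ (X1 - mu1) + (X2 - mu2).T @ (X2 - mu2)
    Sw += reg * np.eye(Sw.shape[0])
    v = np.linalg.solve(Sw, mu1 - mu2)
    return v / np.sqrt(v @ Sw @ v)   # rescale so the constraint v^T S_W v = 1 holds
```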
12 Fisher IPM
Integral Probability Metrics (IPM) framework: given two probability distributions P, Q ∈ P(X), the IPM indexed by a symmetric function space F is defined as
d_{\mathcal{F}}(P, Q) = \sup_{f \in \mathcal{F}} \left\{ \mathbb{E}_{x \sim P} f(x) - \mathbb{E}_{x \sim Q} f(x) \right\}   (13)
The Fisher IPM for a function space F is defined as
d_{\mathcal{F}}(P, Q) = \sup_{f \in \mathcal{F}} \frac{\mathbb{E}_{x \sim P}[f(x)] - \mathbb{E}_{x \sim Q}[f(x)]}{\sqrt{\tfrac{1}{2}\mathbb{E}_{x \sim P} f^2(x) + \tfrac{1}{2}\mathbb{E}_{x \sim Q} f^2(x)}}   (14)
The constrained form:
d_{\mathcal{F}}(P, Q) = \sup_{f \in \mathcal{F},\; \tfrac{1}{2}\mathbb{E}_{x \sim P} f^2(x) + \tfrac{1}{2}\mathbb{E}_{x \sim Q} f^2(x) = 1} \mathcal{E}(f) := \mathbb{E}_{x \sim P}[f(x)] - \mathbb{E}_{x \sim Q}[f(x)]   (15)
13 Fisher GAN
The generator g_θ minimizes the Fisher IPM: \min_{g_\theta} d_{\mathcal{F}_p}(P_X, P_\theta). Given samples {x_i, i = 1, …, N} from P_X and samples {z_j, j = 1, …, M} from p(z), we solve the following empirical problem:
\min_{g_\theta} \sup_{f_p \in \mathcal{F}_p} \hat{\mathcal{E}}(f_p, g_\theta) := \frac{1}{N} \sum_{i=1}^{N} f_p(x_i) - \frac{1}{M} \sum_{j=1}^{M} f_p(g_\theta(z_j))   (16)
subject to
\hat{\Omega}(f_p, g_\theta) = \frac{1}{2N} \sum_{i=1}^{N} f_p^2(x_i) + \frac{1}{2M} \sum_{j=1}^{M} f_p^2(g_\theta(z_j)) = 1   (17)
Fisher GAN with the Augmented Lagrangian Method (ALM):
\mathcal{L}_F(p, \theta, \lambda) = \hat{\mathcal{E}}(f_p, g_\theta) + \lambda (1 - \hat{\Omega}(f_p, g_\theta)) - \frac{\rho}{2} (\hat{\Omega}(f_p, g_\theta) - 1)^2   (18)
where λ is the Lagrange multiplier and ρ > 0 (a hyper-parameter) is the quadratic penalty weight.
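A minimal PyTorch sketch of the augmented Lagrangian of Eq. (18), computed from critic outputs on a real and a fake batch; the names are ours, and the multiplier update shown in the comment is the standard ALM step (descending L_F in λ), which we assume here rather than take from the slides.

```python
import torch

def fisher_alm_loss(f_real, f_fake, lam, rho):
    """L_F of Eq. (18) from f_p(x_i) (f_real) and f_p(g_theta(z_j)) (f_fake)."""
    E_hat = f_real.mean() - f_fake.mean()                            # Eq. (16)
    Omega_hat = 0.5 * (f_real.pow(2).mean() + f_fake.pow(2).mean())  # Eq. (17)
    L_F = E_hat + lam * (1.0 - Omega_hat) - 0.5 * rho * (Omega_hat - 1.0) ** 2
    return L_F, Omega_hat

# The critic ascends L_F and the generator is typically updated by descending
# E_hat; after each critic step a standard ALM multiplier step (an assumption
# here) is: lam <- lam - rho * (1 - Omega_hat).
```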
14 Fisher GAN
15 Fisher IPM Interpretations
A whitened mean matching interpretation. Consider the function space F_{v,ω}:
\mathcal{F}_{v,\omega} = \{ f(x) = \langle v, \Phi_\omega(x) \rangle \mid v \in \mathbb{R}^m,\; \Phi_\omega : \mathcal{X} \to \mathbb{R}^m \}
With the mean and covariance feature embeddings as in McGan,
\mu_\omega(P) = \mathbb{E}_{x \sim P}[\Phi_\omega(x)], \quad \Sigma_\omega(P) = \mathbb{E}_{x \sim P}[\Phi_\omega(x) \Phi_\omega(x)^\top]
the Fisher IPM on F_{v,ω} can be written as
d_{\mathcal{F}_{v,\omega}}(P, Q) = \max_\omega \max_v \frac{\langle v, \mu_\omega(P) - \mu_\omega(Q) \rangle}{\sqrt{v^\top \left( \tfrac{1}{2}\Sigma_\omega(P) + \tfrac{1}{2}\Sigma_\omega(Q) + \gamma I_m \right) v}}   (19)
i.e. mean matching with a Mahalanobis distance:
d_{\mathcal{F}_{v,\omega}}(P, Q) = \max_\omega \sqrt{(\mu_\omega(P) - \mu_\omega(Q))^\top \Sigma_\omega^{-1}(P; Q)\, (\mu_\omega(P) - \mu_\omega(Q))}
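The inner maximization over v in Eq. (19) has a closed form, giving the Mahalanobis expression directly. A NumPy sketch on sampled embeddings (function name and the default γ are assumptions):

```python
import numpy as np

def fisher_ipm_closed_form(Phi_P, Phi_Q, gamma=1e-3):
    """Eq. (19) maximized over v: Mahalanobis distance between mean embeddings.

    Phi_P, Phi_Q: (n, m) arrays of Phi_omega(x) for x ~ P and x ~ Q.
    """
    mu_p, mu_q = Phi_P.mean(axis=0), Phi_Q.mean(axis=0)
    # Uncentered second moments E[Phi Phi^T], pooled as in Sigma_omega(P; Q)
    Sigma = 0.5 * (Phi_P.T @ Phi_P / len(Phi_P) + Phi_Q.T @ Phi_Q / len(Phi_Q))
    Sigma += gamma * np.eye(Sigma.shape[0])    # gamma * I_m regularizer
    d = mu_p - mu_q
    return np.sqrt(d @ np.linalg.solve(Sigma, d))
```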
16 Fisher IPM Interpretations
Figure: Illustration of the Fisher IPM with neural networks; real samples x ∼ P and fake samples x ∼ Q are mapped to Φ_ω(x) ∈ R^m. Φ_ω is a convolutional neural network which defines the embedding space. v is the direction in this embedding space with maximal mean separation ⟨v, μ_ω(P) − μ_ω(Q)⟩, constrained by the hyperellipsoid v^⊤ Σ_ω(P; Q) v = 1.
17 Fisher GAN Theory
Theorem (Chi-squared distance at full capacity). Consider the Fisher IPM for F being the space of all measurable functions endowed with the measure (P + Q)/2, i.e. \mathcal{F} := L_2(\mathcal{X}, \frac{P+Q}{2}). Define the Chi-squared distance between two distributions:
\chi_2(P, Q) = \sqrt{\int_{\mathcal{X}} \frac{(P(x) - Q(x))^2}{\frac{P(x) + Q(x)}{2}}\, dx}   (20)
The following holds for any P, Q with P ≠ Q:
1) The Fisher IPM for \mathcal{F} = L_2(\mathcal{X}, \frac{P+Q}{2}) is equal to the Chi-squared distance defined above: d_{\mathcal{F}}(P, Q) = \chi_2(P, Q).
2) The optimal critic of the Fisher IPM on L_2(\mathcal{X}, \frac{P+Q}{2}) is
f_\chi(x) = \frac{1}{\chi_2(P, Q)} \cdot \frac{P(x) - Q(x)}{\frac{P(x) + Q(x)}{2}}
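For discrete distributions the theorem can be checked directly. This NumPy snippet computes χ₂(P, Q) of Eq. (20) and verifies that the stated optimal critic attains it while satisfying the constraint (a sanity check we added, not from the slides):

```python
import numpy as np

def chi2_distance(p, q):
    """Chi-squared distance of Eq. (20) for discrete p, q (each sums to 1)."""
    m = 0.5 * (p + q)
    mask = m > 0
    return np.sqrt((((p - q) ** 2)[mask] / m[mask]).sum())

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.2, 0.3, 0.5])
chi = chi2_distance(p, q)
f = (p - q) / (0.5 * (p + q)) / chi          # the optimal critic f_chi
E_hat = (f * (p - q)).sum()                  # E_P[f] - E_Q[f]: equals chi
Omega = 0.5 * ((f ** 2 * p).sum() + (f ** 2 * q).sum())   # equals 1
print(chi, E_hat, Omega)
```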
18 Fisher GAN Experiments I
Figure: Samples and plots of the loss \hat{\mathcal{E}}(\cdot), the Lagrange multiplier λ, and the constraint \hat{\Omega}(\cdot) over generator iterations on three benchmark datasets: (a) LSUN, (b) CelebA, (c) CIFAR-10. We see that during training, as λ grows slowly, the constraint becomes tight.
19 Fisher GAN Experiments II
Table: CIFAR-10 inception scores; Layer Normalization (LN) with ResNets.
Unsupervised: ALI 5.34 ± .05; BEGAN 5.62; DCGAN 6.16 ± .07; Improved GAN 6.86 ± .06; EGAN-Ent-VI 7.07 ± .10; DFM 7.72 ± .13; WGAN-GP ResNet 7.86 ± .07; Fisher GAN ResNet 7.90 ± .05.
Supervised: SteinGAN 6.35; DCGAN (with labels) 6.58; Improved GAN 8.09 ± .07; Fisher GAN ResNet 8.16 ± .12; AC-GAN 8.25 ± .07; SGAN-no-joint 8.37 ± .08; WGAN-GP ResNet 8.42 ± .10; SGAN 8.59 ± .12.
20 Conclusion
Table: Comparison of GANs (Standard GAN; WGAN, McGan; WGAN-GP; MMD GAN; Fisher GAN) along the criteria: stability, unconstrained capacity, efficient computation, and representation power (SSL).