Wasserstein GAN. Juho Lee. Jan 23, 2017
1 Wasserstein GAN Juho Lee Jan 23, 2017
2 Wasserstein GAN (WGAN)
arXiv submission by Martin Arjovsky, Soumith Chintala, and Léon Bottou.
- A new GAN model that minimizes the Earth-Mover's distance (Wasserstein-1 distance).
- Stabilizes GAN training with far less mode collapse.
- Provides meaningful learning curves that are useful for debugging.
3 Towards Principled Methods for Training Generative Adversarial Networks
ICLR 2017 (oral), Martin Arjovsky and Léon Bottou.
- Why do generator updates get worse as the discriminator gets better?
- Why is GAN training massively unstable?
- What is the impact of the $-\log D(G(z))$ trick; is it still following the JSD?
4 Learning probability distributions
Given a set of observations $\{x_i\}_{i=1}^n$, assume a model distribution $P_\theta$ from a parametric family.
Select a distance measure $\rho(P_\theta, P_r)$ between the model distribution and the real distribution $P_r$.
Convergence: as $t \to \infty$, $\theta_t \to \theta$ should give $P_{\theta_t} \to P_\theta$, i.e. $\rho(P_{\theta_t}, P_\theta) \to 0$.
Desirable condition: the mapping $\theta \mapsto \rho(P_r, P_\theta)$ is continuous.
5 Distances between probability distributions I
Let $(\mathcal{X}, \Sigma)$ be a measurable space, where $\mathcal{X}$ is a compact metric set and $\Sigma$ is its Borel $\sigma$-algebra.
The Total Variation (TV) distance:
$$\delta(P_r, P_\theta) = \sup_{A \in \Sigma} |P_r(A) - P_\theta(A)|.$$
The Kullback-Leibler (KL) divergence:
$$KL(P_r \| P_\theta) = \int \log\!\left(\frac{P_r(x)}{P_\theta(x)}\right) P_r(x)\, d\mu(x),$$
where both $P_r$ and $P_\theta$ are assumed to be absolutely continuous, and therefore admit densities, with respect to a same measure $\mu$ on $\mathcal{X}$.
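Both quantities are easy to evaluate when $P_r$ and $P_\theta$ are discrete distributions on a common finite support. A minimal NumPy sketch (the two distributions p and q are made-up examples, not from the slides):

```python
import numpy as np

# Two hypothetical discrete distributions on the same 4-point support.
p = np.array([0.1, 0.4, 0.4, 0.1])      # "real" distribution P_r
q = np.array([0.25, 0.25, 0.25, 0.25])  # model distribution P_theta

# Total Variation distance: sup_A |P_r(A) - P_theta(A)|.
# For discrete distributions this equals half the L1 distance.
tv = 0.5 * np.abs(p - q).sum()

# KL divergence: sum_x p(x) log(p(x) / q(x)); needs q(x) > 0 wherever p(x) > 0.
kl = np.sum(np.where(p > 0, p * np.log(p / q), 0.0))

print(f"TV(p, q) = {tv:.4f}")
print(f"KL(p || q) = {kl:.4f}")
```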
6 Distances between probability distributions II
The Jensen-Shannon (JS) divergence:
$$JS(P_r, P_\theta) = \frac{1}{2} KL(P_r \| P_m) + \frac{1}{2} KL(P_\theta \| P_m), \quad \text{where } P_m := (P_r + P_\theta)/2.$$
The Earth-Mover's (EM) distance or Wasserstein-1 distance:
$$W(P_r, P_\theta) = \inf_{\gamma \in \Pi(P_r, P_\theta)} \mathbb{E}_{(x,y)\sim\gamma}[\|x - y\|],$$
where $\Pi(P_r, P_\theta)$ denotes the set of all joint distributions $\gamma(x, y)$ whose marginals are respectively $P_r$ and $P_\theta$.
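Continuing the same discrete sketch, the JS divergence follows directly from the KL definition, and for one-dimensional distributions SciPy ships an exact Wasserstein-1 routine (the support points xs are illustrative):

```python
import numpy as np
from scipy.stats import wasserstein_distance

xs = np.array([0.0, 1.0, 2.0, 3.0])      # common support (hypothetical)
p = np.array([0.1, 0.4, 0.4, 0.1])
q = np.array([0.25, 0.25, 0.25, 0.25])

def kl(a, b):
    """KL(a || b) for discrete distributions with b > 0 wherever a > 0."""
    return np.sum(np.where(a > 0, a * np.log(a / b), 0.0))

# Jensen-Shannon divergence via the mixture P_m = (p + q) / 2.
m = 0.5 * (p + q)
js = 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Earth-Mover's / Wasserstein-1 distance (exact in 1-D).
em = wasserstein_distance(xs, xs, u_weights=p, v_weights=q)

print(f"JS(p, q) = {js:.4f}")
print(f"W1(p, q) = {em:.4f}")
```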
7 Distances between probability distributions III
Example (learning parallel lines): let $Z \sim \mathrm{Unif}([0, 1])$, let $P_0$ be the distribution of $(0, Z)$ and $P_\theta$ the distribution of $(\theta, Z)$. Then
$$KL(P_\theta \| P_0) = \begin{cases} +\infty & \text{if } \theta \neq 0 \\ 0 & \text{if } \theta = 0, \end{cases} \qquad JS(P_0, P_\theta) = \begin{cases} \log 2 & \text{if } \theta \neq 0 \\ 0 & \text{if } \theta = 0, \end{cases}$$
$$\delta(P_0, P_\theta) = \begin{cases} 1 & \text{if } \theta \neq 0 \\ 0 & \text{if } \theta = 0, \end{cases} \qquad W(P_0, P_\theta) = |\theta|.$$
Only the EM distance varies continuously with $\theta$.
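The EM value in this example can be checked empirically: for two equal-sized samples, the Wasserstein-1 distance is the average cost of the best one-to-one matching, which scipy's linear_sum_assignment solves exactly. A sketch (the sample size n and the value of theta are arbitrary choices):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n, theta = 500, 0.7

z0, z1 = rng.uniform(0, 1, n), rng.uniform(0, 1, n)
P0 = np.stack([np.zeros(n), z0], axis=1)            # samples of (0, Z)
Ptheta = np.stack([np.full(n, theta), z1], axis=1)  # samples of (theta, Z)

# Pairwise Euclidean costs, then the optimal matching (empirical EM distance).
cost = np.linalg.norm(P0[:, None, :] - Ptheta[None, :, :], axis=-1)
row, col = linear_sum_assignment(cost)
em_hat = cost[row, col].mean()

print(f"empirical W(P0, Ptheta) = {em_hat:.3f}  (theory: |theta| = {abs(theta):.3f})")
```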
8 Instability of GAN I
Original objective function:
$$L(D, g_\theta) = \mathbb{E}_{x\sim P_r}[\log D(x)] + \mathbb{E}_{x\sim P_g}[\log(1 - D(x))].$$
The optimal discriminator is
$$D^*(x) = \frac{P_r(x)}{P_r(x) + P_g(x)},$$
and at this optimum $L(D^*, g_\theta) = 2\,JS(P_r, P_g) - 2\log 2$.
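The identity $L(D^*, g_\theta) = 2\,JS(P_r, P_g) - 2\log 2$ can be sanity-checked on discrete distributions by plugging the optimal discriminator into the objective. A minimal sketch (the two distributions are again illustrative):

```python
import numpy as np

p_r = np.array([0.1, 0.4, 0.4, 0.1])   # real distribution P_r
p_g = np.array([0.4, 0.1, 0.1, 0.4])   # generator distribution P_g

# Optimal discriminator D*(x) = P_r(x) / (P_r(x) + P_g(x)).
d_star = p_r / (p_r + p_g)

# GAN objective at D*: E_{P_r}[log D*] + E_{P_g}[log(1 - D*)].
L = np.sum(p_r * np.log(d_star)) + np.sum(p_g * np.log(1 - d_star))

def kl(a, b):
    return np.sum(np.where(a > 0, a * np.log(a / b), 0.0))

# 2 * JS(P_r, P_g) - 2 log 2, with JS built from KL against the mixture.
m = 0.5 * (p_r + p_g)
js = 0.5 * kl(p_r, m) + 0.5 * kl(p_g, m)

print(f"L(D*, g)      = {L:.6f}")
print(f"2*JS - 2*log2 = {2 * js - 2 * np.log(2):.6f}")  # should match
```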
9 Instability of GAN II
Theorem 1. Let $P_r$ and $P_g$ be two distributions whose supports are contained in two closed manifolds $\mathcal{M}$ and $\mathcal{P}$ that don't perfectly align and don't have full dimension. We further assume that $P_r$ and $P_g$ are continuous in their respective manifolds, meaning that if there is a set $A$ with measure 0 in $\mathcal{M}$, then $P_r(A) = 0$ (and analogously for $P_g$). Then, there exists an optimal discriminator $D^*: \mathcal{X} \to [0, 1]$ that has accuracy 1, and for almost any $x$ in $\mathcal{M} \cup \mathcal{P}$, $D^*$ is smooth in a neighbourhood of $x$ and $\nabla_x D^*(x) = 0$.
10 Instability of GAN III
Theorem 2 (Vanishing gradients on the generator). Let $g_\theta: \mathcal{Z} \to \mathcal{X}$ be a differentiable function that induces a distribution $P_g$. If some conditions are satisfied, $\|D - D^*\| < \epsilon$, and $\mathbb{E}_{z\sim p(z)}[\|J_\theta g_\theta(z)\|_2^2] \le M^2$, then
$$\left\|\nabla_\theta \mathbb{E}_{z\sim p(z)}[\log(1 - D(g_\theta(z)))]\right\|_2 < M \frac{\epsilon}{1 - \epsilon}.$$
That is, as the discriminator approaches the optimal $D^*$, the generator gradient vanishes.
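A one-parameter illustration of this, under assumptions that are not in the slides: take a point-mass generator $g_\theta(z) = \theta$ with $\theta = -1$, real data at $x = +1$, and a sigmoid discriminator $D_a(x) = \sigma(a x)$ whose sharpness $a$ controls how close it is to the optimal discriminator. The generator gradient through $\log(1 - D)$ has a closed form and shrinks to zero as $a$ grows:

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

theta = -1.0  # point-mass generator output: all fake samples sit at x = -1

for a in [1.0, 5.0, 10.0, 20.0, 40.0]:
    # Generator loss E_z[log(1 - D(g_theta(z)))] = log(1 - sigmoid(a * theta)).
    # Closed-form derivative: d/dtheta = -a * sigmoid(a * theta).
    grad = -a * sigmoid(a * theta)
    print(f"a = {a:5.1f} (sharper D)   dL/dtheta = {grad: .2e}")

# As a grows the discriminator becomes near-perfect and the gradient -> 0,
# which is the vanishing-gradient phenomenon described by Theorem 2.
```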
11 Instability of GAN IV
12 Instability of GAN V
13 The log D trick I
For the generator, instead of minimizing $\mathbb{E}_{z\sim p(z)}[\log(1 - D(g_\theta(z)))]$, minimize $-\mathbb{E}_{z\sim p(z)}[\log D(g_\theta(z))]$. This does not change the fixed points.
Theorem 3. Let $D^* = \frac{P_r}{P_r + P_g}$ be the optimal discriminator for a fixed $\theta = \theta_0$. Then
$$\mathbb{E}_{z\sim p(z)}\!\left[-\nabla_\theta \log D^*(g_\theta(z))\big|_{\theta=\theta_0}\right] = \nabla_\theta \left[KL(P_{g_\theta} \| P_r) - 2\,JS(P_{g_\theta}, P_r)\right]\big|_{\theta=\theta_0}.$$
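Under the same toy setup as before (point-mass generator at $\theta = -1$ and sigmoid discriminator $D_a(x) = \sigma(a x)$, both illustrative assumptions), the $-\log D$ loss does not suffer from the vanishing gradient: as the discriminator sharpens, its gradient magnitude grows instead of shrinking, which is why the trick keeps training moving but can produce large, unstable updates (cf. Theorem 4):

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

theta = -1.0  # point-mass generator output

for a in [1.0, 5.0, 10.0, 20.0, 40.0]:
    # Saturating loss:   d/dtheta log(1 - sigmoid(a*theta)) = -a * sigmoid(a*theta)
    grad_saturating = -a * sigmoid(a * theta)
    # -log D trick loss: d/dtheta [-log sigmoid(a*theta)] = -a * (1 - sigmoid(a*theta))
    grad_trick = -a * (1.0 - sigmoid(a * theta))
    print(f"a = {a:5.1f}   log(1-D): {grad_saturating: .2e}   -log D: {grad_trick: .2e}")
```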
14 The log D trick II
Theorem 4. (Under some conditions) the gradient $\nabla_\theta\, \mathbb{E}_{z\sim p(z)}[-\log D(g_\theta(z))]$ follows a centered Cauchy distribution with infinite expectation and variance, so the generator updates become noisy and unstable.
15 Why should we use Wasserstein distance I
Theorem 5. Let $P_r$ be a fixed distribution over $\mathcal{X}$. Let $Z$ be a random variable over another space $\mathcal{Z}$. Let $g: \mathcal{Z} \times \mathbb{R}^d \to \mathcal{X}$ be a function, denoted $g_\theta(z)$. Let $P_\theta$ denote the distribution of $g_\theta(Z)$. Then,
1. If $g$ is continuous in $\theta$, so is $W(P_r, P_\theta)$.
2. If $g$ is locally Lipschitz and satisfies regularity assumption 1, then $W(P_r, P_\theta)$ is continuous everywhere and differentiable almost everywhere.
Statements 1 and 2 are false for the Jensen-Shannon and KL divergences.
Corollary: if $g_\theta$ is any feedforward neural network parametrized by $\theta$, and $p(z)$ is a prior over $z$ with $\mathbb{E}_{z\sim p(z)}[\|z\|] < \infty$, then regularity assumption 1 is satisfied.
16 Why should we use Wasserstein distance II
Theorem 6. Let $P$ be a distribution on a compact space $\mathcal{X}$ and $(P_n)_{n \in \mathbb{N}}$ be a sequence of distributions on $\mathcal{X}$. Then, as $n \to \infty$,
1. $\delta(P_n, P) \to 0$ and $JS(P_n, P) \to 0$ are equivalent.
2. $W(P_n, P) \to 0$ and $P_n \xrightarrow{D} P$ (convergence in distribution) are equivalent.
3. $KL(P_n \| P) \to 0$ or $KL(P \| P_n) \to 0$ implies the statements in 1.
4. The statements in 1 imply the statements in 2.
17 Why should we use Wasserstein distance III
18 Approximating the Earth-Mover's distance
By the Kantorovich-Rubinstein duality [1],
$$W(P_r, P_\theta) = \sup_{\|f\|_L \le 1} \mathbb{E}_{x\sim P_r}[f(x)] - \mathbb{E}_{x\sim P_\theta}[f(x)],$$
where the supremum is over all 1-Lipschitz functions $f: \mathcal{X} \to \mathbb{R}$. 1-Lipschitz can be replaced by $K$-Lipschitz, which yields $K \cdot W(P_r, P_\theta)$.
Theorem 7. Let $P_r$ be any distribution, and let $P_\theta$ be the distribution of $g_\theta(Z)$ satisfying assumption 1. Then there exists a solution $f: \mathcal{X} \to \mathbb{R}$ to the problem
$$\max_{\|f\|_L \le 1} \mathbb{E}_{x\sim P_r}[f(x)] - \mathbb{E}_{x\sim P_\theta}[f(x)],$$
and we have
$$\nabla_\theta W(P_r, P_\theta) = -\mathbb{E}_{z\sim p(z)}[\nabla_\theta f(g_\theta(z))]$$
when both terms are well-defined.
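The duality can be verified numerically on a small discrete example: the dual is a linear program over the values $f(x_i)$ with Lipschitz constraints $|f(x_i) - f(x_j)| \le |x_i - x_j|$, and in 1-D its optimum should match SciPy's exact Wasserstein-1 routine. A sketch (support points and weights are made up):

```python
import numpy as np
from scipy.optimize import linprog
from scipy.stats import wasserstein_distance

xs = np.array([0.0, 1.0, 2.0, 3.0])
p = np.array([0.1, 0.4, 0.4, 0.1])
q = np.array([0.25, 0.25, 0.25, 0.25])
n = len(xs)

# Dual problem: maximize sum_i f_i * (p_i - q_i)
# subject to f_i - f_j <= |x_i - x_j| for all i != j (1-Lipschitz on the support).
c = -(p - q)                       # linprog minimizes, so negate the objective
rows, b = [], []
for i in range(n):
    for j in range(n):
        if i != j:
            row = np.zeros(n)
            row[i], row[j] = 1.0, -1.0
            rows.append(row)
            b.append(abs(xs[i] - xs[j]))
res = linprog(c, A_ub=np.array(rows), b_ub=np.array(b),
              bounds=[(None, None)] * n, method="highs")

w_dual = -res.fun
w_exact = wasserstein_distance(xs, xs, u_weights=p, v_weights=q)
print(f"dual LP value:  {w_dual:.6f}")
print(f"scipy W1 value: {w_exact:.6f}")   # should agree
```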
19 WGAN algorithm
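This slide refers to Algorithm 1 of the paper. Below is a minimal PyTorch-style sketch of that training loop; the hyperparameters (clip value 0.01, n_critic = 5, RMSProp with learning rate 5e-5) follow the paper, while the toy 1-D data and network sizes are my own assumptions:

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 8, 1
G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
f = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, 1))  # critic, no sigmoid

opt_G = torch.optim.RMSprop(G.parameters(), lr=5e-5)
opt_f = torch.optim.RMSprop(f.parameters(), lr=5e-5)
clip, n_critic, batch = 0.01, 5, 64

def sample_real(n):
    return 3.0 + 0.5 * torch.randn(n, data_dim)   # toy "real" data: N(3, 0.5^2)

for step in range(1000):
    # --- critic updates: maximize E_r[f(x)] - E_g[f(g(z))] ---
    for _ in range(n_critic):
        x = sample_real(batch)
        z = torch.randn(batch, latent_dim)
        loss_f = -(f(x).mean() - f(G(z)).mean())   # ascend the dual objective
        opt_f.zero_grad()
        loss_f.backward()
        opt_f.step()
        for param in f.parameters():               # weight clipping keeps f K-Lipschitz
            param.data.clamp_(-clip, clip)
    # --- generator update: minimize -E_g[f(g(z))] ---
    z = torch.randn(batch, latent_dim)
    loss_G = -f(G(z)).mean()
    opt_G.zero_grad()
    loss_G.backward()
    opt_G.step()
    if step % 200 == 0:
        print(f"step {step}: Wasserstein estimate ~ {-loss_f.item():.3f}")
```

The critic loss (negated) is an estimate of $K \cdot W(P_r, P_\theta)$, which is why WGAN learning curves are meaningful for debugging.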
20 Experiments I
21 Experiments II
22 References
[1] C. Villani. Optimal Transport: Old and New. Grundlehren der mathematischen Wissenschaften. Springer, Berlin, 2009.