Adversarially Robust Optimization and Generalization
1 Adversarially Robust Optimization and Generalization. Ludwig Schmidt, MIT / UC Berkeley. Based on joint works with Logan Engstrom (MIT), Aleksander Madry (MIT), Aleksandar Makelov (MIT), Dimitris Tsipras (MIT), Kunal Talwar (Google), and Adrian Vladu (Boston University).
2-3 Recent Progress in ML: Have we really achieved human-level performance?
4-5 Lack of Robustness: Adversarial Examples [Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, Rob Fergus, 2014] [Athalye, Engstrom, Ilyas, Kwok, 2017]. Even simple translations and rotations suffice (shifts by <10% of the pixels, rotations by <30°): CIFAR10 accuracy drops from 93% to 8%, ImageNet from 76% to 31% [Engstrom, Tsipras, Schmidt, Madry, 2017].
6-8 Standard vs. Adversarially Robust Generalization. Standard generalization asks for small expected loss, $\mathbb{E}_{(x,y)\sim D}[\mathrm{loss}(x, y, \theta)]$. Adversarially robust generalization asks for small expected worst-case loss over a perturbation set $P$, $\mathbb{E}_{(x,y)\sim D}\big[\max_{x' \in P(x)} \mathrm{loss}(x', y, \theta)\big]$. The perturbation set can encode rotations, translations, small $\ell_\infty$ perturbations, etc. What is the right set of perturbations? This talk: assume the set $P$ is given.
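A common concrete choice of perturbation set in the talk is the $\ell_\infty$ ball of radius $\varepsilon$ around the input. As an illustrative numpy sketch (the function names are mine, not the talk's), membership testing and projection for this set reduce to coordinate-wise clipping:

```python
import numpy as np

def in_perturbation_set(x_prime, x, eps):
    """Check x' in P(x) for the l_inf ball P(x) = {x' : ||x' - x||_inf <= eps}."""
    return np.max(np.abs(x_prime - x)) <= eps

def project_linf(x_prime, x, eps):
    """Project an arbitrary point onto P(x); for the l_inf ball this is
    coordinate-wise clipping to the box [x - eps, x + eps]."""
    return np.clip(x_prime, x - eps, x + eps)
```

This clipping step is what the projected gradient attacks later in the talk use after each gradient step.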
9 Long History
10-11 Why This Guarantee? If a classifier satisfies this property (small worst-case loss over all of $P$), we avoid arms races such as: JSMA, then Defensive Distillation, then Tuned JSMA [Papernot et al. '15], [Papernot et al. '16], [Carlini et al. '17]; FGSM, then Feature Squeezing and Ensembles, then tuned Lagrangian attacks [Goodfellow et al. '15], [Abbasi et al. '17], [Xu et al. '17], [He et al. '17].
12-14 How Can We Get There? Standard image classifiers do not satisfy this property. How does robustness affect optimization and sample complexity?
15-19 Robust Optimization [Madry, Makelov, Schmidt, Tsipras, Vladu, 2017]. Main problem: $\min_\theta \mathbb{E}_{(x,y)\sim D}\big[\max_{x' \in P(x)} \mathrm{loss}(x', y, \theta)\big]$. Convert to empirical risk: $\min_\theta \sum_{i=1}^{n} \max_{x' \in P(x_i)} \mathrm{loss}(x', y_i, \theta)$. For the outer min: run SGD. How do we get gradients for the inner max?
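The answer the talk develops is projected gradient ascent (PGD) on the input. A minimal sketch, assuming a linear model with logistic loss and an $\ell_\infty$ perturbation set (the model, step size, and iteration count are illustrative choices, not the talk's exact settings):

```python
import numpy as np

def logistic_loss(x, y, w):
    # loss(x, y; w) = log(1 + exp(-y * <w, x>)) for labels y in {-1, +1}
    return np.log1p(np.exp(-y * w.dot(x)))

def pgd_inner_max(x, y, w, eps=0.3, step=0.05, n_steps=100):
    """Approximate max_{x' in P(x)} loss(x', y; w) over the l_inf ball
    P(x) = {x' : ||x' - x||_inf <= eps} by projected gradient ascent."""
    x_adv = x.copy()
    for _ in range(n_steps):
        # gradient of the logistic loss with respect to the *input*
        grad_x = -y * w / (1.0 + np.exp(y * w.dot(x_adv)))
        # signed ascent step (natural for l_inf geometry), then project back
        x_adv = np.clip(x_adv + step * np.sign(grad_x), x - eps, x + eps)
    return x_adv
```

For a linear model the inner maximum is attained at a corner of the ball, $x - \varepsilon\, y\, \mathrm{sign}(w)$, which the loop recovers; for deep networks the same loop runs on back-propagated input gradients.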
20-22 Good Gradients = Good Attacks. Danskin's Theorem (simplified, but it holds for non-convex losses): let $\phi_x(\theta) = \max_{x' \in P(x)} \mathrm{loss}(x', \theta)$ and let $x^\star$ be a constrained maximizer of $\mathrm{loss}(\cdot, \theta)$. Then $\nabla_\theta \phi_x(\theta) = \nabla_\theta \mathrm{loss}(x^\star, \theta)$. Overall algorithm: adversarial training, a principled approach to the min-max problem. Crucial point: we need to find the best possible attack.
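Put together, adversarial training alternates the two steps: run an attack to (approximately) solve the inner max, then, justified by Danskin's theorem, take an SGD step on the parameters evaluated at the maximizer found. A self-contained sketch in the same illustrative linear/logistic setup (all hyperparameters here are assumptions for the demo):

```python
import numpy as np

def adversarial_train(X, y, eps=0.1, lr=0.1, epochs=30,
                      pgd_steps=10, pgd_step=0.05, seed=0):
    """Adversarial training sketch: SGD on the robust empirical risk
    min_w sum_i max_{x' in P(x_i)} loss(x', y_i; w) with logistic loss."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(n):
            # inner max: PGD on the input within the l_inf ball around X[i]
            x_adv = X[i].copy()
            for _ in range(pgd_steps):
                g_x = -y[i] * w / (1.0 + np.exp(y[i] * w.dot(x_adv)))
                x_adv = np.clip(x_adv + pgd_step * np.sign(g_x),
                                X[i] - eps, X[i] + eps)
            # outer min: by Danskin, the robust gradient is the
            # ordinary loss gradient evaluated at the maximizer x_adv
            g_w = -y[i] * x_adv / (1.0 + np.exp(y[i] * w.dot(x_adv)))
            w -= lr * g_w
    return w
```

On data whose margin exceeds $\varepsilon$, the learned classifier is correct even at the worst-case point in each ball, which for a linear model shifts the margin by $\varepsilon \lVert w \rVert_1$.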
23-24 Is There Any Hope? The inner problem is non-concave maximization. Attacks compared: FGSM (a single gradient step), PGD (100 steps with η = 0.3), Transfer FGSM, and Transfer PGD. The gap between FGSM and PGD explains the failure of FGSM-based defenses.
25 Loss Landscape. Exploring the loss surface with randomly restarted PGD (100k trials) shows many local maxima, but the loss values concentrate.
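The restart experiment is easy to sketch: run PGD from many uniformly random starting points inside the ball and record the loss each run reaches. Again a numpy sketch under the same illustrative linear/logistic assumptions (for a linear model every restart reaches the same corner, so the losses concentrate exactly; the slide's point is that for deep networks the distinct local maxima still have similar loss values):

```python
import numpy as np

def logistic_loss(x, y, w):
    # logistic loss for labels y in {-1, +1}
    return np.log1p(np.exp(-y * w.dot(x)))

def restarted_pgd_losses(x, y, w, eps=0.3, step=0.05,
                         n_steps=100, n_restarts=50, seed=0):
    """Randomly restarted PGD: each restart begins at a uniform random point
    in the l_inf ball around x; return the loss each run ends at."""
    rng = np.random.default_rng(seed)
    losses = []
    for _ in range(n_restarts):
        x_adv = x + rng.uniform(-eps, eps, size=x.shape)  # random start
        for _ in range(n_steps):
            g = -y * w / (1.0 + np.exp(y * w.dot(x_adv)))
            x_adv = np.clip(x_adv + step * np.sign(g), x - eps, x + eps)
        losses.append(logistic_loss(x_adv, y, w))
    return np.array(losses)
```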
26-27 Results: Robust Classifiers? MNIST (ε = 0.3): 90% accuracy vs. white-box attacks, 93% vs. black-box. CIFAR10 (ε = 8, pixel values in [0, 255]): 46% accuracy vs. white-box, 63% vs. black-box. Public attack challenges have been running since June (see GitHub). Top black-box attacks against the MNIST model: 92.8% ("Generating Adversarial Examples with Adversarial Networks") and 93.5% (PGD against three copies of the network, Florian Tramèr).
28-29 What About CIFAR10? Optimization succeeds, but the model overfits on CIFAR10: 100% adversarial accuracy on the training set, but only 48% on the test set.
30 Robust Generalization. Does robustness require more data? Theorem (informal): there is a distribution over points in $\mathbb{R}^d$ with the following property: learning an $\ell_\infty$-robust linear classifier for this distribution requires $\sqrt{d}$ times more samples than learning a non-robust classifier.
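For context, the construction behind this theorem (from "Adversarially Robust Generalization Requires More Data", Schmidt, Santurkar, Tsipras, Talwar, Mądry, 2018) can be sketched as follows; constants and the exact scaling of the noise level are omitted here, so treat this as a summary rather than the formal statement:

```latex
% Data model: uniform label, shifted spherical Gaussian features.
y \sim \mathrm{Unif}\{-1,+1\}, \qquad
x \mid y \;\sim\; \mathcal{N}\!\left(y\,\mu^{\star},\ \sigma^{2} I_{d}\right).
% Standard generalization: a single sample already yields a linear classifier
% with high accuracy. Robust generalization: achieving nontrivial
% \ell_\infty-robust accuracy at radius \varepsilon requires
n \;=\; \Omega\!\left(\frac{\varepsilon^{2}\sqrt{d}}{\log d}\right)
\ \text{samples, for any learning algorithm.}
```

The gap is information-theoretic: it holds regardless of the training algorithm or model class used.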
31 Conclusions. Robust generalization is a prerequisite for secure ML. Adversarial training (a.k.a. robust optimization) with strong enough attacks is a principled defense. Optimization is only half of the picture: we also need to take care of adversarially robust generalization.
32 Questions. What robustness guarantees should ML-based systems provide? Are there trade-offs between robust and standard generalization? What compromises in mathematical rigor are acceptable? How can we verify ML-based systems?
More information