© 2017 Shiyu Liang

WHY DEEP NEURAL NETWORKS FOR FUNCTION APPROXIMATION

BY

SHIYU LIANG

THESIS

Submitted in partial fulfillment of the requirements for the degree of Master of Science in Electrical and Computer Engineering in the Graduate College of the University of Illinois at Urbana-Champaign, 2017

Urbana, Illinois

Adviser: Professor R. Srikant

ABSTRACT

Recently there has been much interest in understanding why deep neural networks are preferred to shallow networks. We show that, for a large class of piecewise smooth functions, the number of neurons needed by a shallow network to approximate a function is exponentially larger than the corresponding number of neurons needed by a deep network for a given degree of function approximation. First, we consider univariate functions on a bounded interval and require a neural network to achieve an approximation error of ε uniformly over the interval. We show that shallow networks (i.e., networks whose depth does not depend on ε) require Ω(poly(1/ε)) neurons, while deep networks (i.e., networks whose depth grows with 1/ε) require O(poly(log(1/ε))) neurons. We then extend these results to certain classes of important multivariate functions. Our results are derived for neural networks which use a combination of rectifier linear units (ReLUs) and binary step units, two of the most popular types of activation functions. Our analysis builds on a simple observation: the multiplication of two bits can be represented by a ReLU.

To My Father and Mother

ACKNOWLEDGMENTS

This work would not have been possible without the guidance of my adviser, Prof. R. Srikant, who contributed many valuable ideas to this thesis.

TABLE OF CONTENTS

CHAPTER 1  INTRODUCTION
CHAPTER 2  PRELIMINARIES AND PROBLEM STATEMENT
  2.1  Feedforward Neural Networks
  2.2  Problem Statement
CHAPTER 3  UPPER BOUNDS ON FUNCTION APPROXIMATIONS
  3.1  Approximation of Univariate Functions
  3.2  Approximation of Multivariate Functions
CHAPTER 4  LOWER BOUNDS ON FUNCTION APPROXIMATIONS
CHAPTER 5  CONCLUSIONS
REFERENCES
APPENDIX A  PROOFS
  A.1  Proof of Corollary 5
  A.2  Proof of Corollary 6
  A.3  Proof of Corollary 7
  A.4  Proof of Theorem 8
  A.5  Proof of Theorem 9
  A.6  Proof of Theorem 11
  A.7  Proof of Corollary 12
  A.8  Proof of Corollary 13
  A.9  Proof of Corollary 14

CHAPTER 1

INTRODUCTION

Neural networks have drawn significant interest from the machine learning community, especially due to their recent empirical successes (see the survey [1]). Neural networks are used to build state-of-the-art systems in various applications such as image recognition, speech recognition and natural language processing (see, e.g., [2], [3], [4]). The result that neural networks are universal approximators is one of the theoretical results most frequently cited to justify their use in these applications. Numerous results have established the universal approximation property of neural networks for different function classes (see, e.g., [5], [6], [7], [8], [9], [10], [11]). All these results, and many others, provide upper bounds on the network size and assert that a small approximation error can be achieved if the network size is sufficiently large.

More recently, there has been much interest in understanding the approximation capabilities of deep versus shallow networks. It has been shown that there exist deep sum-product networks which cannot be approximated by shallow sum-product networks unless the latter use an exponentially larger number of units or neurons [12]. It has also been shown that the number of linear regions of a neural network increases exponentially with the number of layers [13]. In [14], the author established a similar separation result for the type of neural networks that are the subject of this thesis. In [15], the authors showed that, to approximate a specific function, a two-layer network requires a number of neurons that is exponential in the input dimension, while a three-layer network requires only a polynomial number of neurons. These recent papers demonstrate the power of deep networks by showing that depth can lead to an exponential reduction in the number of neurons required, for specific functions or specific neural networks. Our goal here is different: we are interested in function approximation specifically, and would like to show that, for a given upper bound on the approximation error, shallow networks require exponentially more neurons than deep networks for a large class of functions.

The multilayer neural networks considered in this thesis are allowed to use either rectifier linear units (ReLUs) or binary step units (BSUs), or any combination of the two. The main contributions of this thesis are:

We show that, for ε-approximation of functions with enough piecewise smoothness, a multilayer neural network with Θ(log(1/ε)) layers needs only O(poly(log(1/ε))) neurons, while Ω(poly(1/ε)) neurons are required by neural networks with o(log(1/ε)) layers. In other words, shallow networks require exponentially more neurons than deep networks to achieve the same level of accuracy in function approximation.

We show that, for all differentiable and strongly convex functions, multilayer neural networks need Ω(log(1/ε)) neurons to achieve an ε-approximation. Thus, our results for deep networks are tight.

The outline of this thesis is as follows. In Chapter 2, we present the necessary definitions and the problem statement. In Chapter 3, we present upper bounds on network size, while the lower bound is provided in Chapter 4. Conclusions are presented in Chapter 5.

CHAPTER 2

PRELIMINARIES AND PROBLEM STATEMENT

In this chapter, we present definitions of feedforward neural networks and formally state the problem.

2.1 Feedforward Neural Networks

A feedforward neural network is composed of layers of computational units and defines a unique function f̃ : R^d → R. Let L denote the number of hidden layers, N_l the number of units in layer l, N = Σ_{l=1}^{L} N_l the size of the neural network, x = (x^(1), ..., x^(d)) the input vector, z_j^l the output of the jth unit in layer l, w_{i,j}^l the weight of the edge connecting unit i in layer l to unit j in layer l + 1, and b_j^l the bias of unit j in layer l. The outputs of successive layers of the feedforward neural network are then characterized by the iteration

z_j^{l+1} = σ( Σ_{i=1}^{N_l} w_{i,j}^l z_i^l + b_j^{l+1} ),  l ∈ [L − 1], j ∈ [N_{l+1}],

with

input layer:  z_j^1 = σ( Σ_{i=1}^{d} w_{i,j}^0 x^(i) + b_j^1 ),  j ∈ [N_1],

output layer:  f̃(x) = σ( Σ_{i=1}^{N_L} w_{i,j}^L z_i^L + b_j^{L+1} ).
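The following is a minimal Python sketch (not part of the thesis) of the layer iteration above. The layer sizes, weights and biases are arbitrary placeholders, and NumPy is assumed only for convenience; it shows how a network built from the two unit types defined below is evaluated layer by layer.

import numpy as np

def relu(x):
    # Rectifier linear unit: sigma(x) = max{0, x}
    return np.maximum(0.0, x)

def binary_step(x):
    # Binary step unit: sigma(x) = I{x >= 0}
    return (x >= 0).astype(float)

def feedforward(x, layers):
    """Evaluate a feedforward network given as a list of (W, b, activation)
    triples, one per layer, following z^{l+1} = sigma(W^T z^l + b)."""
    z = np.asarray(x, dtype=float)
    for W, b, act in layers:
        z = act(W.T @ z + b)
    return z

# Toy example: 2 inputs -> 3 binary step units -> 1 ReLU output (weights arbitrary).
rng = np.random.default_rng(0)
layers = [
    (rng.normal(size=(2, 3)), rng.normal(size=3), binary_step),
    (rng.normal(size=(3, 1)), rng.normal(size=1), relu),
]
print(feedforward([0.3, 0.7], layers))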

Here, σ(·) denotes the activation function and [n] denotes the index set [n] = {1, ..., n}. In this thesis, we consider only two important types of activation functions:

Rectifier linear unit: σ(x) = max{0, x}, x ∈ R.

Binary step unit: σ(x) = I{x ≥ 0}, x ∈ R.

We refer to the number of layers and the number of neurons in the network as the depth and the size of the feedforward neural network, respectively. We use F(N, L) to denote the set of all feedforward neural networks of depth L and size N composed of a combination of rectifier linear units (ReLUs) and binary step units. We say that one feedforward neural network is deeper than another if and only if it has a larger depth. Throughout this thesis, the terms feedforward neural network and multilayer neural network are used interchangeably.

2.2 Problem Statement

In this thesis, we focus on bounds on the size of a feedforward neural network used for function approximation. Given a function f, our goal is to understand whether a multilayer neural network f̃ of depth L and size N exists such that it solves

min_{f̃ ∈ F(N,L)} ‖f − f̃‖_∞ ≤ ε.   (2.1)

Specifically, we aim to answer the following questions:

1. Do there exist L(ε) and N(ε) such that (2.1) is satisfied? We will refer to such L(ε) and N(ε) as upper bounds on the depth and size of the required neural network.

2. Given a fixed depth L, what is the minimum value of N such that (2.1) is satisfied? We will refer to such an N as a lower bound on the size of a neural network of a given depth L.

The first question asks what depth and size are sufficient to guarantee an ε-approximation. The second question asks, for a fixed depth, what is the minimum size of a neural network required to guarantee an ε-approximation. Obviously, tight bounds in the answers to these two questions provide tight bounds on the network size and depth required for function approximation. Moreover, the solutions to these two questions can together be used to answer the following question: if a deeper neural network of size N_d and a shallower neural network of size N_s are used to approximate the same function with the same error ε, how fast does the ratio N_d/N_s decay to zero as the error ε decays to zero?

CHAPTER 3

UPPER BOUNDS ON FUNCTION APPROXIMATIONS

In this chapter, we present upper bounds on the size of the multilayer neural network which are sufficient for function approximation. Before stating the results, some notation and terminology deserve further explanation. First, the upper bound on the network size represents the number of neurons required at most for approximating a given function with a certain error. Second, the notion of approximation is the L∞ distance: for two functions f and g, the L∞ distance between them is the maximum pointwise disagreement over the cube [0, 1]^d.

3.1 Approximation of Univariate Functions

In this section, we present all results on approximating univariate functions. We first present a theorem on the size of the network for approximating a simple quadratic function. As part of the proof, we present the structure of the multilayer feedforward neural network used and show how the neural network parameters are chosen. Results on approximating general functions can be found in Theorems 2 and 4.

Theorem 1. For the function f(x) = x², x ∈ [0, 1], there exists a multilayer neural network f̃(x) with O(log(1/ε)) layers, O(log(1/ε)) binary step units and O(log(1/ε)) rectifier linear units such that |f(x) − f̃(x)| ≤ ε for all x ∈ [0, 1].

Proof. The proof is composed of three parts. For any x ∈ [0, 1], we first use a multilayer neural network to approximate x by its finite binary expansion Σ_{i=0}^{n} x_i/2^i. We then construct a two-layer neural network to implement the function f(Σ_{i=0}^{n} x_i/2^i) = (Σ_{i=0}^{n} x_i/2^i)². Finally, we bound the approximation error of this construction.

[Figure 3.1: An n-layer neural network structure for finding the binary expansion of a number in [0, 1]; each layer uses one binary step unit to extract a bit x_i and an adder to subtract x_i/2^i from the running remainder.]

Each x ∈ [0, 1] can be written as its binary expansion x = Σ_{i=0}^{∞} x_i/2^i, where x_i ∈ {0, 1} for all i ≥ 0. It is straightforward to see that the n-layer neural network shown in Figure 3.1 can be used to find x_0, ..., x_n. Next, we implement the function f̃(x) = f(Σ_{i=0}^{n} x_i/2^i) by a two-layer neural network. Since f(x) = x², we rewrite f̃(x) as follows:

f̃(x) = (Σ_{i=0}^{n} x_i/2^i)² = Σ_{i=0}^{n} (x_i/2^i) Σ_{j=0}^{n} x_j/2^j = Σ_{i=0}^{n} (1/2^i) max(0, 2(x_i − 1) + Σ_{j=0}^{n} x_j/2^j).

The third equality follows from the fact that x_i ∈ {0, 1} for all i. Therefore, the function f̃(x) can be implemented by a multilayer network containing the deep structure shown in Figure 3.1 and another hidden layer with n rectifier linear units. This multilayer neural network has O(n) layers, O(n) binary step units and O(n) rectifier linear units.
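Before bounding the error, here is a small Python sketch (an illustration, not part of the thesis) of the two ingredients of the construction: the bit-extraction structure of Figure 3.1, and the identity x_i · t = max(0, 2(x_i − 1) + t), valid for x_i ∈ {0, 1} and t ∈ [0, 2], which realizes a bit-times-number product with a single ReLU. The comparisons below play the role of binary step units; the numbers are arbitrary test points.

def binary_expansion(x, n):
    """Bit extraction as in Figure 3.1: each layer applies one binary step
    unit to the running remainder and subtracts the extracted bit."""
    bits, remainder = [], x
    for i in range(n + 1):
        b = 1.0 if remainder >= 2.0 ** (-i) else 0.0   # binary step unit
        remainder -= b * 2.0 ** (-i)
        bits.append(b)
    return bits

def approx_square(x, n):
    """Approximate x^2 as in the proof of Theorem 1: one ReLU per bit."""
    bits = binary_expansion(x, n)
    t = sum(b * 2.0 ** (-i) for i, b in enumerate(bits))   # truncated x
    return sum(2.0 ** (-i) * max(0.0, 2.0 * (b - 1.0) + t)  # ReLU layer
               for i, b in enumerate(bits))

for x in [0.1, 0.37, 0.9]:
    print(x, x * x, approx_square(x, n=12))   # error is O(2^{-n})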

Finally, we consider the approximation error of this multilayer neural network:

|f̃(x) − f(x)| = |(Σ_{i=0}^{n} x_i/2^i)² − x²| ≤ 2 |x − Σ_{i=0}^{n} x_i/2^i| = 2 Σ_{i=n+1}^{∞} x_i/2^i ≤ 1/2^{n−1}.

Therefore, in order to achieve an approximation error of ε, one should choose n = ⌈log₂(1/ε)⌉ + 1. In summary, the deep neural network has O(log(1/ε)) layers, O(log(1/ε)) binary step units and O(log(1/ε)) rectifier linear units.

Next, a theorem on the size of the network for approximating general polynomials is given as follows.

Theorem 2. For polynomials f(x) = Σ_{i=0}^{p} a_i x^i, x ∈ [0, 1], with Σ_{i=1}^{p} |a_i| ≤ 1, there exists a multilayer neural network f̃(x) with O(p + log(p/ε)) layers, O(log(p/ε)) binary step units and O(p log(p/ε)) rectifier linear units such that |f(x) − f̃(x)| ≤ ε for all x ∈ [0, 1].

Proof. The proof is composed of three parts. We first use the deep structure shown in Figure 3.1 to find the n-bit binary expansion Σ_{i=0}^{n} x_i/2^i of x. Then we construct a multilayer network to approximate the polynomials g_i(x) = x^i, i = 1, ..., p. Finally, we analyze the approximation error.

Using the deep structure shown in Figure 3.1, we can find the binary expansion sequence {x_0, ..., x_n}; this step uses n binary step units in total. Now we rewrite g_{m+1}(Σ_{i=0}^{n} x_i/2^i) as

g_{m+1}(Σ_{i=0}^{n} x_i/2^i) = Σ_{j=0}^{n} (x_j/2^j) g_m(Σ_{i=0}^{n} x_i/2^i) = Σ_{j=0}^{n} (1/2^j) max(2(x_j − 1) + g_m(Σ_{i=0}^{n} x_i/2^i), 0).   (3.1)

Clearly, equation (3.1) defines an iteration between the outputs of neighboring layers. Therefore, the deep neural network shown in Figure 3.2 can be used to implement the iteration given by (3.1). Further, implementing this network requires O(p) layers with O(pn) rectifier linear units in total.

[Figure 3.2: The implementation of the polynomial function: the bits x_0, ..., x_n feed a chain of layers computing g_1(Σ_i x_i/2^i), g_2(Σ_i x_i/2^i), ..., g_p(Σ_i x_i/2^i).]

We now define the output of the multilayer neural network as f̃(x) = Σ_{i=0}^{p} a_i g_i(Σ_{j=0}^{n} x_j/2^j). For this multilayer network, the approximation error is

|f̃(x) − f(x)| = |Σ_{i=0}^{p} a_i g_i(Σ_{j=0}^{n} x_j/2^j) − Σ_{i=0}^{p} a_i x^i| ≤ Σ_{i=0}^{p} |a_i| |g_i(Σ_{j=0}^{n} x_j/2^j) − x^i| ≤ p/2^{n−1}.

This indicates that, to achieve an ε-approximation error, one should choose n = ⌈log₂(p/ε)⌉ + 1. Since we used O(n + p) layers with O(n) binary step units and O(pn) rectifier linear units in total, this multilayer neural network thus has O(p + log(p/ε)) layers, O(log(p/ε)) binary step units and O(p log(p/ε)) rectifier linear units.

In Theorem 2, we have shown an upper bound on the size of a multilayer neural network for approximating polynomials. We observe that the number of neurons in the network grows as p log(p/ε) with respect to p, the degree of the polynomial. We note that both [16] and [10] showed that the network size grows exponentially with respect to p if only 3-layer neural networks are allowed for approximating polynomials.
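As an illustration of iteration (3.1), the following Python sketch (not part of the thesis) builds the powers g_1, ..., g_p from the extracted bits and assembles a polynomial. The bit-extraction helper repeats the one from the previous sketch so that the block is self-contained, and the coefficients in the example are arbitrary.

def binary_expansion(x, n):
    # Bit extraction as in Figure 3.1 (same helper as in the previous sketch).
    bits, r = [], x
    for i in range(n + 1):
        b = 1.0 if r >= 2.0 ** (-i) else 0.0   # binary step unit
        r -= b * 2.0 ** (-i)
        bits.append(b)
    return bits

def approx_powers(x, p, n):
    """Iteration (3.1): from t = sum_i x_i 2^{-i}, build g_1(t) = t, ...,
    g_p(t) = t^p; each step uses one ReLU per extracted bit."""
    bits = binary_expansion(x, n)
    g = sum(b * 2.0 ** (-i) for i, b in enumerate(bits))   # g_1 = truncated x
    powers = []
    for _ in range(p):
        powers.append(g)
        g = sum(2.0 ** (-j) * max(0.0, 2.0 * (b - 1.0) + g)  # g_{m+1} from g_m
                for j, b in enumerate(bits))
    return powers

def approx_polynomial(x, coeffs, n):
    # f(x) = a_0 + a_1 x + ... + a_p x^p, approximated as in Theorem 2.
    powers = approx_powers(x, len(coeffs) - 1, n)
    return coeffs[0] + sum(a * g for a, g in zip(coeffs[1:], powers))

x = 0.6
print(approx_polynomial(x, [0.1, 0.3, -0.2, 0.4], n=14),
      0.1 + 0.3 * x - 0.2 * x ** 2 + 0.4 * x ** 3)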

Besides, every function f with p + 1 continuous derivatives on a bounded set can easily be approximated by a polynomial of degree p. This is shown by the following well-known result on Lagrangian interpolation at Chebyshev points, by which we can further generalize Theorem 2. The proof of the lemma can be found in reference [17].

Lemma 3 (Lagrangian interpolation at Chebyshev points). If a function f is defined at the points z_0, ..., z_n with z_i = cos((i + 1/2)π/(n + 1)), i ∈ {0, ..., n}, then there exists a polynomial of degree not more than n such that P_n(z_i) = f(z_i), i = 0, ..., n. This polynomial is given by P_n(x) = Σ_{i=0}^{n} f(z_i) L_i(x), where L_i(x) = π_{n+1}(x)/((x − z_i) π'_{n+1}(z_i)) and π_{n+1}(x) = Π_{j=0}^{n} (x − z_j). Additionally, if f is continuous on [−1, 1] and n + 1 times differentiable in (−1, 1), then

‖R_n‖ = ‖f − P_n‖ ≤ ‖f^{(n+1)}‖ / (2^n (n + 1)!),

where f^{(n)}(x) is the derivative of f of nth order and ‖·‖ is the l∞ norm, ‖f‖ = max_{x ∈ [−1,1]} |f(x)|.

The upper bound on the network size for approximating more general functions then follows directly from Theorem 2 and Lemma 3.

Theorem 4. Assume that the function f is continuous on [0, 1] and ⌈log₂(2/ε)⌉ + 1 times differentiable in (0, 1). Let f^{(n)} denote the derivative of f of nth order and ‖f‖ = max_{x ∈ [0,1]} |f(x)|. If ‖f^{(n)}‖ ≤ n! holds for all n ∈ [⌈log₂(2/ε)⌉ + 1], then there exists a deep neural network f̃ with O(log(1/ε)) layers, O(log(1/ε)) binary step units and O((log(1/ε))²) rectifier linear units such that ‖f − f̃‖ ≤ ε.

Proof. Let N = ⌈log₂(2/ε)⌉. From Lemma 3, it follows that there exists a polynomial P_N of degree N such that, for any x ∈ [0, 1],

|f(x) − P_N(x)| ≤ ‖f^{(N+1)}‖ / (2^N (N + 1)!) ≤ 1/2^N ≤ ε/2.

Let x_0, ..., x_N denote the first N + 1 bits of the binary expansion of x and define f̃(x) = P_N(Σ_{i=0}^{N} x_i/2^i). In the following, we first analyze the approximation error of f̃ and then show how to implement this function. Let x̃ = Σ_{i=0}^{N} x_i/2^i.

The error can now be upper bounded by

|f(x) − f̃(x)| = |f(x) − P_N(x̃)| ≤ |f(x) − f(x̃)| + |f(x̃) − P_N(x̃)| ≤ ‖f^{(1)}‖ |x − x̃| + 1/2^N ≤ 1/2^N + 1/2^N ≤ ε.

In the following, we describe the implementation of f̃ by a multilayer neural network. Since P_N is a polynomial of degree N, the function f̃ can be rewritten as

f̃(x) = P_N(Σ_{i=0}^{N} x_i/2^i) = Σ_{n=0}^{N} c_n g_n(Σ_{i=0}^{N} x_i/2^i)

for some coefficients c_0, ..., c_N, where g_n(x) = x^n, n ∈ [N]. Hence, the multilayer neural network shown in Figure 3.2 can be used to implement f̃(x). Notice that the network uses O(N) layers with O(N) binary step units in total to decode x_0, ..., x_N, and O(N) layers with O(N²) rectifier linear units in total to construct the polynomial P_N. Substituting N = ⌈log₂(2/ε)⌉, we have proved the theorem.

Remark: Note that, to implement the architecture in Figure 3.2 using the definition of a feedforward neural network in Chapter 2, we need the g_i, i ∈ [p], at the output. This can be accomplished by using O(p²) additional ReLUs. Since p = O(log(1/ε)), this does not change the order result in Theorem 4.

Theorem 4 shows that any function f with enough smoothness can be approximated with error ε by a multilayer neural network containing poly(log(1/ε)) neurons. Further, Theorem 4 can be used to show that if the functions h_1, ..., h_k are smooth enough, then linear combinations, multiplications and compositions of these functions can likewise be approximated with error ε by multilayer neural networks containing poly(log(1/ε)) neurons. Specific results are given in the following corollaries.

Corollary 5 (Function addition). Suppose that all the functions h_1, ..., h_k satisfy the conditions in Theorem 4, and that the vector β ∈ {ω ∈ R^k : ‖ω‖_1 = 1}. Then for the linear combination f = Σ_{i=1}^{k} β_i h_i, there exists a deep neural network f̃ with O(log(1/ε)) layers, O(log(1/ε)) binary step units and O((log(1/ε))²) rectifier linear units such that |f(x) − f̃(x)| ≤ ε, x ∈ [0, 1].

Remark: Clearly, Corollary 5 follows directly from the fact that the linear combination f satisfies the conditions in Theorem 4 whenever all the functions h_1, ..., h_k satisfy those conditions. We note that the upper bound on the network size for approximating linear combinations is independent of k, the number of component functions.

Corollary 6 (Function multiplication). Suppose that all the functions h_1, ..., h_k are continuous on [0, 1] and ⌈4k log₂ 4k + 4k + 2 log₂(2/ε)⌉ + 1 times differentiable in (0, 1). If ‖h_i^{(n)}‖ ≤ n! holds for all i ∈ [k] and n ∈ [⌈4k log₂ 4k + 4k + 2 log₂(2/ε)⌉ + 1], then for the multiplication f = Π_{i=1}^{k} h_i, there exists a multilayer neural network f̃ with O(k log k + log(1/ε)) layers, O(k log k + log(1/ε)) binary step units and O((k log k)² + (log(1/ε))²) rectifier linear units such that |f(x) − f̃(x)| ≤ ε, x ∈ [0, 1].

Corollary 7 (Function composition). Suppose that all the functions h_1, ..., h_k : [0, 1] → [0, 1] satisfy the conditions in Theorem 4. Then for the composition f = h_1 ∘ h_2 ∘ ... ∘ h_k, there exists a multilayer neural network f̃ with O(k log k · log(1/ε) + log k · (log(1/ε))²) layers, O(k log k · log(1/ε) + log k · (log(1/ε))²) binary step units and O(k² (log(1/ε))² + (log(1/ε))⁴) rectifier linear units such that |f(x) − f̃(x)| ≤ ε, x ∈ [0, 1].

Remark: Proofs of Corollaries 6 and 7 can be found in the Appendix. We observe that, in contrast to the case of linear combinations, the upper bound on the network size grows as k² log² k in the case of function multiplications and as k² (log(1/ε))² in the case of function compositions, where k is the number of component functions.
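As a small numerical illustration (not part of the thesis) of the polynomial-approximation step behind Theorem 4 and the corollaries above, the sketch below interpolates f(x) = e^x at the Chebyshev points of Lemma 3 and compares the observed L∞ error with the bound ‖f^(n+1)‖/(2^n (n + 1)!). NumPy's polyfit/polyval are used only as a convenient way to form and evaluate the interpolant; the choice of e^x is arbitrary.

import numpy as np
from math import cos, pi, factorial, e

def chebyshev_interp_error(f, n, grid=np.linspace(-1, 1, 2001)):
    # Interpolate f at the Chebyshev points z_i = cos((i + 1/2) pi / (n + 1)).
    z = np.array([cos((i + 0.5) * pi / (n + 1)) for i in range(n + 1)])
    coeffs = np.polyfit(z, f(z), deg=n)      # exact interpolant through n + 1 points
    return np.max(np.abs(f(grid) - np.polyval(coeffs, grid)))

f = np.exp
for n in range(2, 12, 3):
    bound = e / (2 ** n * factorial(n + 1))  # Lemma 3 with ||f^{(n+1)}|| = e for f = exp
    print(n, chebyshev_interp_error(f, n), bound)

The observed interpolation error should stay below the Lemma 3 bound and decay roughly like 2^{-n}/(n + 1)!, which is why a degree of order log(1/ε) suffices in Theorem 4.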

In this section, we have shown a poly(log(1/ε)) upper bound on the network size for ε-approximation of both univariate polynomials and general univariate functions with enough smoothness. In addition, we have shown that linear combinations, multiplications and compositions of univariate functions with enough smoothness can likewise be approximated with error ε by a multilayer neural network of size poly(log(1/ε)). In the next section, we show upper bounds on the network size for approximating multivariate functions.

3.2 Approximation of Multivariate Functions

In this section, we present all results on approximating multivariate functions. We first present a theorem on the upper bound on the neural network size for approximating a product of multivariate linear functions. We next present a theorem on the upper bound on the neural network size for approximating general multivariate polynomial functions. Finally, similarly to the results in the univariate case, we present upper bounds on the neural network size for approximating linear combinations, multiplications and compositions of multivariate functions with enough smoothness.

Theorem 8. Let W = {w ∈ R^d : ‖w‖_1 = 1}. For f(x) = Π_{i=1}^{p} (w_i^T x), x ∈ [0, 1]^d with w_i ∈ W, i = 1, ..., p, there exists a deep neural network f̃(x) with O(p + log(pd/ε)) layers, O(d log(pd/ε)) binary step units and O(pd log(pd/ε)) rectifier linear units such that |f(x) − f̃(x)| ≤ ε for all x ∈ [0, 1]^d.

Theorem 8 gives an upper bound on the network size for ε-approximation of a product of multivariate linear functions. Furthermore, since any general multivariate polynomial can be viewed as a linear combination of such products, a result on general multivariate polynomials follows directly from Theorem 8.

Theorem 9. Let α = (α_1, ..., α_d) denote a multi-index vector with norm |α| = α_1 + ... + α_d, let C_α = C_{α_1 ... α_d} denote the coefficients, let x = (x^(1), ..., x^(d)) denote the input vector and let x^α = (x^(1))^{α_1} ... (x^(d))^{α_d} denote the multinomial. For a positive integer p and the polynomial f(x) = Σ_{α: |α| ≤ p} C_α x^α, x ∈ [0, 1]^d with Σ_{α: |α| ≤ p} |C_α| ≤ 1, there exists a deep neural network f̃(x) of depth O(p + log(pd/ε)) and size N(d, p, ε) such that |f(x) − f̃(x)| ≤ ε, where

N(d, p, ε) = p² (p + d − 1 choose d − 1) log(pd/ε).

Remark: The proof is given in the Appendix. By further analyzing this bound on the network size, we obtain the following: (a) for fixed degree p, N(d, ε) = O(d^{p+1} log(d/ε)) as d → ∞, and (b) for fixed input dimension d, N(p, ε) = O(p^{d+1} log(p/ε)) as p → ∞. Similar results on approximating multivariate polynomials were obtained in [16] and [10]. Reference [10] showed that one can use a 3-layer neural network of size d^p/ε² to approximate any multivariate polynomial of degree p and dimension d. Reference [16] showed that one can use gradient descent to train a 3-layer neural network of size d^{2p}/ε² to approximate any multivariate polynomial. However, Theorem 9 shows that a deep neural network reduces the network size from poly(1/ε) to O(log(1/ε)) for the same error ε. Moreover, for a fixed input dimension d, the size of the 3-layer neural networks used in [16] and [10] grows exponentially with the degree p, while the size of the deep neural network in Theorem 9 grows only polynomially in p. Therefore, the deep neural network reduces the network size from O(exp(p)) to O(poly(p)) when the degree p becomes large.

Theorem 9 gives an upper bound on the network size for approximating multivariate polynomials. Further, by combining Theorem 4 and Corollary 7, we can obtain an upper bound on the network size for approximating more general functions. The result is given in the following corollary.

Corollary 10. Assume that all the univariate functions h_1, ..., h_k : [0, 1] → [0, 1], k ≥ 1, satisfy the conditions in Theorem 4, and that the multivariate polynomial l(x) : [0, 1]^d → [0, 1] is of degree p. For the composition f = h_1 ∘ h_2 ∘ ... ∘ h_k ∘ l(x), there exists a multilayer neural network f̃ of depth O(p + log(d/ε) + k log k · log(1/ε) + log k · (log(1/ε))²) and of size N(k, p, d, ε) such that |f(x) − f̃(x)| ≤ ε for x ∈ [0, 1]^d, where

N(k, p, d, ε) = O( p² (p + d − 1 choose d − 1) log(pd/ε) + k² (log(1/ε))² + (log(1/ε))⁴ ).

Remark: Corollary 10 gives an upper bound on the network size for approximating compositions of multivariate polynomials and general univariate functions. The upper bound can be loose due to the assumption that l(x) is a general multivariate polynomial of degree p; for some specific cases, the upper bound can be much smaller. We present two specific examples in Appendix A.8 and A.9.

In this section, we have shown a similar poly(log(1/ε)) upper bound on the network size for ε-approximation of general multivariate polynomials and of functions which are compositions of univariate functions and multivariate polynomials. The results in this chapter can be used to find a multilayer neural network of size poly(log(1/ε)) which provides an approximation error of at most ε. In the next chapter, we present lower bounds on the network size for approximating both univariate and multivariate functions. The lower bound, together with the upper bound, gives a tight bound on the network size required for function approximation.

While we have presented results in both the univariate and multivariate cases for smooth functions, the results automatically extend to functions that are piecewise smooth with a finite number of pieces. In other words, if the domain of the function is partitioned into regions, and the function is sufficiently smooth (in the sense described in the foregoing theorems and corollaries) in each of the regions, then the results remain essentially unchanged, except for an additional factor which depends on the number of regions in the domain.
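To give a rough feel for the size bound in Theorem 9, the following Python sketch (not part of the thesis; all constant factors are dropped and the base of the logarithm is an arbitrary choice) evaluates N(d, p, ε) = p² (p + d − 1 choose d − 1) log₂(pd/ε) alongside the 3-layer size d^p/ε² attributed to [10]. The values d = 10 and p = 5 are arbitrary.

from math import comb, log2

def deep_size(d, p, eps):
    # Size bound from Theorem 9 (constant factors dropped).
    return p ** 2 * comb(p + d - 1, d - 1) * log2(p * d / eps)

def shallow_size(d, p, eps):
    # 3-layer bound d^p / eps^2 attributed to [10] (constant factors dropped).
    return d ** p / eps ** 2

for eps in [1e-1, 1e-2, 1e-3, 1e-4]:
    print(eps, deep_size(d=10, p=5, eps=eps), shallow_size(d=10, p=5, eps=eps))

The deep bound grows only logarithmically in 1/ε, while the shallow bound grows as 1/ε², which is the gap emphasized in the remark after Theorem 9.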

CHAPTER 4

LOWER BOUNDS ON FUNCTION APPROXIMATIONS

In this chapter, we present lower bounds on the network size required for function approximation for certain classes of functions. By combining these lower bounds with the upper bounds shown in the previous chapter, we can analytically show the advantages of deeper neural networks over shallower ones. Theorem 11 below is inspired by a similar result in [18] for univariate quadratic functions, where it is stated without proof. Here we show that the result extends to general multivariate strongly convex functions.

Theorem 11. Assume the function f : [0, 1]^d → R is differentiable and strongly convex with parameter µ, and that the multilayer neural network f̃ is composed of rectifier linear units and binary step units. If |f(x) − f̃(x)| ≤ ε for all x ∈ [0, 1]^d, then the network size satisfies N ≥ log₂(µ/(16ε)).

Remark: The proof is in Appendix A.6. Theorem 11 shows that no strongly convex function can be approximated with error ε by a multilayer neural network with rectifier linear units and binary step units whose size is smaller than log₂(µ/ε) − 4. Theorem 11 together with Theorem 1 directly shows that, to approximate the quadratic function f(x) = x² with error ε, the network size should be of order Θ(log(1/ε)). Further, by combining Theorem 11 and Theorem 4, we can analytically show the benefits of deeper neural networks. The result is given in the following corollary.

Corollary 12. Assume that the univariate function f satisfies the conditions in both Theorem 4 and Theorem 11. If a neural network f̃_s of depth L_s = o(log(1/ε)) and size N_s satisfies |f(x) − f̃_s(x)| ≤ ε for x ∈ [0, 1], then there exists a deeper neural network f̃_d(x) of depth Θ(log(1/ε)) and size N_d = O(L_s² log² N_s) such that |f(x) − f̃_d(x)| ≤ ε, x ∈ [0, 1].

Remarks: (i) The strong convexity requirement can be relaxed: the result obviously holds if the function is strongly concave, and it also holds if the function consists of pieces which are strongly convex or strongly concave. (ii) Corollary 12 shows that, in the approximation of the same function, the size of the deep neural network N_d is only of polynomially logarithmic order in the size of the shallow neural network N_s, i.e., N_d = O(polylog(N_s)). Similar results can be obtained for multivariate functions of the type considered in Section 3.2.
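The depth–size tradeoff of this chapter can be visualized with the following sketch (not part of the thesis; constant factors are dropped, µ = 1, and the shallow depth L_s = 2 is an arbitrary choice). It compares the shallow lower bound N_s ≥ L_s (µ/(16ε))^{1/(2L_s)} from the proof of Theorem 11 with the deep upper bound of order (log(1/ε))² from Theorem 4.

from math import log2

def shallow_lower_bound(eps, L, mu=1.0):
    # Lower bound from the proof of Theorem 11: N >= L * (mu / (16 eps))^(1 / (2L)).
    return L * (mu / (16 * eps)) ** (1.0 / (2 * L))

def deep_upper_bound(eps):
    # Order of the upper bound of Theorem 4, constants dropped: (log2(1/eps))^2.
    return log2(1.0 / eps) ** 2

for eps in [1e-4, 1e-8, 1e-12, 1e-16, 1e-20]:
    print(eps, round(shallow_lower_bound(eps, L=2)), round(deep_upper_bound(eps)))

As ε shrinks, the fixed-depth lower bound eventually dwarfs the deep upper bound, which is the separation stated in Corollary 12.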

CHAPTER 5

CONCLUSIONS

In this thesis, we have shown that an exponentially larger number of neurons is needed for function approximation using shallow networks than using deep networks. The results are established for a large class of smooth univariate and multivariate functions, in the setting of feedforward neural networks with ReLUs and binary step units.

REFERENCES

[1] Y. Bengio, Learning deep architectures for AI, Foundations and Trends in Machine Learning, 2009.

[2] A. Krizhevsky, I. Sutskever, and G. E. Hinton, ImageNet classification with deep convolutional neural networks, in NIPS, 2012.

[3] I. J. Goodfellow, D. Warde-Farley, M. Mirza, A. C. Courville, and Y. Bengio, Maxout networks, in ICML, 2013.

[4] L. Wan, M. Zeiler, S. Zhang, Y. LeCun, and R. Fergus, Regularization of neural networks using DropConnect, in ICML, 2013.

[5] G. Cybenko, Approximation by superpositions of a sigmoidal function, Mathematics of Control, Signals and Systems, 1989.

[6] K. Hornik, M. Stinchcombe, and H. White, Multilayer feedforward networks are universal approximators, Neural Networks, vol. 2, pp. 359–366, 1989.

[7] K. I. Funahashi, On the approximate realization of continuous mappings by neural networks, Neural Networks, vol. 2, no. 3, pp. 183–192, 1989.

[8] K. Hornik, Approximation capabilities of multilayer feedforward networks, Neural Networks, vol. 4, no. 2, pp. 251–257, 1991.

[9] C. K. Chui and X. Li, Approximation by ridge functions and neural networks with one hidden layer, Journal of Approximation Theory, vol. 70, no. 2, pp. 131–141, 1992.

[10] A. R. Barron, Universal approximation bounds for superpositions of a sigmoidal function, IEEE Transactions on Information Theory, vol. 39, no. 3, pp. 930–945, 1993.

[11] T. Poggio, L. Rosasco, A. Shashua, N. Cohen, and F. Anselmi, Notes on Hierarchical Splines, DCLNs and i-theory, Center for Brains, Minds and Machines (CBMM), Tech. Rep., 2015.

[12] O. Delalleau and Y. Bengio, Shallow vs. deep sum-product networks, in NIPS, 2011.

[13] G. F. Montufar, R. Pascanu, K. Cho, and Y. Bengio, On the number of linear regions of deep neural networks, in NIPS, 2014.

[14] M. Telgarsky, Benefits of depth in neural networks, arXiv preprint arXiv:1602.04485, 2016.

[15] R. Eldan and O. Shamir, The power of depth for feedforward neural networks, arXiv preprint arXiv:1512.03965, 2015.

[16] A. Andoni, R. Panigrahy, G. Valiant, and L. Zhang, Learning polynomials with neural networks, in ICML, 2014.

[17] A. Gil, J. Segura, and N. M. Temme, Numerical Methods for Special Functions, SIAM, 2007.

[18] B. DasGupta and G. Schnitger, The power of approximating: a comparison of activation functions, in NIPS, 1993.

APPENDIX A

PROOFS

A.1 Proof of Corollary 5

Proof. By Theorem 4, for each h_i, i = 1, ..., k, there exists a multilayer neural network h̃_i such that |h_i(x) − h̃_i(x)| ≤ ε for any x ∈ [0, 1]. Let

f̃(x) = Σ_{i=1}^{k} β_i h̃_i(x).

Then the approximation error is upper bounded by

|f(x) − f̃(x)| = |Σ_{i=1}^{k} β_i (h_i(x) − h̃_i(x))| ≤ Σ_{i=1}^{k} |β_i| |h_i(x) − h̃_i(x)| ≤ ε.

Now we compute the size of the multilayer neural network f̃. Let N = ⌈log₂(2/ε)⌉ and let x̃ = Σ_{r=0}^{N} x_r/2^r denote the truncated binary expansion of x. Since each h̃_i(x) has the form h̃_i(x) = Σ_{j=0}^{N} c_{ij} g_j(x̃), where g_j(x) = x^j, f̃ has the form

f̃(x) = Σ_{i=1}^{k} β_i [Σ_{j=0}^{N} c_{ij} g_j(x̃)],

which can be further rewritten as

f̃(x) = Σ_{j=0}^{N} (Σ_{i=1}^{k} c_{ij} β_i) g_j(x̃) = Σ_{j=0}^{N} c̄_j g_j(x̃),

where c̄_j = Σ_i c_{ij} β_i. Therefore, f̃ can be implemented by a multilayer neural network of the form shown in Figures 3.1 and 3.2, and this network has at most O(log(1/ε)) layers, O(log(1/ε)) binary step units and O((log(1/ε))²) rectifier linear units.

A.2 Proof of Corollary 6

Proof. Since f(x) = h_1(x) h_2(x) ... h_k(x), the derivative of f of order n is

f^{(n)} = Σ_{α_1+...+α_k = n, α_1 ≥ 0, ..., α_k ≥ 0} (n! / (α_1! α_2! ... α_k!)) h_1^{(α_1)} h_2^{(α_2)} ... h_k^{(α_k)}.

By the assumption that ‖h_i^{(α_i)}‖ ≤ α_i! holds for i = 1, ..., k, we have

‖f^{(n)}‖ ≤ Σ_{α_1+...+α_k = n, α_i ≥ 0} (n! / (α_1! ... α_k!)) ‖h_1^{(α_1)}‖ ... ‖h_k^{(α_k)}‖ ≤ (n + k − 1 choose k − 1) n!.

Then, from Lemma 3, it follows that there exists a polynomial P_N of degree N such that

‖R_N‖ = ‖f − P_N‖ ≤ ‖f^{(N+1)}‖ / ((N + 1)! 2^N) ≤ (1/2^N) (N + k choose k − 1).

Since

(N + k choose k − 1) ≤ (N + k)^{N+k} / ((k − 1)^{k−1} (N + 1)^{N+1}) = (1 + (k − 1)/(N + 1))^{N+1} ((N + k)/(k − 1))^{k−1} ≤ (e(N + k)/(k − 1))^{k−1},

the error has the upper bound

‖R_N‖ ≤ (eN)^k / 2^N ≤ 2^{2k + k log₂ N − N}.   (A.1)

Since we need to bound ‖R_N‖ ≤ ε/2, we need to choose N such that

N ≥ k log₂ N + 2k + log₂(2/ε).

Thus, N can be chosen such that N ≥ 2k log₂ N and N ≥ 4k + 2 log₂(2/ε). Further, the function l(x) = x/log₂ x is monotonically increasing on [e, ∞), and

l(4k log₂ 4k) = 4k log₂ 4k / (log₂ 4k + log₂ log₂ 4k) ≥ 4k log₂ 4k / (log₂ 4k + log₂ 4k) = 2k.

Therefore, to satisfy inequality (A.1), one should choose N ≥ 4k log₂ 4k + 4k + 2 log₂(2/ε).

Since N = ⌈4k log₂ 4k + 4k + 2 log₂(2/ε)⌉ satisfies the assumptions, there exists a polynomial P_N of degree N such that ‖f − P_N‖ ≤ ε/2. Let x_0, ..., x_N denote the first N + 1 bits of the binary expansion of x, write x̃ = Σ_{i=0}^{N} x_i/2^i, and define f̃(x) = P_N(x̃). The approximation error is

|f(x) − f̃(x)| ≤ |f(x) − f(x̃)| + |f(x̃) − P_N(x̃)| ≤ ‖f^{(1)}‖ |x − x̃| + ε/2 ≤ ε.

Further, the function f̃ can be implemented by a multilayer neural network of the form shown in Figures 3.1 and 3.2, with at most O(N) layers, O(N) binary step units and O(N²) rectifier linear units.

A.3 Proof of Corollary 7

Proof. We prove this corollary by induction. Define F_m = h_1 ∘ ... ∘ h_m for m = 1, ..., k, and let T_1(m) log₃(3^m/ε), T_2(m) log₃(3^m/ε) and T_3(m) (log₃(3^m/ε))² denote, respectively, the number of layers, the number of binary step units and the number of rectifier linear units required at most for (ε/3)-approximation of F_m. By Theorem 4, for m = 1 there exists a multilayer neural network F̃_1 with at most T_1(1) log₃(3/ε) layers, T_2(1) log₃(3/ε) binary step units and T_3(1) (log₃(3/ε))² rectifier linear units such that |F_1(x) − F̃_1(x)| ≤ ε/3 for x ∈ [0, 1].

Now we consider the case 2 ≤ m ≤ k. We assume that for F_{m−1} there exists a multilayer neural network F̃_{m−1} with not more than T_1(m−1) log₃(3^m/ε) layers, T_2(m−1) log₃(3^m/ε) binary step units and T_3(m−1) (log₃(3^m/ε))² rectifier linear units such that |F_{m−1}(x) − F̃_{m−1}(x)| ≤ ε/3 for x ∈ [0, 1]. Further, we assume that the derivative of F̃_{m−1} has the upper bound ‖F̃'_{m−1}‖ ≤ 1. Then consider F_m, which can be rewritten as F_m(x) = F_{m−1}(h_m(x)). There exists a multilayer neural network h̃_m with at most T_1(1) log₃(3/ε) layers, T_2(1) log₃(3/ε) binary step units and T_3(1) (log₃(3/ε))² rectifier linear units such that

|h_m(x) − h̃_m(x)| ≤ ε/3 for x ∈ [0, 1],  and  ‖h̃_m‖ ≤ 1 + ε/3.

Then, for the cascaded multilayer neural network F̃_m = F̃_{m−1}(h̃_m/(1 + ε/3)), we have

|F_m − F̃_m| = |F_{m−1}(h_m) − F̃_{m−1}(h̃_m/(1 + ε/3))|
 ≤ |F_{m−1}(h_m) − F_{m−1}(h̃_m/(1 + ε/3))| + |F_{m−1}(h̃_m/(1 + ε/3)) − F̃_{m−1}(h̃_m/(1 + ε/3))|
 ≤ ‖F'_{m−1}‖ |h_m − h̃_m/(1 + ε/3)| + ε/3
 ≤ ‖F'_{m−1}‖ (|h_m − h̃_m| + ‖h̃_m‖ (ε/3)/(1 + ε/3)) + ε/3
 ≤ ε/3 + ε/3 + ε/3 = ε.

In addition, the derivative of F̃_m can be upper bounded by ‖F̃'_m‖ ≤ ‖F̃'_{m−1}‖ ‖h̃'_m‖ = 1. Since the multilayer neural network F̃_m is constructed by cascading the multilayer neural networks F̃_{m−1} and h̃_m, the iterations for T_1, T_2 and T_3 are

T_1(m) log₃(3^m/ε) = T_1(m−1) log₃(3^m/ε) + T_1(1) log₃(3/ε),   (A.2)
T_2(m) log₃(3^m/ε) = T_2(m−1) log₃(3^m/ε) + T_2(1) log₃(3/ε),   (A.3)
T_3(m) (log₃(3^m/ε))² = T_3(m−1) (log₃(3^m/ε))² + T_3(1) (log₃(3/ε))².   (A.4)

From iterations (A.2) and (A.3), we have, for 2 ≤ m ≤ k,

T_1(m) = T_1(m−1) + T_1(1) (1 + log₃(1/ε))/(m + log₃(1/ε)) ≤ T_1(m−1) + T_1(1) (1 + log₃(1/ε))/m,
T_2(m) = T_2(m−1) + T_2(1) (1 + log₃(1/ε))/(m + log₃(1/ε)) ≤ T_2(m−1) + T_2(1) (1 + log₃(1/ε))/m,

and thus T_1(k) = O(log k · log(1/ε)) and T_2(k) = O(log k · log(1/ε)). From iteration (A.4), we have, for 2 ≤ m ≤ k,

T_3(m) = T_3(m−1) + T_3(1) ((1 + log₃(1/ε))/(m + log₃(1/ε)))² ≤ T_3(m−1) + T_3(1) (1 + log₃(1/ε))²/m²,

and thus T_3(k) = O((log(1/ε))²). Therefore, to approximate f = F_k, we need at most

O(k log k · log(1/ε) + log k · (log(1/ε))²) layers, O(k log k · log(1/ε) + log k · (log(1/ε))²) binary step units and O(k² (log(1/ε))² + (log(1/ε))⁴) rectifier linear units.

A.4 Proof of Theorem 8

Proof. The proof is composed of two parts. As before, we first use the deep structure shown in Figure 3.1 to find the binary expansion of x, and then use a multilayer neural network to approximate the polynomial. Let x = (x^(1), ..., x^(d)) and w_i = (w_{i1}, ..., w_{id}). We use the deep structure shown in Figure 3.1 to find the binary expansion of each x^(k), k ∈ [d]. Let x̃^(k) = Σ_{r=0}^{n} x_r^(k)/2^r denote the truncated binary expansion of x^(k), where x_r^(k) is the rth bit in the binary expansion of x^(k). Obviously, to decode the n-bit binary expansions of all x^(k), k ∈ [d], we need a multilayer neural network with n layers and dn binary step units in total. Let x̃ = (x̃^(1), ..., x̃^(d)). Now we define

f̃(x) = f(x̃) = Π_{i=1}^{p} (Σ_{k=1}^{d} w_{ik} x̃^(k)).

We further define

g_l(x̃) = Π_{i=1}^{l} (Σ_{k=1}^{d} w_{ik} x̃^(k)).

Since, for l = 1, ..., p − 1,

|g_l(x̃)| = Π_{i=1}^{l} |Σ_{k=1}^{d} w_{ik} x̃^(k)| ≤ Π_{i=1}^{l} ‖w_i‖_1 = 1,

we can rewrite g_{l+1}(x̃), l = 1, ..., p − 1, as

g_{l+1}(x̃) = Π_{i=1}^{l+1} (Σ_{k=1}^{d} w_{ik} x̃^(k))
 = Σ_{k=1}^{d} [w_{(l+1)k} x̃^(k) g_l(x̃)]
 = Σ_{k=1}^{d} { w_{(l+1)k} Σ_{r=0}^{n} [x_r^(k) g_l(x̃)]/2^r }
 = Σ_{k=1}^{d} { w_{(l+1)k} Σ_{r=0}^{n} max(2(x_r^(k) − 1) + g_l(x̃), 0)/2^r }.   (A.5)

Obviously, equation (A.5) defines a relationship between the outputs of neighboring layers and can thus be used to implement the multilayer neural network. In this implementation, we need dn rectifier linear units in each layer, and thus dnp rectifier linear units. Therefore, to implement the function f̃(x), we need p + n layers, dn binary step units and dnp rectifier linear units in total.

In the rest of the proof, we consider the approximation error. Since, for k = 1, ..., d and x ∈ [0, 1]^d,

|∂f(x)/∂x^(k)| = |Σ_{j=1}^{p} w_{jk} Π_{i=1, i≠j}^{p} (w_i^T x)| ≤ Σ_{j=1}^{p} |w_{jk}| ≤ p,

we have

|f(x) − f̃(x)| = |f(x) − f(x̃)| ≤ ‖∇f‖_2 ‖x − x̃‖_2 ≤ pd/2^n.

By choosing n = ⌈log₂(pd/ε)⌉, we have |f(x) − f(x̃)| ≤ ε.

Since we use nd binary step units to convert the input to binary form and dnp neurons in the function approximation, we use O(d log(pd/ε)) binary step units and O(pd log(pd/ε)) rectifier linear units in total. In addition, since we have used n layers to convert the input to binary form and p layers in the function approximation section of the network, the whole deep structure has O(p + log(pd/ε)) layers.

A.5 Proof of Theorem 9

Proof. For each multinomial function g_α with multi-index α, g_α(x) = x^α, it follows from Theorem 8 that there exists a deep neural network g̃_α of size O(|α| log(|α|d/ε)) and depth O(|α| + log(|α|d/ε)) such that |g_α(x) − g̃_α(x)| ≤ ε. Let the deep neural network be

f̃(x) = Σ_{α: |α| ≤ p} C_α g̃_α(x),

and thus

|f(x) − f̃(x)| ≤ Σ_{α: |α| ≤ p} |C_α| |g_α(x) − g̃_α(x)| ≤ ε.

Since the total number of multinomials with |α| ≤ p is upper bounded by

p (p + d − 1 choose d − 1),

the size of the deep neural network is thus upper bounded by

p² (p + d − 1 choose d − 1) log(pd/ε).   (A.6)

If the input dimension d is fixed, then (A.6) is of the order

p² (p + d − 1 choose d − 1) log(pd/ε) = O((ep)^{d+1} log(pd/ε)),  p → ∞,

while if the degree p is fixed, then (A.6) is of the order

p² (p + d − 1 choose d − 1) log(pd/ε) = O(p² (ed)^p log(pd/ε)),  d → ∞.

A.6 Proof of Theorem 11

Proof. We first prove the univariate case d = 1. The proof is composed of two parts. We say that a function g(x) has a break point at x = z if g is discontinuous at z or its derivative g' is discontinuous at z. We first present a lower bound on the number of break points M(ε) that the multilayer neural network f̃ must have for approximation of the function f with error ε. We then relate the number of break points M(ε) to the network depth L and the size N.

Now we calculate the lower bound on M(ε). We first define four points x_0, x_1 = x_0 + 2√(ρε/µ), x_2 = x_1 + 2√(ρε/µ) and x_3 = x_2 + 2√(ρε/µ), with ρ > 1, and assume 0 ≤ x_0 < x_1 < x_2 < x_3 ≤ 1.

We now prove that if the multilayer neural network f̃ has no break point in [x_1, x_2], then f̃ must have a break point in [x_0, x_1] and a break point in [x_2, x_3]. We prove this by contradiction: assume that the neural network f̃ has no break points in the interval [x_0, x_3]. Since f̃ is constructed from rectifier linear units and binary step units and has no break points in [x_0, x_3], f̃ must be a linear function on [x_0, x_3], i.e., f̃(x) = ax + b for x ∈ [x_0, x_3] and some a and b. By assumption, since f̃ approximates f with error at most ε everywhere in [0, 1], we have |f(x_1) − ax_1 − b| ≤ ε and |f(x_2) − ax_2 − b| ≤ ε. Then we have

(f(x_2) − f(x_1) − 2ε)/(x_2 − x_1) ≤ a ≤ (f(x_2) − f(x_1) + 2ε)/(x_2 − x_1).

By strong convexity of f,

(f(x_2) − f(x_1))/(x_2 − x_1) + (µ/2)(x_2 − x_1) ≤ f'(x_2).

Besides, since ρ > 1 and

(µ/2)(x_2 − x_1) = √(ρµε) = 2ρε/(x_2 − x_1) > 2ε/(x_2 − x_1),

we have

a ≤ f'(x_2).   (A.7)

Similarly, we can obtain a ≥ f'(x_1). By our assumption that f̃ = ax + b on [x_0, x_3],

|f(x_3) − f̃(x_3)| ≥ f(x_3) − ax_3 − b
 = f(x_3) − f(x_2) − a(x_3 − x_2) + f(x_2) − ax_2 − b
 ≥ f'(x_2)(x_3 − x_2) + (µ/2)(x_3 − x_2)² − a(x_3 − x_2) − ε
 = (f'(x_2) − a)(x_3 − x_2) + (µ/2)(2√(ρε/µ))² − ε
 ≥ (2ρ − 1)ε > ε.

The first inequality follows from strong convexity of f and f(x_2) − ax_2 − b ≥ −ε. The second inequality follows from inequality (A.7). This is a contradiction, so there must exist a break point in the interval [x_2, x_3]. Similarly, we can prove that there exists a break point in the interval [x_0, x_1]. This indicates that, to achieve ε-approximation on [0, 1], the multilayer neural network f̃ must have at least

M(ε) ≥ (1/4)√(µ/(ρε)),  ρ > 1,

break points in [0, 1]. Further, [14] has shown that the maximum number of break points that a multilayer neural network of depth L and size N can have is (N/L)^L. Thus, L and N should satisfy

(N/L)^L ≥ (1/4)√(µ/(ρε)),  ρ > 1.

Therefore, we have

(µ/(16ε))^{1/(2L)} ≤ N/L.

Besides, let m = N/L. Since each layer of the network has at least 2 neurons, i.e., m ≥ 2, we have

N ≥ (m/(2 log₂ m)) log₂(µ/(16ε)) ≥ log₂(µ/(16ε)).

Now we consider the multivariate case d > 1. Let the input vector be x = (x^(1), ..., x^(d)). Fix x^(2), ..., x^(d) and define the two univariate functions g(y) = f(y, x^(2), ..., x^(d)) and g̃(y) = f̃(y, x^(2), ..., x^(d)). By assumption, g(y) is a strongly convex function with parameter µ and |g(y) − g̃(y)| ≤ ε for all y ∈ [0, 1]. Therefore, by the results for the univariate case, we should have

(µ/(16ε))^{1/(2L)} ≤ N/L  and  N ≥ log₂(µ/(16ε)).   (A.8)

This proves the theorem.

Remark: We make the following remarks about the lower bound in the theorem: (1) If the depth L is fixed, as in shallow networks, the number of neurons required is Ω((1/ε)^{1/(2L)}). (2) If we are allowed to choose L optimally to minimize the lower bound, we will choose L = (1/2) log₂(µ/(16ε)), and the lower bound then becomes Ω(log(1/ε)), close to the O((log(1/ε))²) upper bound shown in Theorem 4.

A.7 Proof of Corollary 12

Proof. From Theorem 4, it follows that there exists a deep neural network f̃_d of depth L_d = Θ(log(1/ε)) and size

N_d ≤ c (log(1/ε))²,   (A.9)

for some constant c > 0, such that ‖f̃_d − f‖ ≤ ε.

From equation (A.8) in the proof of Theorem 11, it follows that every shallow neural network f̃_s of depth L_s with ‖f̃_s − f‖ ≤ ε must have size N_s satisfying

(µ/(16ε))^{1/(2L_s)} ≤ N_s/L_s,

which is equivalent to

log N_s ≥ log L_s + (1/(2L_s)) log(µ/(16ε)).   (A.10)

Substituting for log(1/ε) from (A.10) into (A.9), we have N_d = O(L_s² log² N_s). By definition, a shallow neural network has a small number of layers L_s. Thus, the size of the deep neural network is O(log² N_s). This means N_d ≪ N_s.

A.8 Proof of Corollary 13

Corollary 13 (Gaussian function). For the Gaussian function f(x) = f(x^(1), ..., x^(d)) = e^{−(Σ_{i=1}^{d} (x^(i))²)/2}, x ∈ [0, 1]^d, there exists a deep neural network f̃(x) with O(log(d/ε)) layers, O(d log(d/ε)) binary step units and O(d log(d/ε) + (log(1/ε))²) rectifier linear units such that |f(x) − f̃(x)| ≤ ε for x ∈ [0, 1]^d.

Proof. It follows from Theorem 4 that there exist d multilayer neural networks g̃_1(x^(1)), ..., g̃_d(x^(d)) with O(log(d/ε)) layers, O(d log(d/ε)) binary step units and O(d log(d/ε)) rectifier linear units in total such that

|(x^(1)² + ... + x^(d)²)/2 − (g̃_1(x^(1)) + ... + g̃_d(x^(d)))/2| ≤ ε/2.   (A.11)

In addition, from Theorem 4, it follows that there exists a deep neural network f̂ with O(log(1/ε)) layers, O(log(1/ε)) binary step units and O((log(1/ε))²) rectifier linear units such that

|e^{−dx} − f̂(x)| ≤ ε/2,  x ∈ [0, 1].

Let y = (g̃_1(x^(1)) + ... + g̃_d(x^(d)))/(2d); then we have

|e^{−(Σ_{i=1}^{d} g̃_i(x^(i)))/2} − f̂(y)| ≤ ε/2.   (A.12)

Let the deep neural network be

f̃(x) = f̂( (g̃_1(x^(1)) + ... + g̃_d(x^(d)))/(2d) ).

By inequalities (A.11) and (A.12), the approximation error is upper bounded by

|f(x) − f̃(x)| ≤ |e^{−(Σ_{i=1}^{d} (x^(i))²)/2} − e^{−(Σ_{i=1}^{d} g̃_i(x^(i)))/2}| + |e^{−(Σ_{i=1}^{d} g̃_i(x^(i)))/2} − f̂(y)| ≤ ε/2 + ε/2 = ε.

The deep neural network has O(log(d/ε)) layers, O(d log(d/ε)) binary step units and O(d log(d/ε) + (log(1/ε))²) rectifier linear units.

A.9 Proof of Corollary 14

Corollary 14 (Ridge function). If f(x) = g(a^T x) for some direction a ∈ R^d with ‖a‖_1 = 1, a ≥ 0, x ∈ [0, 1]^d, and some univariate function g satisfying the conditions in Theorem 4, then there exists a multilayer neural network f̃ with O(log(1/ε)) layers, O(log(1/ε)) binary step units and O((log(1/ε))²) rectifier linear units such that |f(x) − f̃(x)| ≤ ε for x ∈ [0, 1]^d.

Proof. Let t = a^T x. Since ‖a‖_1 = 1, a ≥ 0 and x ∈ [0, 1]^d, we have 0 ≤ t ≤ 1. From Theorem 4, it then follows that there exists a multilayer neural network g̃ with O(log(1/ε)) layers, O(log(1/ε)) binary step units and O((log(1/ε))²) rectifier linear units such that

|g(t) − g̃(t)| ≤ ε,  t ∈ [0, 1].

If we define the deep network f̃ as f̃(x) = g̃(t), then the approximation error of f̃ is

|f(x) − f̃(x)| = |g(t) − g̃(t)| ≤ ε.

This proves the corollary.