Depth-Width Tradeoffs in Approximating Natural Functions with Neural Networks

Itay Safran ¹    Ohad Shamir ¹

¹ Weizmann Institute of Science, Rehovot, Israel. Correspondence to: Itay Safran <itay.safran@weizmann.ac.il>, Ohad Shamir <ohad.shamir@weizmann.ac.il>. Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, PMLR 70, 2017. Copyright 2017 by the author(s).

Abstract

We provide several new depth-based separation results for feed-forward neural networks, proving that various types of simple and natural functions can be better approximated using deeper networks than shallower ones, even if the shallower networks are much larger. This includes indicators of balls and ellipses; non-linear functions which are radial with respect to the L1 norm; and smooth non-linear functions. We also show that these gaps can be observed experimentally: Increasing the depth indeed allows better learning than increasing width, when training neural networks to learn an indicator of a unit ball.

1. Introduction

Deep learning, in the form of artificial neural networks, has seen a dramatic resurgence in recent years, achieving great performance improvements in various fields of artificial intelligence such as computer vision and speech recognition. While empirically successful, our theoretical understanding of deep learning is still limited at best.

An emerging line of recent works has studied the expressive power of neural networks: What functions can and cannot be represented by networks of a given architecture (see the related work section below). A particular focus has been the trade-off between the network's width and depth: On the one hand, it is well known that large enough networks of depth 2 can already approximate any continuous target function on [0, 1]^d to arbitrary accuracy (Cybenko, 1989; Hornik, 1991). On the other hand, it has long been evident that deeper networks tend to perform better than shallow ones, a phenomenon supported by the intuition that depth, providing compositional expressibility, is necessary for efficiently representing some functions. Moreover, recent empirical evidence suggests that standard feed-forward deep networks are harder to optimize than shallower networks, which can lead to worse training and test error (He et al., 2015).

To demonstrate the power of depth in neural networks, a clean and precise approach is to prove the existence of functions which can be expressed (or well-approximated) by moderately-sized networks of a given depth, yet cannot be approximated well by shallower networks, even if their size is much larger. However, the mere existence of such functions is not enough: Ideally, we would like to show such depth separation results using natural, interpretable functions, of the type we may expect neural networks to successfully train on. Proving that depth is necessary for such functions can give us a clearer and more useful insight into what various neural network architectures can and cannot express in practice.

In this paper, we provide several contributions to this emerging line of work. We focus on standard, vanilla feed-forward networks (using some fixed activation function, such as the popular ReLU), and measure expressiveness directly in terms of approximation error, defined as the expected squared loss with respect to some distribution over the input domain.
In this setting, we show the following:

- We prove that the indicator of the Euclidean unit ball, x ↦ 1(‖x‖ ≤ 1) in R^d, which can be easily approximated to accuracy ɛ using a 3-layer network with O(d²/ɛ) neurons, cannot be approximated to an accuracy higher than O(1/d⁴) using a 2-layer network, unless its width is exponential in d. In fact, we show the same result more generally, for any indicator of an ellipsoid, x ↦ 1(‖Ax + b‖ ≤ r) (where A is a non-singular matrix and b is a vector). The proof is based on a reduction from the main result of (Eldan & Shamir, 2016), which shows a separation between 2-layer and 3-layer networks using a more complicated and less natural radial function.

- We prove that any L1-radial function x ↦ f(‖x‖₁), where x ∈ R^d and f : R → R is piecewise-linear, cannot be approximated to accuracy ɛ by a depth-2 ReLU network of width less than Ω̃(min{1/ɛ, exp(Ω(d))}). In contrast, such functions can be represented exactly by 3-layer ReLU networks.

- We show that this depth/width trade-off can also be observed experimentally: Specifically, when using standard backpropagation to learn the indicators of the L1 and L2 unit balls, 3-layer nets give significantly better performance compared to 2-layer nets (even if the latter are much larger). Our theoretical results indicate that this gap in performance is due to approximation error issues. This experiment also highlights the fact that our separation result is for a natural function that is not just well-approximated by some 3-layer network, but can also be learned well from data using standard methods.

- Finally, we prove that any member of a wide family of non-linear and twice-differentiable functions (including for instance x ↦ x² on [0, 1]), which can be approximated to accuracy ɛ using ReLU networks of depth and width O(poly(log(1/ɛ))), cannot be approximated to similar accuracy by constant-depth ReLU networks, unless their width is at least Ω(poly(1/ɛ)). We note that a similar result appeared online concurrently and independently of ours in (Yarotsky, 2016; Liang & Srikant, 2016), but the setting is a bit different (see related work below for more details).

RELATED WORK

The question of the effect of depth in neural networks has received considerable attention recently, and has been studied under various settings. Many of these works consider a somewhat different setting than ours, and hence are not directly comparable. These include networks which are not plain-vanilla ones (e.g. (Cohen et al., 2016; Delalleau & Bengio, 2011; Martens & Medabalimi, 2014)), measuring quantities other than approximation error (e.g. (Bianchini & Scarselli, 2014; Poole et al., 2016)), focusing only on approximation upper bounds (e.g. (Shaham et al., 2016)), or measuring approximation error in terms of L∞-type bounds, i.e. sup_x |f(x) − f̃(x)|, rather than L2-type bounds E_x[(f(x) − f̃(x))²] (e.g. (Yarotsky, 2016; Liang & Srikant, 2016)). We note that the latter distinction is important: Although L∞ bounds are more common in the approximation theory literature, L2 bounds are more natural in the context of statistical machine learning problems (where we care about the expected loss over some distribution). Moreover, L2 approximation lower bounds are stronger, in the sense that an L2 lower bound easily translates to an L∞ lower bound, but not vice versa.¹

¹ To give a trivial example, ReLU networks always express continuous functions, and therefore can never approximate a discontinuous function such as x ↦ 1(x ≥ 0) in an L∞ sense, yet can easily approximate it in an L2 sense given any continuous distribution.

A noteworthy paper in the same setting as ours is (Telgarsky, 2016), which proves a separation result between the expressivity of ReLU networks of depth k and depth o(k/log(k)) (for any k). This holds even for one-dimensional functions, where a depth-k network is shown to realize a saw-tooth function with exp(Ω(k)) oscillations, whereas any network of depth o(k/log(k)) would require a width super-polynomial in k to approximate it to better than constant accuracy. In fact, we ourselves rely on this construction in the proofs of our results in Sec. 5. On the flip side, in this paper we focus on separation in terms of the accuracy or the dimension, rather than a parameter k. Moreover, the construction there relies on a highly oscillatory function, with a Lipschitz constant exponential in k almost everywhere. In contrast, we focus on simpler functions, of the type that are likely to be learnable from data using standard methods.

Our separation results in Sec. 5 (for smooth non-linear functions) are closely related to those of (Yarotsky, 2016; Liang & Srikant, 2016), which appeared online concurrently and independently of our work, and the proof ideas are quite similar. However, these papers focus on L∞ bounds rather than L2 bounds. Moreover, (Yarotsky, 2016) considers a class of functions different than ours in their positive results, and (Liang & Srikant, 2016) consider networks employing a mix of ReLU and threshold activations, whereas we consider purely ReLU networks. Another relevant and insightful work is (Poggio et al., 2016), which considers width vs. depth and provides general results on the expressibility of functions with a compositional nature. However, the focus there is on worst-case approximation over general classes of functions, rather than separation results in terms of specific functions as we do here, and the details and setting are somewhat orthogonal to ours.

2. Preliminaries

In general, we let bold-faced letters such as x = (x₁, ..., x_d) denote vectors, and capital letters denote matrices or probabilistic events. ‖·‖ denotes the Euclidean norm, and ‖·‖₁ the 1-norm. 1(·) denotes the indicator function. We use the standard asymptotic notation O(·) and Ω(·) to hide constants, and Õ(·) and Ω̃(·) to hide constants and factors logarithmic in the problem parameters.

Neural Networks. We consider feed-forward neural networks, computing functions from R^d to R. The network is composed of layers of neurons, where each neuron computes a function of the form x ↦ σ(⟨w, x⟩ + b), where w is a weight vector, b is a bias term, and σ : R → R is a non-linear activation function, such as the ReLU function σ(z) = [z]₊ = max{0, z}.

Letting σ(Wx + b) be shorthand for (σ(⟨w₁, x⟩ + b₁), ..., σ(⟨w_n, x⟩ + b_n)), we define a layer of n neurons as x ↦ σ(Wx + b). By denoting the output of the i-th layer as O_i, we can define a network of arbitrary depth recursively by O_{i+1} = σ(W_{i+1} O_i + b_{i+1}), where W_i and b_i represent the weight matrix and bias vector of the i-th layer, respectively. Following a standard convention for multi-layer networks, the final layer h is a purely linear function with no bias, i.e. O_h = W_h O_{h−1}. We define the depth of the network as the number of layers l, and denote the number of neurons n_i in the i-th layer as the size of the layer. We define the width of a network as max_{i∈{1,...,l}} n_i. Finally, a ReLU network is a neural network where all the non-linear activations are the ReLU function. We use 2-layer and 3-layer to denote networks of depth 2 and 3. In particular, in our notation a 2-layer ReLU network has the form

x ↦ Σ_{i=1}^{n₁} v_i [⟨w_i, x⟩ + b_i]₊

for some parameters v₁, b₁, ..., v_{n₁}, b_{n₁} and d-dimensional vectors w₁, ..., w_{n₁}. Similarly, a 3-layer ReLU network has the form

x ↦ Σ_{i=1}^{n₂} u_i [ Σ_{j=1}^{n₁} v_{i,j} [⟨w_{i,j}, x⟩ + b_{i,j}]₊ + c_i ]₊

for some parameters {u_i, v_{i,j}, b_{i,j}, c_i, w_{i,j}}.

Approximation error. Given some function f on a domain X endowed with some probability distribution (with density function µ), we define the quality of its approximation by some other function f̃ as

∫_X (f(x) − f̃(x))² µ(x) dx = E_{x∼µ}[(f(x) − f̃(x))²].

We refer to this as approximation in the L2-norm sense. In one of our results (Thm. 6), we also consider approximation in the L∞-norm sense, defined as sup_{x∈X} |f(x) − f̃(x)|. Clearly, this upper-bounds the (square root of the) L2 approximation error defined above, so as discussed in the introduction, lower bounds on the L2 approximation error (w.r.t. any distribution) are stronger than lower bounds on the L∞ approximation error.

3. Indicators of L2 Balls and Ellipsoids

We begin by considering one of the simplest possible function classes on R^d, namely indicators of L2 balls (and more generally, ellipsoids). The ability to compute such functions is necessary for many useful primitives, for example determining whether the distance between two points in Euclidean space is below or above some threshold (either with respect to the Euclidean distance, or a more general Mahalanobis distance). In this section, we show a depth separation result for such functions: Although they can be easily approximated with 3-layer networks, no 2-layer network can approximate them to high accuracy w.r.t. any distribution, unless its width is exponential in the dimension. This is formally stated in the following theorem:

Theorem 1 (Inapproximability with 2-layer networks). The following holds for some positive universal constants c₁, c₂, c₃, c₄, and any network employing an activation function satisfying Assumptions 1 and 2 in Eldan & Shamir (2016): For any d > c₁, any non-singular matrix A ∈ R^{d×d}, any b ∈ R^d and any r ∈ (0, ∞), there exists a continuous probability distribution γ on R^d, such that for any function g computed by a 2-layer network of width at most c₃ exp(c₄ d), and for the function f(x) = 1(‖Ax + b‖ ≤ r), we have

∫_{R^d} (f(x) − g(x))² γ(x) dx ≥ c₂ / d⁴.

We note that the assumptions from (Eldan & Shamir, 2016) are very mild, and apply to all standard activation functions, including ReLU, sigmoid and threshold.
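To make the notation above concrete, the following sketch implements the 2-layer and 3-layer ReLU forms of Sec. 2 in plain NumPy, together with a Monte Carlo estimate of the L2 approximation error against the ellipsoid indicator of Thm. 1. For simplicity the 3-layer form shares one first hidden layer across output units (equivalent up to relabeling of neurons); the parameter shapes, the Gaussian test distribution and the random weights are illustrative assumptions, not part of the theorem.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def two_layer_relu(X, W, b, v):
    """x -> sum_i v_i [<w_i, x> + b_i]_+ ; W has shape (n1, d)."""
    return relu(X @ W.T + b) @ v

def three_layer_relu(X, W, b, V, c, u):
    """x -> sum_i u_i [ sum_j V_ij [<w_j, x> + b_j]_+ + c_i ]_+ (shared first layer)."""
    return relu(relu(X @ W.T + b) @ V.T + c) @ u

def l2_error(f_vals, g_vals):
    """Monte Carlo estimate of E_{x~gamma}[(f(x) - g(x))^2]."""
    return np.mean((f_vals - g_vals) ** 2)

# Illustrative use: the ellipsoid indicator f(x) = 1(||Ax + b|| <= r) of Thm. 1,
# compared against a randomly initialised 2-layer network under a Gaussian input
# distribution (any continuous distribution could be plugged in here).
d, n1, N = 10, 50, 10_000
rng = np.random.default_rng(0)
A, bias, r = np.eye(d), np.zeros(d), 1.0
X = rng.standard_normal((N, d))
f_vals = (np.linalg.norm(X @ A.T + bias, axis=1) <= r).astype(float)
W, b, v = rng.standard_normal((n1, d)), rng.standard_normal(n1), rng.standard_normal(n1)
print("L2 error of an untrained 2-layer net:", l2_error(f_vals, two_layer_relu(X, W, b, v)))
```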
For completeness, the fully stated assumptions are presented in Subsection A.1.

The formal proof of Thm. 1 (provided below) is based on a reduction from the main result of (Eldan & Shamir, 2016), which shows the existence of a certain radial function (depending on the input x only through its norm) and a probability distribution, such that the function cannot be approximated to better than constant accuracy by any 2-layer network whose width is less than exponential in the dimension d. A closer look at that proof reveals that this function (denoted as g) can be expressed as a sum of Θ(d²) indicators of L2 balls of various radii. We argue that if we could accurately approximate a given L2 ball indicator with respect to all distributions, then we could approximate all the indicators whose sum adds up to g, and hence reach a contradiction. By a linear transformation argument, we show that the same contradiction would occur if we could approximate the indicator of a non-degenerate ellipsoid with respect to any distribution. The formal proof is provided below:

Proof of Thm. 1. Assume by contradiction that for f as described in the theorem, and for any distribution γ, there exists a 2-layer network f̃_γ of width at most c₃ exp(c₄ d) such that

∫_{x∈R^d} (f(x) − f̃_γ(x))² γ(x) dx ≤ ɛ := c₂/d⁴.

Let Â and b̂ be a d×d non-singular matrix and a vector in R^d respectively, to be determined later. We begin by performing a change of variables, y = Âx + b̂, x = Â⁻¹(y − b̂),

dx = |det(Â⁻¹)| dy, which yields

∫_{y∈R^d} ( f(Â⁻¹(y − b̂)) − f̃_γ(Â⁻¹(y − b̂)) )² γ(Â⁻¹(y − b̂)) |det(Â⁻¹)| dy ≤ ɛ.   (1)

In particular, let us choose the distribution γ defined as γ(z) = |det(Â)| µ(Âz + b̂), where µ is the (continuous) distribution used in the main result of (Eldan & Shamir, 2016) (note that γ is indeed a distribution, since ∫_z γ(z) dz = |det(Â)| ∫_z µ(Âz + b̂) dz, which by the change of variables x = Âz + b̂, dx = |det(Â)| dz, equals ∫_x µ(x) dx = 1). Plugging the definition of γ into Eq. (1), and using the fact that |det(Â⁻¹)| |det(Â)| = 1, we get

∫_{y∈R^d} ( f(Â⁻¹(y − b̂)) − f̃_γ(Â⁻¹(y − b̂)) )² µ(y) dy ≤ ɛ.   (2)

Letting z > 0 be an arbitrary parameter, we now pick Â = (z/r)A and b̂ = (z/r)b. Recalling the definition of f as x ↦ 1(‖Ax + b‖ ≤ r), we get that

∫_{y∈R^d} ( 1(‖y‖ ≤ z) − f̃_γ((r/z)A⁻¹(y − (z/r)b)) )² µ(y) dy ≤ ɛ.   (3)

Note that f̃_γ((r/z)A⁻¹(y − (z/r)b)) expresses a 2-layer network composed with a linear transformation of the input, and hence can be expressed in turn by a 2-layer network (as we can absorb the linear transformation into the parameters of each neuron in the first layer). Therefore, letting ‖f‖_{L2(µ)} = (∫_y f²(y) µ(y) dy)^{1/2} denote the norm in L2(µ) function space, we have shown the following: For any z > 0, there exists a 2-layer network f̃_z such that

‖ 1(‖·‖ ≤ z) − f̃_z(·) ‖_{L2(µ)} ≤ √ɛ.   (4)

With this key result in hand, we now turn to complete the proof. We consider the function g from (Eldan & Shamir, 2016), for which it was proven that no 2-layer network can approximate it w.r.t. µ to better than constant accuracy, unless its width is exponential in the dimension d. In particular, g can be written as

g(x) = Σ_{i=1}^n ɛ_i 1(‖x‖ ∈ [a_i, b_i]),

where the [a_i, b_i] are disjoint intervals, ɛ_i ∈ {−1, +1}, and n = Θ(d²) where d is the dimension. Since g can also be written as Σ_{i=1}^n ɛ_i (1(‖x‖ ≤ b_i) − 1(‖x‖ ≤ a_i)), we get by Eq. (4) and the triangle inequality that

‖ g(·) − Σ_{i=1}^n ɛ_i ( f̃_{b_i}(·) − f̃_{a_i}(·) ) ‖_{L2(µ)} ≤ Σ_{i=1}^n |ɛ_i| ( ‖1(‖·‖ ≤ b_i) − f̃_{b_i}(·)‖_{L2(µ)} + ‖1(‖·‖ ≤ a_i) − f̃_{a_i}(·)‖_{L2(µ)} ) ≤ 2n√ɛ.

However, since a linear combination of 2n 2-layer neural networks of width at most w is still a 2-layer network, of width at most 2nw, we get that x ↦ Σ_{i=1}^n ɛ_i ( f̃_{b_i}(x) − f̃_{a_i}(x) ) is a 2-layer network of width at most Θ(d²) · c₃ exp(c₄ d), which approximates g to an accuracy better than 2n√ɛ = Θ(d²) · √(c₂/d⁴) = Θ(1) · √c₂. Hence, by picking c₂, c₃, c₄ sufficiently small, we get a contradiction to the result of (Eldan & Shamir, 2016), that no 2-layer network of width smaller than c·exp(c·d) (for some constant c) can approximate g to better than constant accuracy, for a sufficiently large dimension d.

To complement Thm. 1, we also show that such indicator functions can be easily approximated with 3-layer networks. The argument is quite simple: Using an activation such as the ReLU or the sigmoid, we can use one layer to approximate any Lipschitz continuous function on any bounded interval, and in particular x ↦ x². Given a vector x ∈ R^d, we can apply this construction to each coordinate x_i separately, hence approximating x ↦ ‖x‖² = Σ_i x_i². Similarly, we can approximate x ↦ ‖Ax + b‖² for arbitrary fixed matrices A and vectors b. Finally, with a 3-layer network, we can use the second layer to compute a continuous approximation of the threshold function z ↦ 1(z ≤ r). Composing these two layers, we get an arbitrarily good approximation of the function x ↦ 1(‖Ax + b‖ ≤ r) w.r.t. any continuous distribution, with the network size scaling polynomially with the dimension d and the required accuracy.
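The two-stage construction just described can be written down directly. The sketch below is not the exact construction behind Thm. 2 (whose details are in the supplementary material): it approximates each coordinate square with a piecewise-linear ReLU layer and then applies a two-neuron ReLU ramp as a soft threshold; the knot spacing m, ramp width eps and input range B are illustrative choices.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def square_as_relu_layer(t, B=2.0, m=16):
    """ReLU-expressible piecewise-linear interpolant of t -> t^2 on [-B, B],
    written as a sum of hinge functions (one first-layer neuron per knot)."""
    knots = np.linspace(-B, B, m + 1)
    vals = knots ** 2
    slopes = np.diff(vals) / np.diff(knots)          # slope on each linear segment
    out = vals[0] + slopes[0] * relu(t - knots[0])
    for j in range(1, m):
        out = out + (slopes[j] - slopes[j - 1]) * relu(t - knots[j])
    return out

def ball_indicator_3layer(X, r=1.0, eps=0.05, B=2.0, m=16):
    """Sketch of the 3-layer idea: layer 1 approximates x_i -> x_i^2 coordinate-wise,
    layer 2 is a 2-neuron ReLU ramp approximating z -> 1(z <= r^2),
    and the output layer is linear."""
    sq_norm = np.sum(square_as_relu_layer(X, B=B, m=m), axis=1)   # ~ ||x||^2
    # ramp: equals 1 for z <= r^2 - eps, 0 for z >= r^2 + eps, linear in between
    return relu((r**2 + eps - sq_norm) / (2 * eps)) - relu((r**2 - eps - sq_norm) / (2 * eps))

rng = np.random.default_rng(0)
X = rng.uniform(-1.5, 1.5, size=(100_000, 5))
target = (np.linalg.norm(X, axis=1) <= 1.0).astype(float)
print("empirical L2 error:", np.mean((target - ball_indicator_3layer(X)) ** 2))
```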
In the theorem below, we formalize this intuition, where for simplicity we focus on approximating the indicator of the unit ball:

Theorem 2 (Approximability with 3-layer networks). Given δ > 0, for any activation function σ satisfying Assumption 1 in Eldan & Shamir (2016) and any continuous probability distribution µ on R^d, there exists a function g expressible by a 3-layer network of width at most max{8 c_σ d²/δ, c_σ/(2δ)}, such that

∫_{R^d} (g(x) − 1(‖x‖ ≤ 1))² µ(x) dx ≤ δ,

where c_σ is a constant depending solely on σ.

The proof of the theorem appears in the supplementary material.

3.1. An Experiment

In this subsection, we empirically demonstrate that indicator functions of L2 balls are indeed easier to learn with a 3-layer network, compared to a 2-layer network (even if the 2-layer network is significantly larger). This indicates that the depth/width trade-off for indicators of balls, predicted by our theory, can indeed be observed experimentally. Moreover, it highlights the fact that our separation result is for simple natural functions, which can be learned reasonably well from data using standard methods.

For our experiment, we sampled data instances in R^100, with a direction chosen uniformly at random and a norm drawn uniformly at random from the interval [0, 2]. To each instance, we associated a target value computed according to the target function f(x) = 1(‖x‖ ≤ 1). Additional examples were generated in the same manner and used as a validation set. We trained 5 ReLU networks on this dataset: one 3-layer network, with a first hidden layer of size 100, a second hidden layer of size 20, and a linear output neuron; and four 2-layer networks, with hidden layers of size 100, 200, 400 and 800, and a linear output neuron. Training was performed with backpropagation, using the TensorFlow library. We used the squared loss ℓ(y, y′) = (y − y′)² and batches of size 100. For all networks, we chose a momentum parameter of 0.95, and a learning rate starting at 0.1 and decaying by a multiplicative factor of 0.95 every 1000 batches.

The results are presented in Fig. 1. As can clearly be seen, the 3-layer network achieves significantly better performance than the 2-layer networks. This is true even though some of these networks are significantly larger and have more parameters (for example, the 2-layer, width-800 network has 80K parameters, vs. 10K parameters for the 3-layer network). This gap in performance is the exact opposite of what might be expected based on parameter counting alone. Moreover, increasing the width of the 2-layer networks exhibits diminishing returns: The performance improvement from doubling the width from 100 to 200 is much larger than that from doubling the width from 200 to 400 or from 400 to 800. This indicates that one would need a much larger 2-layer network to match the performance of the 3-layer, width-100 network. Thus, we conclude that the network's depth indeed plays a crucial role, and that 3-layer networks are inherently more suitable to express indicator functions of the type we studied.

Figure 1. The experiment results, depicting the networks' root mean square error over the training set (top) and validation set (bottom), as a function of the number of batches processed, for the 3-layer (width 100) network and the 2-layer networks of widths 100, 200, 400 and 800. Best viewed in color.
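The experiment is straightforward to reproduce in outline. The sketch below regenerates the data distribution and the five architectures described above using tf.keras; the original code used TensorFlow directly, and the dataset sizes, epoch count and exact schedule details here are placeholders rather than the paper's settings.

```python
import numpy as np
import tensorflow as tf

def sample(n, d=100):
    """Direction uniform on the sphere, norm uniform on [0, 2], label = 1(||x|| <= 1)."""
    x = np.random.randn(n, d)
    x /= np.linalg.norm(x, axis=1, keepdims=True)
    x *= np.random.uniform(0.0, 2.0, size=(n, 1))
    y = (np.linalg.norm(x, axis=1) <= 1.0).astype(np.float32)
    return x.astype(np.float32), y

def three_layer():
    return tf.keras.Sequential([tf.keras.layers.Dense(100, activation="relu"),
                                tf.keras.layers.Dense(20, activation="relu"),
                                tf.keras.layers.Dense(1)])

def two_layer(width):
    return tf.keras.Sequential([tf.keras.layers.Dense(width, activation="relu"),
                                tf.keras.layers.Dense(1)])

# momentum 0.95 and a 0.95 decay every 1000 batches, as described in the text
schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.1, decay_steps=1000, decay_rate=0.95, staircase=True)

models = [("3-layer, width 100", three_layer())] + \
         [(f"2-layer, width {w}", two_layer(w)) for w in (100, 200, 400, 800)]
x_train, y_train = sample(100_000)       # placeholder dataset sizes
x_val, y_val = sample(10_000)
for name, model in models:
    model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=schedule, momentum=0.95),
                  loss="mse")
    hist = model.fit(x_train, y_train, batch_size=100, epochs=5,
                     validation_data=(x_val, y_val), verbose=0)
    print(name, "validation RMSE:", float(np.sqrt(hist.history["val_loss"][-1])))
```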
4. L1 Radial Functions; ReLU Networks

Having considered functions depending on the L2 norm, we now turn to consider functions depending on the L1 norm. Focusing on ReLU networks, we will show a certain separation result holding for any non-linear function which depends on the input x only via its 1-norm ‖x‖₁.

Theorem 3. Let f : [0, ∞) → R be a function such that for some r, δ > 0 and ɛ ∈ (0, 1/2),

inf_{a,b∈R} E_{x uniform on [r, (1+ɛ)r]} [ (f(x) − (ax − b))² ] > δ.

Then there exists a distribution γ over {x : ‖x‖₁ ≤ (1 + ɛ)r}, such that if a 2-layer ReLU network F(x) satisfies

∫_x (f(‖x‖₁) − F(x))² γ(x) dx ≤ δ/2,

then its width must be at least Ω̃(min{1/ɛ, exp(Ω(d))}) (where the Ω̃ notation hides constants and factors logarithmic in ɛ, d).

The proof appears in the supplementary material. We note that δ controls how linearly inapproximable f is in a narrow interval (of width ɛr) around r, and that δ is generally dependent on ɛ. To give a concrete example, suppose that f(z) = [z − 1]₊, which cannot be approximated by a linear function to an accuracy better than O(ɛ²) in an ɛ-neighborhood of 1. By taking r = 1 − ɛ/2 and δ = O(ɛ²), we get that no 2-layer network can approximate the function [‖x‖₁ − 1]₊ (at least with respect to some distribution), unless its width is Ω̃(min{1/ɛ, exp(Ω(d))}).
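A quick numerical sanity check of the δ = Θ(ɛ²) scaling in the example above: the sketch below estimates the infimum over affine functions of E[(f(x) − (ax + b))²] (the same quantity as in Thm. 3, since the sign convention on b is immaterial) by a least-squares fit over samples from [r, (1+ɛ)r]. The sample size and the choice r = 1 − ɛ/2 follow the example and are otherwise arbitrary.

```python
import numpy as np

def linear_inapprox(eps, n=200_000, seed=0):
    """Estimate inf over affine functions a*x + b of E[(f(x) - (a*x + b))^2]
    for f(z) = [z - 1]_+, with x uniform on [r, (1+eps)*r] and r = 1 - eps/2."""
    rng = np.random.default_rng(seed)
    r = 1.0 - eps / 2.0
    x = rng.uniform(r, (1.0 + eps) * r, size=n)
    f = np.maximum(x - 1.0, 0.0)
    A = np.stack([x, np.ones_like(x)], axis=1)       # design matrix for a*x + b
    coef, *_ = np.linalg.lstsq(A, f, rcond=None)     # least-squares affine fit
    return np.mean((f - A @ coef) ** 2)

for eps in (0.4, 0.2, 0.1, 0.05):
    delta = linear_inapprox(eps)
    print(f"eps = {eps:5.2f}   delta ~ {delta:.2e}   delta / eps^2 ~ {delta / eps**2:.4f}")
```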

On the flip side, f(‖x‖₁) can in this case be expressed exactly by a 3-layer, width-2d ReLU network, x ↦ [ Σ_{i=1}^d ([x_i]₊ + [−x_i]₊) − 1 ]₊, where the output neuron is simply the identity function. The same argument would work for any piecewise-linear f. More generally, the same kind of argument would work for any function f exhibiting non-linear behavior at some point: Such functions can be well-approximated by 3-layer networks (by approximating f with a piecewise-linear function), yet any approximating 2-layer network has a lower bound on its size as specified in the theorem.

Intuitively, the proof relies on showing that any good 2-layer approximation of f(‖x‖₁) must capture the non-linear behavior of f close to most points x satisfying ‖x‖₁ = r. However, a 2-layer ReLU network x ↦ Σ_{j=1}^N a_j [⟨w_j, x⟩ + b_j]₊ is piecewise-linear, with non-linearities only on the union ∪_j {x : ⟨w_j, x⟩ + b_j = 0} of the N hyperplanes. This implies that most points x with ‖x‖₁ = r must be ɛ-close to one of the hyperplanes {x : ⟨w_j, x⟩ + b_j = 0}. However, the geometry of the L1 sphere {x : ‖x‖₁ = r} is such that the ɛ-neighborhood of any single hyperplane can only cover a small portion of it, yet we need to cover most of the L1 sphere. Using this and an appropriate construction, we show that the required number of hyperplanes is at least on the order of 1/ɛ, as long as ɛ > exp(−O(d)) (and if ɛ is smaller than that, we can simply use one neuron/hyperplane for each of the 2^d facets of the L1 ball, and get a covering using 2^d neurons/hyperplanes). The formal proof appears in the supplementary material.

We note that the bound in Thm. 3 is of a weaker nature than the bound in the previous section, in that the lower bound is only polynomial rather than exponential (albeit w.r.t. different problem parameters: ɛ vs. d). Nevertheless, we believe this does point out that L1 balls also pose a geometric difficulty for 2-layer networks, and we conjecture that our lower bound can be considerably improved: Indeed, at the moment we do not know how to approximate a function such as x ↦ [‖x‖₁ − 1]₊ with 2-layer networks to better than constant accuracy, using less than Ω(2^d) neurons.

Finally, we performed an experiment similar to the one presented in Subsection 3.1, where we verified that the bounds we derived are indeed reflected in differences in empirical performance, when training 2-layer nets versus 3-layer nets. The reader is referred to Sec. B for the full details of the experiment and its results.
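The exact width-2d representation mentioned at the start of this section is simple enough to write out and verify; the check against a direct NumPy evaluation below is only a sanity test.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def l1_hinge_3layer(X):
    """Exact 3-layer ReLU representation of x -> [ ||x||_1 - 1 ]_+ :
    layer 1: the 2d neurons [x_i]_+ and [-x_i]_+ (so |x_i| = [x_i]_+ + [-x_i]_+),
    layer 2: the single neuron [ sum_i |x_i| - 1 ]_+,
    layer 3: identity output."""
    abs_parts = relu(X) + relu(-X)            # first hidden layer, width 2d
    return relu(abs_parts.sum(axis=1) - 1.0)  # second hidden layer, width 1

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(1000, 8))
direct = np.maximum(np.abs(X).sum(axis=1) - 1.0, 0.0)
assert np.allclose(l1_hinge_3layer(X), direct)   # the representation is exact
```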
5. C² Nonlinear Functions; ReLU Networks

In this section, we establish a depth separation result for approximating continuously twice-differentiable (C²) functions using ReLU neural networks. Unlike the previous results in this paper, the separation is for depths which can be larger than 3, depending on the required approximation error. Also, the results will all be with respect to the uniform distribution µ_d over [0, 1]^d. As mentioned earlier, the results and techniques in this section are closely related to the independent results of (Yarotsky, 2016; Liang & Srikant, 2016), but our emphasis is on L2 rather than L∞ approximation bounds, and we focus on somewhat different network architectures and function classes.

Clearly, not all C² functions are difficult to approximate (e.g. a linear function can be expressed exactly with a 2-layer network). Instead, we consider functions which have a certain degree of non-linearity, in the sense that their Hessian is non-zero along some direction on a significant portion of the domain. Formally, we make the following definition:

Definition 1. Let µ_d denote the uniform distribution on [0, 1]^d. For a function f : [0, 1]^d → R and some λ > 0, denote

σ_λ(f) = sup { µ_d(U) : v ∈ S^{d−1}, U ∈ 𝒰, v⊤H(f)(x)v ≥ λ for all x ∈ U },

where S^{d−1} = {x : ‖x‖ = 1} is the d-dimensional unit hypersphere, H(f)(x) is the Hessian of f at x, and 𝒰 is the set of all connected and measurable subsets of [0, 1]^d.

In words, σ_λ(f) is the measure (w.r.t. the uniform distribution on [0, 1]^d) of the largest connected set in the domain of f on which, at every point, f has curvature at least λ along some fixed direction v. The prototypical functions f we are interested in are those for which σ_λ(f) is lower bounded by a constant (e.g. it equals 1 if f is λ-strongly convex). We stress that our results in this section hold equally well under the reverse-curvature condition v⊤H(f)(x)v ≤ −λ; however, for simplicity we focus on the former condition appearing in Def. 1.

Our goal is to show a depth separation result individually for any such function (that is, for any such function, there is a gap in the attainable error between deeper and shallower networks, even if the shallow network is considerably larger). As usual, we start with an inapproximability result. Specifically, we prove the following lower bound on the attainable approximation error of f, using a ReLU neural network of a given depth and width:

Theorem 4. For any C² function f : [0, 1]^d → R, any λ > 0, and any function g on [0, 1]^d expressible by a ReLU network of depth l and maximal width m, it holds that

∫_{[0,1]^d} (f(x) − g(x))² µ_d(x) dx ≥ c · λ² σ_λ(f)⁵ / (2m)^{4l},

where c > 0 is a universal constant.
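A small numerical illustration of the phenomenon behind Thm. 4, in one dimension: any ReLU network computes a piecewise-linear function, and the L2 error of approximating the curved function x ↦ x² by a piecewise-linear function decays only polynomially in the number of linear regions. The equispaced interpolant used below is just one convenient piecewise-linear approximator, not the argument used in the proof.

```python
import numpy as np

def pwl_sq_error(num_pieces, n_grid=200_001):
    """Squared L2([0,1]) error of the piecewise-linear interpolant of f(x) = x^2
    using `num_pieces` equal-length linear segments."""
    x = np.linspace(0.0, 1.0, n_grid)
    knots = np.linspace(0.0, 1.0, num_pieces + 1)
    interp = np.interp(x, knots, knots ** 2)     # piecewise-linear with num_pieces regions
    return np.mean((x ** 2 - interp) ** 2)       # ~ integral over [0,1] on a uniform grid

for m in (2, 4, 8, 16, 32):
    err = pwl_sq_error(m)
    # err * m^4 stays roughly constant: the error decays like m^{-4}, i.e. only polynomially
    print(f"{m:3d} linear regions: squared L2 error = {err:.3e}   err * m^4 = {err * m**4:.4f}")
```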

The theorem conveys a key tradeoff between depth and width when approximating a C² function using ReLU networks: The error cannot decay faster than polynomially in the width m, yet the bound deteriorates exponentially in the depth l. As we show later on, this deterioration does not stem from looseness in the bound: For well-behaved f, it is indeed possible to construct ReLU networks where the approximation error decays exponentially with depth.

The proof of Thm. 4 appears in the supplementary material, and is based on a series of intermediate results. First, we show that any strictly curved function (in a sense similar to Definition 1) cannot be well-approximated in an L2 sense by piecewise-linear functions, unless the number of linear regions is large. To that end, we first establish some necessary tools based on Legendre polynomials. We then prove a result specific to the one-dimensional case, including an explicit lower bound when the target function is quadratic (Thm. 9) or strongly convex or concave (Thm. 10). We then expand the construction to get an error lower bound in general dimension d, depending on the number of linear regions in the approximating piecewise-linear function. Finally, we note that any ReLU network induces a piecewise-linear function, and we bound the number of linear regions induced by a ReLU network of a given width and depth (using a lemma borrowed from (Telgarsky, 2016)). Combining this with the previous lower bound yields Thm. 4.

We now turn to complement this lower bound with an approximability result, showing that with more depth, a wide family of functions to which Thm. 4 applies can be approximated to exponentially high accuracy. Specifically, we consider functions which can be approximated using a moderate number of multiplications and additions, where the values of intermediate computations are bounded (for example, a special case is any function approximable by a moderately-sized Boolean circuit, or a polynomial). The key result is the following, which implies that the multiplication of two (bounded-size) numbers can be approximated by a ReLU network, with error decaying exponentially with depth:

Theorem 5. Let f : [−M, M]² → R, f(x, y) = x·y, and let ɛ > 0 be arbitrary. Then there exists a ReLU neural network g of width 4⌈log(M/ɛ)⌉ + 13 and depth 2⌈log(M/ɛ)⌉ + 9 satisfying

sup_{(x,y)∈[−M,M]²} |f(x, y) − g(x, y)| ≤ ɛ.

The idea of the construction is that depth allows us to compute highly oscillating functions, which can extract high-order bits from the binary representation of the inputs. Given these bits, one can compute the product by a procedure resembling long multiplication, as shown in Fig. 2 and formally proven below.

Figure 2. ReLU approximation of the function x ↦ x² obtained by extracting 5 bits. The number of linear segments grows exponentially with the number of bits and the approximating network size.

Proof of Thm. 5. We begin by observing that, by using a simple linear change of variables on x, we may assume without loss of generality that x ∈ [0, 1]: we can rescale x to the interval [0, 1] and then map it back to its original domain [−M, M], where the error is multiplied by a factor of 2M. Then, by requiring accuracy ɛ/(2M) instead of ɛ, the result will follow.

The key behind the proof is that performing bit-wise operations on the first k bits of x ∈ [0, 1] yields an estimate of the product to accuracy 2^{1−k} M.
Let x = Σ_{i=1}^∞ 2^{−i} x_i be the binary representation of x, where x_i is the i-th bit of x. Then

x·y = Σ_{i=1}^∞ 2^{−i} x_i y = Σ_{i=1}^k 2^{−i} x_i y + Σ_{i=k+1}^∞ 2^{−i} x_i y.   (5)

But since

Σ_{i=k+1}^∞ 2^{−i} x_i y ≤ Σ_{i=k+1}^∞ 2^{−i} y = 2^{−k} y ≤ 2^{1−k} M,

Eq. (5) implies

| x·y − Σ_{i=1}^k 2^{−i} x_i y | ≤ 2^{1−k} M.

Requiring that 2^{2−k} M ≤ ɛ/(2M), it suffices to show the existence of a network which approximates the function Σ_{i=1}^k 2^{−i} x_i y to accuracy ɛ/2, where k = 2⌈log(8M/ɛ)⌉.

This way, both approximations will be at most ɛ/2, resulting in the desired accuracy of ɛ.

Before specifying the architecture which extracts the i-th bit of x, we first describe the last 2 layers of the network. Let the penultimate layer comprise k neurons, each receiving both y and x_i as input, and having the weights (2^{−i}, 1, −1). Thus, the output of the i-th neuron in the penultimate layer is [2^{−i} y + x_i − 1]₊ = 2^{−i} x_i y. Let the final single output neuron have the weights (1, ..., 1, 0) ∈ R^{k+1}; this way, the output of the network is Σ_{i=1}^k 2^{−i} x_i y, as required.

We now specify the architecture which extracts the k most significant bits of x. In Telgarsky (2016), the author demonstrates how the composition of the function ϕ(x) = [2x]₊ − [4x − 2]₊ with itself i times, ϕ^i, yields a highly oscillatory triangle wave function on the domain [0, 1]. Furthermore, we observe that ϕ(x) = 0 for all x ≤ 0, and thus ϕ^i(x) = 0 for all x ≤ 0. Now, a linear shift of the input of ϕ^i by 2^{−i−1}, composed with

σ_δ(x) = [ x/(2δ) − 1/(4δ) ]₊ − [ x/(2δ) − 1/(4δ) − 1 ]₊,

which converges to 1[x ≥ 0.5](x) as δ → 0, results in an approximation of x ↦ x_i, namely σ_δ(ϕ^i(x − 2^{−i−1})). We stress that choosing δ such that the network approximates the bit-wise product to accuracy ɛ/2 requires δ to be on the order of ɛ, but this poses no problem, as representing such a number requires log(1/ɛ) bits, which is also the order of magnitude of the size of the network, as suggested by the following analysis.

Next, we compute the size of the network required to implement the above approximation. To compute ϕ, only two neurons are required; therefore ϕ^i can be computed using i layers with 2 neurons in each, and composing this with σ_δ requires a subsequent layer with 2 more neurons. To implement the i-th bit extractor we therefore require a network of size 2 × (i + 1). Using dummy neurons to propagate the i-th bit for i < k, the architecture extracting the k most significant bits of x is of size 2k × (k + 1). Adding the final component performing the multiplication estimation requires 2 more layers, of width k and 1 respectively, and an increase of the width by 1 to propagate y to the penultimate layer, resulting in a network of size (2k + 1) × (k + 1).
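The ingredients of the proof are easy to transcribe into NumPy: Telgarsky's triangle map ϕ, the threshold σ_δ, and the weighted sum Σ_i 2^{−i} x_i y. The sketch below evaluates these formulas directly rather than wiring explicit layers, and the choices k = 12 and δ = 10⁻³ are illustrative; in this simplified version, inputs within O(δ) of a dyadic point can still incur a noticeably larger error, which is why both the mean and the maximum error are printed.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def phi(z):
    """Telgarsky's triangle map: phi(z) = [2z]_+ - [4z - 2]_+ ."""
    return relu(2 * z) - relu(4 * z - 2)

def phi_iter(z, i):
    """i-fold composition of phi: a triangle wave on [0, 1], zero for z <= 0."""
    for _ in range(i):
        z = phi(z)
    return z

def sigma_delta(z, delta):
    """ReLU ramp converging to the step 1[z >= 0.5] as delta -> 0."""
    s = z / (2 * delta) - 1 / (4 * delta)
    return relu(s) - relu(s - 1)

def approx_product(x, y, k=12, delta=1e-3):
    """Approximate x*y for x, y in [0, 1]: extract the k most significant bits of x
    via sigma_delta(phi^i(x - 2^{-i-1})), then form sum_i 2^{-i} x_i y."""
    total = np.zeros_like(np.asarray(x, dtype=float))
    for i in range(1, k + 1):
        bit_i = sigma_delta(phi_iter(x - 2.0 ** (-i - 1), i), delta)   # ~ i-th bit of x
        total = total + 2.0 ** (-i) * bit_i * y
    return total

rng = np.random.default_rng(0)
x, y = rng.uniform(0, 1, 5000), rng.uniform(0, 1, 5000)
err = np.abs(x * y - approx_product(x, y))
print(f"mean |x*y - approx| = {err.mean():.2e},  max = {err.max():.2e}")
```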
Thm. 5 shows that multiplication can be performed very accurately by deep networks. Moreover, additions can be computed by ReLU networks exactly, using only a single layer with 4 neurons: Letting α, β ∈ R be arbitrary, (x, y) ↦ αx + βy is given in terms of ReLU summation by α[x]₊ − α[−x]₊ + β[y]₊ − β[−y]₊. Repeating these arguments, we see that any function which can be approximated by a bounded number of operations involving additions and multiplications can also be approximated well by moderately-sized networks. This is formalized in the following theorem, which provides an approximation error upper bound (in the L∞ sense, which is stronger than L2 for upper bounds):

Theorem 6. Let F_{t,M,ɛ} be the family of functions on the domain [0, 1]^d with the property that every f ∈ F_{t,M,ɛ} is approximable to accuracy ɛ with respect to the infinity norm using at most t operations involving weighted addition, (x, y) ↦ αx + βy with fixed α, β ∈ R, and multiplication, (x, y) ↦ x·y, where each intermediate computation stage is bounded in the interval [−M, M]. Then there exists a universal constant c, and a ReLU network g of width and depth at most c·(t log(1/ɛ) + t² log(M)), such that

sup_{x∈[0,1]^d} |f(x) − g(x)| ≤ 2ɛ.

As discussed in Sec. 2, this type of L∞ approximation bound implies an L2 approximation bound with respect to any distribution. The proof of the theorem appears in Sec. A.

Combining Thm. 4 and Thm. 6, we can state the following corollary, which formally shows how depth can be exponentially more valuable than width as a function of the target accuracy ɛ:

Corollary 1. Suppose f ∈ C² ∩ F_{t(ɛ),M(ɛ),ɛ}, where t(ɛ) = O(poly(log(1/ɛ))) and M(ɛ) = O(poly(1/ɛ)). Then approximating f to accuracy ɛ in the L2 norm using a fixed-depth ReLU network requires width at least poly(1/ɛ), whereas there exists a ReLU network of depth and width at most p(log(1/ɛ)) which approximates f to accuracy ɛ in the infinity norm, where p is a polynomial depending solely on f.

Proof. The lower bound follows immediately from Thm. 4. For the upper bound, observe that Thm. 6 implies an ɛ-approximation by a network of width and depth at most c·( t(ɛ/2) log(2/ɛ) + (t(ɛ/2))² log(M(ɛ/2)) ), which by the assumption of Corollary 1 can be bounded by p(log(1/ɛ)) for some polynomial p which depends solely on f.

Acknowledgements

This research is supported in part by an FP7 Marie Curie CIG grant, Israel Science Foundation grant 425/13, and the Intel ICRI-CI Institute. We would like to thank Shai Shalev-Shwartz for some illuminating discussions, and Eran Amar for his valuable help with the experiments.

References

Bianchini, M. and Scarselli, F. On the complexity of shallow and deep neural network classifiers. In ESANN, 2014.

Cohen, N., Sharir, O., and Shashua, A. On the expressive power of deep learning: A tensor analysis. In 29th Annual Conference on Learning Theory, 2016.

Cybenko, G. Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems, 2(4):303–314, 1989.

Delalleau, O. and Bengio, Y. Shallow vs. deep sum-product networks. In NIPS, 2011.

Eldan, R. and Shamir, O. The power of depth for feedforward neural networks. In 29th Annual Conference on Learning Theory, 2016.

He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.

Hornik, K. Approximation capabilities of multilayer feedforward networks. Neural Networks, 4(2):251–257, 1991.

Liang, S. and Srikant, R. Why deep neural networks? arXiv preprint, 2016.

Martens, J. and Medabalimi, V. On the expressive efficiency of sum product networks. arXiv preprint, 2014.

Poggio, T., Mhaskar, H., Rosasco, L., Miranda, B., and Liao, Q. Why and when can deep but not shallow networks avoid the curse of dimensionality: a review. arXiv preprint, 2016.

Poole, B., Lahiri, S., Raghu, M., Sohl-Dickstein, J., and Ganguli, S. Exponential expressivity in deep neural networks through transient chaos. In Advances in Neural Information Processing Systems, 2016.

Shaham, U., Cloninger, A., and Coifman, R. R. Provable approximation properties for deep neural networks. Applied and Computational Harmonic Analysis, 2016.

Telgarsky, M. Benefits of depth in neural networks. arXiv preprint arXiv:1602.04485, 2016.

Yarotsky, D. Error bounds for approximations with deep ReLU networks. arXiv preprint arXiv:1610.01145, 2016.


More information

Machine Learning: Chenhao Tan University of Colorado Boulder LECTURE 16

Machine Learning: Chenhao Tan University of Colorado Boulder LECTURE 16 Machine Learning: Chenhao Tan University of Colorado Boulder LECTURE 16 Slides adapted from Jordan Boyd-Graber, Justin Johnson, Andrej Karpathy, Chris Ketelsen, Fei-Fei Li, Mike Mozer, Michael Nielson

More information

<Special Topics in VLSI> Learning for Deep Neural Networks (Back-propagation)

<Special Topics in VLSI> Learning for Deep Neural Networks (Back-propagation) Learning for Deep Neural Networks (Back-propagation) Outline Summary of Previous Standford Lecture Universal Approximation Theorem Inference vs Training Gradient Descent Back-Propagation

More information

CSC 411 Lecture 10: Neural Networks

CSC 411 Lecture 10: Neural Networks CSC 411 Lecture 10: Neural Networks Roger Grosse, Amir-massoud Farahmand, and Juan Carrasquilla University of Toronto UofT CSC 411: 10-Neural Networks 1 / 35 Inspiration: The Brain Our brain has 10 11

More information

Neural Network Training

Neural Network Training Neural Network Training Sargur Srihari Topics in Network Training 0. Neural network parameters Probabilistic problem formulation Specifying the activation and error functions for Regression Binary classification

More information

In English, this means that if we travel on a straight line between any two points in C, then we never leave C.

In English, this means that if we travel on a straight line between any two points in C, then we never leave C. Convex sets In this section, we will be introduced to some of the mathematical fundamentals of convex sets. In order to motivate some of the definitions, we will look at the closest point problem from

More information

SGD and Deep Learning

SGD and Deep Learning SGD and Deep Learning Subgradients Lets make the gradient cheating more formal. Recall that the gradient is the slope of the tangent. f(w 1 )+rf(w 1 ) (w w 1 ) Non differentiable case? w 1 Subgradients

More information

The Power of Approximating: a Comparison of Activation Functions

The Power of Approximating: a Comparison of Activation Functions The Power of Approximating: a Comparison of Activation Functions Bhaskar DasGupta Department of Computer Science University of Minnesota Minneapolis, MN 55455-0159 email: dasgupta~cs.umn.edu Georg Schnitger

More information

Feedforward Neural Networks

Feedforward Neural Networks Feedforward Neural Networks Michael Collins 1 Introduction In the previous notes, we introduced an important class of models, log-linear models. In this note, we describe feedforward neural networks, which

More information

Understanding Neural Networks : Part I

Understanding Neural Networks : Part I TensorFlow Workshop 2018 Understanding Neural Networks Part I : Artificial Neurons and Network Optimization Nick Winovich Department of Mathematics Purdue University July 2018 Outline 1 Neural Networks

More information

Deep Feedforward Networks

Deep Feedforward Networks Deep Feedforward Networks Liu Yang March 30, 2017 Liu Yang Short title March 30, 2017 1 / 24 Overview 1 Background A general introduction Example 2 Gradient based learning Cost functions Output Units 3

More information

ECE G: Special Topics in Signal Processing: Sparsity, Structure, and Inference

ECE G: Special Topics in Signal Processing: Sparsity, Structure, and Inference ECE 18-898G: Special Topics in Signal Processing: Sparsity, Structure, and Inference Neural Networks: A brief touch Yuejie Chi Department of Electrical and Computer Engineering Spring 2018 1/41 Outline

More information

Deep Linear Networks with Arbitrary Loss: All Local Minima Are Global

Deep Linear Networks with Arbitrary Loss: All Local Minima Are Global homas Laurent * 1 James H. von Brecht * 2 Abstract We consider deep linear networks with arbitrary convex differentiable loss. We provide a short and elementary proof of the fact that all local minima

More information

Deep Learning: Self-Taught Learning and Deep vs. Shallow Architectures. Lecture 04

Deep Learning: Self-Taught Learning and Deep vs. Shallow Architectures. Lecture 04 Deep Learning: Self-Taught Learning and Deep vs. Shallow Architectures Lecture 04 Razvan C. Bunescu School of Electrical Engineering and Computer Science bunescu@ohio.edu Self-Taught Learning 1. Learn

More information

Deep Learning and Information Theory

Deep Learning and Information Theory Deep Learning and Information Theory Bhumesh Kumar (13D070060) Alankar Kotwal (12D070010) November 21, 2016 Abstract T he machine learning revolution has recently led to the development of a new flurry

More information

Some Background Material

Some Background Material Chapter 1 Some Background Material In the first chapter, we present a quick review of elementary - but important - material as a way of dipping our toes in the water. This chapter also introduces important

More information

Overparametrization for Landscape Design in Non-convex Optimization

Overparametrization for Landscape Design in Non-convex Optimization Overparametrization for Landscape Design in Non-convex Optimization Jason D. Lee University of Southern California September 19, 2018 The State of Non-Convex Optimization Practical observation: Empirically,

More information

Adaptive Online Gradient Descent

Adaptive Online Gradient Descent University of Pennsylvania ScholarlyCommons Statistics Papers Wharton Faculty Research 6-4-2007 Adaptive Online Gradient Descent Peter Bartlett Elad Hazan Alexander Rakhlin University of Pennsylvania Follow

More information

Artificial Neural Networks

Artificial Neural Networks Artificial Neural Networks 鮑興國 Ph.D. National Taiwan University of Science and Technology Outline Perceptrons Gradient descent Multi-layer networks Backpropagation Hidden layer representations Examples

More information

Day 3 Lecture 3. Optimizing deep networks

Day 3 Lecture 3. Optimizing deep networks Day 3 Lecture 3 Optimizing deep networks Convex optimization A function is convex if for all α [0,1]: f(x) Tangent line Examples Quadratics 2-norms Properties Local minimum is global minimum x Gradient

More information

The sample complexity of agnostic learning with deterministic labels

The sample complexity of agnostic learning with deterministic labels The sample complexity of agnostic learning with deterministic labels Shai Ben-David Cheriton School of Computer Science University of Waterloo Waterloo, ON, N2L 3G CANADA shai@uwaterloo.ca Ruth Urner College

More information

Lecture 13: Introduction to Neural Networks

Lecture 13: Introduction to Neural Networks Lecture 13: Introduction to Neural Networks Instructor: Aditya Bhaskara Scribe: Dietrich Geisler CS 5966/6966: Theory of Machine Learning March 8 th, 2017 Abstract This is a short, two-line summary of

More information

Deep Feedforward Networks

Deep Feedforward Networks Deep Feedforward Networks Yongjin Park 1 Goal of Feedforward Networks Deep Feedforward Networks are also called as Feedforward neural networks or Multilayer Perceptrons Their Goal: approximate some function

More information

On John type ellipsoids

On John type ellipsoids On John type ellipsoids B. Klartag Tel Aviv University Abstract Given an arbitrary convex symmetric body K R n, we construct a natural and non-trivial continuous map u K which associates ellipsoids to

More information

Linear & nonlinear classifiers

Linear & nonlinear classifiers Linear & nonlinear classifiers Machine Learning Hamid Beigy Sharif University of Technology Fall 1396 Hamid Beigy (Sharif University of Technology) Linear & nonlinear classifiers Fall 1396 1 / 44 Table

More information

Online Convex Optimization

Online Convex Optimization Advanced Course in Machine Learning Spring 2010 Online Convex Optimization Handouts are jointly prepared by Shie Mannor and Shai Shalev-Shwartz A convex repeated game is a two players game that is performed

More information

Classical generalization bounds are surprisingly tight for Deep Networks

Classical generalization bounds are surprisingly tight for Deep Networks CBMM Memo No. 9 July, 28 Classical generalization bounds are surprisingly tight for Deep Networks Qianli Liao, Brando Miranda, Jack Hidary 2 and Tomaso Poggio Center for Brains, Minds, and Machines, MIT

More information

Backpropagation Introduction to Machine Learning. Matt Gormley Lecture 12 Feb 23, 2018

Backpropagation Introduction to Machine Learning. Matt Gormley Lecture 12 Feb 23, 2018 10-601 Introduction to Machine Learning Machine Learning Department School of Computer Science Carnegie Mellon University Backpropagation Matt Gormley Lecture 12 Feb 23, 2018 1 Neural Networks Outline

More information

Linear & nonlinear classifiers

Linear & nonlinear classifiers Linear & nonlinear classifiers Machine Learning Hamid Beigy Sharif University of Technology Fall 1394 Hamid Beigy (Sharif University of Technology) Linear & nonlinear classifiers Fall 1394 1 / 34 Table

More information

Deep Feedforward Networks. Seung-Hoon Na Chonbuk National University

Deep Feedforward Networks. Seung-Hoon Na Chonbuk National University Deep Feedforward Networks Seung-Hoon Na Chonbuk National University Neural Network: Types Feedforward neural networks (FNN) = Deep feedforward networks = multilayer perceptrons (MLP) No feedback connections

More information

Expressiveness of Rectifier Networks

Expressiveness of Rectifier Networks Xingyuan Pan Vivek Srikumar The University of Utah, Salt Lake City, UT 84112, USA XPAN@CS.UTAH.EDU SVIVEK@CS.UTAH.EDU Abstract Rectified Linear Units (ReLUs have been shown to ameliorate the vanishing

More information

Empirical Risk Minimization

Empirical Risk Minimization Empirical Risk Minimization Fabrice Rossi SAMM Université Paris 1 Panthéon Sorbonne 2018 Outline Introduction PAC learning ERM in practice 2 General setting Data X the input space and Y the output space

More information

Lecture 1: Entropy, convexity, and matrix scaling CSE 599S: Entropy optimality, Winter 2016 Instructor: James R. Lee Last updated: January 24, 2016

Lecture 1: Entropy, convexity, and matrix scaling CSE 599S: Entropy optimality, Winter 2016 Instructor: James R. Lee Last updated: January 24, 2016 Lecture 1: Entropy, convexity, and matrix scaling CSE 599S: Entropy optimality, Winter 2016 Instructor: James R. Lee Last updated: January 24, 2016 1 Entropy Since this course is about entropy maximization,

More information

IFT Lecture 7 Elements of statistical learning theory

IFT Lecture 7 Elements of statistical learning theory IFT 6085 - Lecture 7 Elements of statistical learning theory This version of the notes has not yet been thoroughly checked. Please report any bugs to the scribes or instructor. Scribe(s): Brady Neal and

More information

COMP 551 Applied Machine Learning Lecture 14: Neural Networks

COMP 551 Applied Machine Learning Lecture 14: Neural Networks COMP 551 Applied Machine Learning Lecture 14: Neural Networks Instructor: Ryan Lowe (ryan.lowe@mail.mcgill.ca) Slides mostly by: Class web page: www.cs.mcgill.ca/~hvanho2/comp551 Unless otherwise noted,

More information

On Learnability, Complexity and Stability

On Learnability, Complexity and Stability On Learnability, Complexity and Stability Silvia Villa, Lorenzo Rosasco and Tomaso Poggio 1 Introduction A key question in statistical learning is which hypotheses (function) spaces are learnable. Roughly

More information

CSE 417T: Introduction to Machine Learning. Final Review. Henry Chai 12/4/18

CSE 417T: Introduction to Machine Learning. Final Review. Henry Chai 12/4/18 CSE 417T: Introduction to Machine Learning Final Review Henry Chai 12/4/18 Overfitting Overfitting is fitting the training data more than is warranted Fitting noise rather than signal 2 Estimating! "#$

More information

Deep Feedforward Networks

Deep Feedforward Networks Deep Feedforward Networks Liu Yang March 30, 2017 Liu Yang Short title March 30, 2017 1 / 24 Overview 1 Background A general introduction Example 2 Gradient based learning Cost functions Output Units 3

More information