The power of deep networks and learning

In this journal club session, Shabnam Kadir will present two papers: "The power of deeper networks for expressing natural functions" (David Rolnick and Max Tegmark, 2018) and "Why does deep and cheap learning work so well?" (Henry W. Lin, Max Tegmark and David Rolnick, 2017).


Abstracts:

"The power of deeper networks for expressing natural functions (David Rolnick and Max Tegmark, 2018)"

"It is well-known that neural networks are universal approximators, but that deeper networks tend in practice to be more powerful than shallower ones. We shed light on this by proving that the total number of neurons m required to approximate natural classes of multivariate polynomials of n variables grows only linearly with n for deep neural networks, but grows exponentially when merely a single hidden layer is allowed. We also provide evidence that when the number of hidden layers is increased from 1 to k, the neuron requirement grows exponentially not with n but with n^(1/k), suggesting that the minimum number of layers required for practical expressibility grows only logarithmically with n."
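In symbols (a loose paraphrase of the abstract's scaling claims; the notation m_k(n) for the neuron count needed by a network with k hidden layers is ours, not the paper's):

    m_1(n) \sim e^{\Theta(n)}, \qquad m_k(n) \sim e^{\Theta(n^{1/k})}, \qquad m_{\text{deep}}(n) = O(n)

Taking k of order \log n already reduces the exponent n^{1/k} to a constant, consistent with the claim that logarithmic depth suffices.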

"Why does deep and cheap learning work so well? (Henry W. Lin, Max Tegmark and David Rolnick, 2017)"

"We show how the success of deep learning could depend not only on mathematics but also on physics: although well-known mathematical theorems guarantee that neural networks can approximate arbitrary functions well, the class of functions of practical interest can frequently be approximated through "cheap learning" with exponentially fewer parameters than generic ones. We explore how properties frequently encountered in physics such as symmetry, locality, compositionality, and polynomial log-probability translate into exceptionally simple neural networks. We further argue that when the statistical process generating the data is of a certain hierarchical form prevalent in physics and machine learning, a deep neural network can be more efficient than a shallow one. We formalize these claims using information theory and discuss the relation to the renormalization group. We prove various "no-flattening theorems" showing when efficient linear deep networks cannot be accurately approximated by shallow ones without efficiency loss; for example, we show that n variables cannot be multiplied using fewer than 2^n neurons in a single hidden layer."
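As a rough numerical illustration of that last claim (a minimal sketch, not code from the paper; it assumes a smooth nonlinearity with nonzero second derivative at zero, here softplus, along the lines of the four-neuron multiplication gadget discussed by Lin, Tegmark and Rolnick): two inputs can be multiplied approximately with one hidden layer of four neurons, and composing such gadgets in a binary tree multiplies n inputs with a neuron count linear in n and depth of order log n, in contrast to the 2^n single-hidden-layer lower bound quoted above.

import numpy as np

def softplus(x):
    # Softplus has a nonzero second derivative at 0 (sigma''(0) = 1/4), which the
    # gadget below relies on; tanh or the logistic sigmoid would not work here,
    # because their second derivatives vanish at 0.
    return np.log1p(np.exp(x))

SIGMA_PP0 = 0.25  # sigma''(0) for softplus

def approx_mul(u, v, lam=1e-3):
    # Approximate u*v with a single hidden layer of 4 softplus neurons.
    # A Taylor expansion shows this equals u*v up to an O(lam^2) error,
    # so the approximation becomes exact as lam -> 0.
    s = lambda z: softplus(lam * z)
    return (s(u + v) + s(-u - v) - s(u - v) - s(v - u)) / (4 * lam**2 * SIGMA_PP0)

def tree_product(xs, lam=1e-3):
    # Multiply n inputs by composing pairwise gadgets in a binary tree:
    # roughly 4*(n-1) neurons spread over ~log2(n) levels, versus the
    # 2^n-neuron lower bound quoted above for a single hidden layer.
    xs = list(xs)
    while len(xs) > 1:
        xs = [approx_mul(xs[i], xs[i + 1], lam) if i + 1 < len(xs) else xs[i]
              for i in range(0, len(xs), 2)]
    return xs[0]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.uniform(-1.0, 1.0, size=8)
    print(tree_product(x), np.prod(x))  # the two values agree to several decimal places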

Date: 01/02/2019
Time: 16:00
Location: D118
