Are Saddles Good Enough for Deep Learning?

Adepu, R S and Balasubramanian, V N (2017) Are Saddles Good Enough for Deep Learning? arXiv. pp. 1-16. (Submitted)

Text (arXiv copy): 1706.02052.pdf - Accepted Version

Abstract

Recent years have seen growing interest in understanding deep neural networks from an optimization perspective. It is now understood that converging to low-cost local minima is sufficient for such models to become effective in practice. In this work, however, we propose a new hypothesis, based on recent theoretical findings and empirical studies, that deep neural network models actually converge to saddle points with high degeneracy. These findings can have a significant impact on the development of gradient-descent-based methods for training deep networks. We validated our hypothesis through extensive experimental evaluation on standard datasets such as MNIST and CIFAR-10, and also showed that recent methods designed to escape saddle points in fact converge to saddles of high degeneracy, which we define as 'good saddles'. We also verified the well-known Wigner semicircle law in our experimental results.
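The degeneracy the abstract refers to is measured on the Hessian of the loss at the point the optimizer converges to: a highly degenerate saddle has many (near-)zero Hessian eigenvalues alongside some negative ones. A minimal NumPy sketch of that measurement on a toy network is below; the tiny architecture, finite-difference Hessian, and the near-zero threshold are illustrative assumptions, not the paper's exact experimental protocol.

```python
import numpy as np

# Toy setup: a one-hidden-layer tanh network on random data.
# We estimate the Hessian of the mean-squared loss at a parameter
# point by central finite differences, then report "degeneracy" as
# the fraction of eigenvalues that are close to zero.
rng = np.random.default_rng(0)
X = rng.normal(size=(32, 4))
y = rng.normal(size=(32, 1))

def unpack(theta):
    # 4x3 first-layer weights, 3x1 second-layer weights (15 params total)
    return theta[:12].reshape(4, 3), theta[12:15].reshape(3, 1)

def loss(theta):
    W1, W2 = unpack(theta)
    h = np.tanh(X @ W1)
    return float(np.mean((h @ W2 - y) ** 2))

def hessian(theta, eps=1e-4):
    # Central-difference estimate of the full Hessian (fine at this size;
    # real experiments would use exact or Lanczos-based spectra).
    n = theta.size
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = eps
            ej = np.zeros(n); ej[j] = eps
            H[i, j] = (loss(theta + ei + ej) - loss(theta + ei - ej)
                       - loss(theta - ei + ej) + loss(theta - ei - ej)) / (4 * eps ** 2)
    return (H + H.T) / 2  # symmetrize against numerical noise

# In practice theta would be the parameters a trained network converged to;
# here we just evaluate at an arbitrary point to show the measurement.
theta = rng.normal(scale=0.1, size=15)
eigs = np.linalg.eigvalsh(hessian(theta))
tol = 1e-3 * np.max(np.abs(eigs))           # illustrative near-zero threshold
degeneracy = float(np.mean(np.abs(eigs) < tol))
print(f"degeneracy index: {degeneracy:.2f}, min eigenvalue: {eigs.min():.4f}")
```

A point qualifies as a degenerate saddle under this measure when the degeneracy index is high and the minimum eigenvalue is negative; at a true local minimum all eigenvalues would be non-negative.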

IITH Creators: Balasubramanian, V N (ORCiD: unspecified)
Item Type: Article
Subjects: Computer science > Computer programming, programs, data
Computer science > Special computer methods
Divisions: Department of Computer Science & Engineering
Depositing User: Team Library
Date Deposited: 20 Jun 2017 09:31
Last Modified: 20 Jun 2017 09:31
URI: http://raiith.iith.ac.in/id/eprint/3273
Publisher URL: https://arxiv.org/abs/1706.02052
Related URLs:
