Quantifying the generalization error in deep learning in terms of data distribution and neural network smoothness
Jin, Pengzhan (1,2); Lu, Lu (3); Tang, Yifa (1,2); Karniadakis, George Em (3)
2020-10-01
Journal: NEURAL NETWORKS
ISSN: 0893-6080
Volume: 130, Pages: 85-99
Abstract: The accuracy of deep learning, i.e., deep neural networks, can be characterized by dividing the total error into three main types: approximation error, optimization error, and generalization error. Whereas there are some satisfactory answers to the problems of approximation and optimization, much less is known about the theory of generalization. Most existing theoretical works on generalization fail to explain the performance of neural networks in practice. To derive a meaningful bound, we study the generalization error of neural networks for classification problems in terms of data distribution and neural network smoothness. We introduce the cover complexity (CC) to measure the difficulty of learning a data set and the inverse of the modulus of continuity to quantify neural network smoothness. A quantitative bound for expected accuracy/error is derived by considering both the CC and neural network smoothness. Although most of the analysis is general and not specific to neural networks, we validate our theoretical assumptions and results numerically for neural networks on several image data sets. The numerical results confirm that the expected error of trained networks scaled with the square root of the number of classes has a linear relationship with respect to the CC. We also observe a clear consistency between test loss and neural network smoothness during the training process. In addition, we demonstrate empirically that neural network smoothness decreases as the network size increases, whereas the smoothness is insensitive to training dataset size.
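The smoothness measure named in the abstract is the inverse of the modulus of continuity of the trained network. As a minimal illustrative sketch only (the sampling scheme, the function names, and the toy network below are assumptions for illustration, not the paper's estimator), the modulus of continuity omega_f(delta) = sup over ||x - x'|| <= delta of |f(x) - f(x')| can be lower-bounded by Monte Carlo sampling in Python:

    import numpy as np

    def modulus_of_continuity(f, X, delta, n_pairs=10000, seed=0):
        """Monte Carlo lower bound on omega_f(delta): the largest output change of f
        over random perturbations of norm at most delta around data points in X."""
        rng = np.random.default_rng(seed)
        x = X[rng.integers(0, len(X), size=n_pairs)]   # random base points from the data
        d = rng.normal(size=x.shape)                   # random directions
        d /= np.linalg.norm(d, axis=1, keepdims=True)  # normalize to unit length
        d *= delta * rng.uniform(size=(n_pairs, 1))    # scale to lengths in (0, delta]
        return np.abs(f(x + d) - f(x)).max()           # worst observed output change

    # Toy stand-in for a trained network (an assumption, for illustration only):
    rng = np.random.default_rng(1)
    W = rng.normal(size=(784, 10)) / 28.0
    f = lambda x: np.tanh(x @ W)
    X = rng.uniform(size=(1000, 784))

    for delta in (0.01, 0.1, 1.0):
        print(delta, modulus_of_continuity(f, X, delta))

Evaluating this estimate on a grid of delta values and reading off the largest delta for which omega_f(delta) stays below a tolerance epsilon then yields an inverse-modulus smoothness proxy in the abstract's sense: smoother networks tolerate larger input perturbations before their outputs change appreciably.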
Keywords: Neural networks; Generalization error; Learnability; Data distribution; Cover complexity; Neural network smoothness
DOI: 10.1016/j.neunet.2020.06.024
Indexed By: SCI
Language: English
Funding: DOE PhILMs project [de-sc0019453]; AFOSR [FA9550-17-1-0013]; Major Project on New Generation of Artificial Intelligence from MOST of China [2018AAA0101002]; National Natural Science Foundation of China [11771438]; DARPA AIRA grant [HR00111990025]
WOS Research Area: Computer Science; Neurosciences & Neurology
WOS Subject: Computer Science, Artificial Intelligence; Neurosciences
WOS Accession Number: WOS:000567813200009
Publisher: PERGAMON-ELSEVIER SCIENCE LTD
Document Type: Journal article
Identifier: http://ir.amss.ac.cn/handle/2S8OKBNM/52161
Collection: Academy of Mathematics and Systems Science, Chinese Academy of Sciences
Corresponding Author: Karniadakis, George Em
Affiliations: 1. Chinese Acad Sci, Acad Math & Syst Sci, ICMSEC, LSEC, Beijing 100190, Peoples R China
2. Univ Chinese Acad Sci, Sch Math Sci, Beijing 100049, Peoples R China
3. Brown Univ, Div Appl Math, Providence, RI 02912 USA
Recommended Citation:
GB/T 7714: Jin, Pengzhan, Lu, Lu, Tang, Yifa, et al. Quantifying the generalization error in deep learning in terms of data distribution and neural network smoothness[J]. NEURAL NETWORKS, 2020, 130: 85-99.
APA: Jin, Pengzhan, Lu, Lu, Tang, Yifa, & Karniadakis, George Em. (2020). Quantifying the generalization error in deep learning in terms of data distribution and neural network smoothness. NEURAL NETWORKS, 130, 85-99.
MLA: Jin, Pengzhan, et al. "Quantifying the generalization error in deep learning in terms of data distribution and neural network smoothness". NEURAL NETWORKS 130 (2020): 85-99.
Files in This Item:
There are no files associated with this item.