AN ACCELERATION STRATEGY FOR RANDOMIZE-THEN-OPTIMIZE SAMPLING VIA DEEP NEURAL NETWORKS*
Yan, Liang (1,2); Zhou, Tao (3)
2021
Journal: JOURNAL OF COMPUTATIONAL MATHEMATICS
ISSN: 0254-9409
Volume: 39; Issue: 6; Pages: 848-864
Abstract: Randomize-then-optimize (RTO) is widely used for sampling from posterior distributions in Bayesian inverse problems. However, RTO can be computationally intensive for complex problems due to repeated evaluations of the expensive forward model and its gradient. In this work, we present a novel goal-oriented deep neural network (DNN) surrogate approach to substantially reduce the computational burden of RTO. In particular, we propose to draw the training points for the DNN surrogate from a locally approximated posterior distribution, yielding a flexible and efficient sampling algorithm that converges to the direct RTO approach. We present a Bayesian inverse problem governed by elliptic PDEs to demonstrate the computational accuracy and efficiency of our DNN-RTO approach, which shows that DNN-RTO can significantly outperform the traditional RTO.
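To illustrate the idea summarized in the abstract, the following is a minimal sketch of surrogate-accelerated RTO, not the authors' DNN-RTO algorithm: the toy exponential-decay forward model, the two-parameter setup, the PyTorch surrogate architecture, and the Gaussian cloud of training points around a rough estimate (standing in for the paper's locally approximated posterior) are all illustrative assumptions, and the Metropolis-Hastings correction step of full RTO is omitted.

```python
# Minimal sketch of surrogate-accelerated RTO (not the authors' exact DNN-RTO);
# the forward model, surrogate size, and training-point distribution are
# illustrative assumptions, and the Metropolis-Hastings correction of full RTO
# is omitted, so the output is only an approximate posterior sample.
import torch

torch.manual_seed(0)

# Toy "expensive" forward model: (amplitude, decay rate) -> 5 observations.
def forward_model(u):
    t = torch.linspace(0.0, 1.0, 5)
    return u[..., 0:1] * torch.exp(-u[..., 1:2] * t)

u_true = torch.tensor([1.0, 2.0])
sigma = 0.05
y_obs = forward_model(u_true) + sigma * torch.randn(5)   # synthetic data

# Train a small DNN surrogate on points concentrated near a rough parameter
# estimate (a stand-in for the paper's locally approximated posterior).
u_center = torch.tensor([0.9, 1.8])
train_u = u_center + 0.3 * torch.randn(500, 2)
train_y = forward_model(train_u)

surrogate = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 5),
)
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    torch.mean((surrogate(train_u) - train_y) ** 2).backward()
    opt.step()
for p in surrogate.parameters():          # freeze the surrogate for sampling
    p.requires_grad_(False)

# Whitened residual map H(u) = [u; (G(u) - y)/sigma], standard-normal prior.
def H(u):
    return torch.cat([u, (surrogate(u) - y_obs) / sigma])

# RTO proposal machinery: thin QR of the Jacobian of H at a linearization point.
J = torch.autograd.functional.jacobian(H, u_center)      # shape (7, 2)
Q, _ = torch.linalg.qr(J)                                 # orthonormal columns

def rto_sample(n_samples=200, n_opt_steps=200):
    samples = []
    for _ in range(n_samples):
        xi = torch.randn(7)                      # randomize
        u = u_center.clone().requires_grad_(True)
        opt_u = torch.optim.Adam([u], lr=5e-2)
        for _ in range(n_opt_steps):             # then optimize
            opt_u.zero_grad()
            r = Q.T @ (H(u) - xi)                # projected RTO residual
            torch.sum(r ** 2).backward()
            opt_u.step()
        samples.append(u.detach().clone())
    return torch.stack(samples)

print("approximate posterior mean:", rto_sample().mean(dim=0))
```

In this sketch, every per-sample optimization evaluates only the cheap surrogate and its gradient via automatic differentiation; the expensive forward model is queried solely to generate the training set, which is the source of the speedup the abstract describes.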
Keywords: Bayesian inverse problems; Deep neural network; Markov chain Monte Carlo
DOI: 10.4208/jcm.2102-m2020-0339
Indexed by: SCI
Language: English
Funding: NSF of China [11771081]; NSF of China [11822111]; NSF of China [11688101]; NSF of China [11731006]; Science Challenge Project, China [TZ2018001]; Zhishan Young Scholar Program of SEU, China; National Key R&D Program of China [2020YFA0712000]; Strategic Priority Research Program of Chinese Academy of Sciences [XDA25000404]; Youth Innovation Promotion Association (CAS), China
WOS Research Area: Mathematics
WOS Categories: Mathematics, Applied; Mathematics
WOS Accession Number: WOS:000711024000003
Publisher: GLOBAL SCIENCE PRESS
Document Type: Journal article
Identifier: http://ir.amss.ac.cn/handle/2S8OKBNM/59478
Collection: Academy of Mathematics and Systems Science, Chinese Academy of Sciences
Corresponding Author: Zhou, Tao
Affiliations:
1. Southeast Univ, Sch Math, Nanjing 210096, Peoples R China
2. Nanjing Ctr Appl Math, Nanjing 211135, Peoples R China
3. Chinese Acad Sci, Acad Math & Syst Sci, LSEC, Inst Computat Math & Sci Engn Comp, Beijing 100190, Peoples R China
Recommended Citation:
GB/T 7714: Yan, Liang, Zhou, Tao. AN ACCELERATION STRATEGY FOR RANDOMIZE-THEN-OPTIMIZE SAMPLING VIA DEEP NEURAL NETWORKS*[J]. JOURNAL OF COMPUTATIONAL MATHEMATICS, 2021, 39(6): 848-864.
APA: Yan, Liang, & Zhou, Tao. (2021). AN ACCELERATION STRATEGY FOR RANDOMIZE-THEN-OPTIMIZE SAMPLING VIA DEEP NEURAL NETWORKS*. JOURNAL OF COMPUTATIONAL MATHEMATICS, 39(6), 848-864.
MLA: Yan, Liang, et al. "AN ACCELERATION STRATEGY FOR RANDOMIZE-THEN-OPTIMIZE SAMPLING VIA DEEP NEURAL NETWORKS*". JOURNAL OF COMPUTATIONAL MATHEMATICS 39.6 (2021): 848-864.
Files in This Item: There are no files associated with this item.