AN ACCELERATION STRATEGY FOR RANDOMIZE-THEN-OPTIMIZE SAMPLING VIA DEEP NEURAL NETWORKS*
Yan, Liang (1,2); Zhou, Tao (3)
2021
Source Publication: JOURNAL OF COMPUTATIONAL MATHEMATICS
ISSN: 0254-9409
Volume: 39; Issue: 6; Pages: 848-864
Abstract: Randomize-then-optimize (RTO) is widely used for sampling from posterior distributions in Bayesian inverse problems. However, RTO can be computationally intensive for complex problems due to repeated evaluations of the expensive forward model and its gradient. In this work, we present a novel goal-oriented deep neural network (DNN) surrogate approach to substantially reduce the computational burden of RTO. In particular, we propose to draw the training points for the DNN surrogate from a local approximate posterior distribution, yielding a flexible and efficient sampling algorithm that converges to the direct RTO approach. We present a Bayesian inverse problem governed by elliptic PDEs to demonstrate the computational accuracy and efficiency of our DNN-RTO approach, which shows that DNN-RTO can significantly outperform the traditional RTO.
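Illustrative note: the abstract describes the RTO mechanism only at a high level. Below is a minimal Python sketch of an RTO-style proposal step in whitened coordinates, given solely to make the idea concrete. The toy forward map, the problem sizes, and the omission of the Metropolis-Hastings (or importance-weight) correction are simplifications and assumptions, not the authors' implementation; the acceleration discussed in the paper corresponds to replacing the expensive forward map below with a trained DNN surrogate whose training points are drawn from a local approximation of the posterior.

import numpy as np
from scipy.optimize import least_squares

def make_residual(forward, y, mu_pr, sig_pr, sig_obs):
    # Whitened misfit H(v) = [(v - mu_pr)/sig_pr, (forward(v) - y)/sig_obs].
    def H(v):
        return np.concatenate([(v - mu_pr) / sig_pr,
                               (forward(v) - y) / sig_obs])
    return H

def rto_proposals(H, dim, n_samples, rng=None):
    # Generate RTO proposal points by repeated randomized optimization.
    rng = np.random.default_rng(rng)
    # Locate the MAP point and take the thin Q factor of the Jacobian there.
    map_fit = least_squares(H, np.zeros(dim))
    Q, _ = np.linalg.qr(map_fit.jac)            # shape: (dim + n_obs, dim)
    proposals = []
    for _ in range(n_samples):
        # Perturb the whitened residual with a standard normal draw, then re-optimize.
        eps = rng.standard_normal(Q.shape[0])
        fit = least_squares(lambda v: Q.T @ (H(v) - eps), map_fit.x)
        proposals.append(fit.x)
    # Full RTO corrects the proposal density (e.g. via a Metropolis step using
    # |det(Q^T dH)|); that correction is omitted in this sketch.
    return np.array(proposals)

# Toy 1D nonlinear inverse problem with synthetic data (illustrative only).
rng = np.random.default_rng(0)
truth = np.array([0.8])
forward = lambda v: np.array([np.sin(3.0 * v[0]), v[0] ** 2])
y = forward(truth) + 0.05 * rng.standard_normal(2)
H = make_residual(forward, y, mu_pr=0.0, sig_pr=1.0, sig_obs=0.05)
print(rto_proposals(H, dim=1, n_samples=5, rng=1).ravel())

In this sketch each proposal costs one nonlinear least-squares solve, i.e. many forward-model and Jacobian evaluations, which is exactly the expense the DNN surrogate is meant to remove.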
Keywords: Bayesian inverse problems; Deep neural network; Markov chain Monte Carlo
DOI: 10.4208/jcm.2102-m2020-0339
Indexed By: SCI
Language: English
Funding Project: NSF of China [11771081]; NSF of China [11822111]; NSF of China [11688101]; NSF of China [11731006]; Science Challenge Project, China [TZ2018001]; Zhishan Young Scholar Program of SEU, China; National Key R&D Program of China [2020YFA0712000]; Strategic Priority Research Program of Chinese Academy of Sciences [XDA25000404]; Youth Innovation Promotion Association (CAS), China
WOS Research Area: Mathematics
WOS Subject: Mathematics, Applied; Mathematics
WOS ID: WOS:000711024000003
Publisher: GLOBAL SCIENCE PRESS
Document Type: Journal article
Identifier: http://ir.amss.ac.cn/handle/2S8OKBNM/59478
Collection: Academy of Mathematics and Systems Science, Chinese Academy of Sciences
Corresponding Author: Zhou, Tao
Affiliation:
1. Southeast Univ, Sch Math, Nanjing 210096, Peoples R China
2. Nanjing Ctr Appl Math, Nanjing 211135, Peoples R China
3. Chinese Acad Sci, Acad Math & Syst Sci, LSEC, Inst Computat Math & Sci Engn Comp, Beijing 100190, Peoples R China
Recommended Citation:
GB/T 7714: Yan, Liang, Zhou, Tao. AN ACCELERATION STRATEGY FOR RANDOMIZE-THEN-OPTIMIZE SAMPLING VIA DEEP NEURAL NETWORKS[J]. JOURNAL OF COMPUTATIONAL MATHEMATICS, 2021, 39(6): 848-864.
APA: Yan, Liang, & Zhou, Tao. (2021). AN ACCELERATION STRATEGY FOR RANDOMIZE-THEN-OPTIMIZE SAMPLING VIA DEEP NEURAL NETWORKS. JOURNAL OF COMPUTATIONAL MATHEMATICS, 39(6), 848-864.
MLA: Yan, Liang, et al. "AN ACCELERATION STRATEGY FOR RANDOMIZE-THEN-OPTIMIZE SAMPLING VIA DEEP NEURAL NETWORKS". JOURNAL OF COMPUTATIONAL MATHEMATICS 39.6 (2021): 848-864.