KMS of Academy of Mathematics and Systems Science, CAS
Stochastic sub-gradient algorithm for distributed optimization with random sleep scheme
Authors | Yi Peng; Hong Yiguang
Publication Year | 2015
Journal | Control Theory and Technology
ISSN | 2095-6983 |
Volume | 13
Issue | 4
Pages | 333-347
Abstract | In this paper, we consider a distributed convex optimization problem for a multi-agent system in which the global objective function is the sum of the agents' individual objective functions. To solve this optimization problem, we propose a distributed stochastic sub-gradient algorithm with a random sleep scheme. In the random sleep scheme, each agent independently and randomly decides whether to query the sub-gradient of its local objective function at each iteration. The algorithm not only generalizes distributed algorithms with variable working nodes and multi-step consensus-based algorithms, but also extends some existing randomized convex set intersection results. We investigate the convergence properties of the algorithm under two types of stepsizes: a randomized diminishing stepsize that is heterogeneous and computed by each agent individually, and a fixed stepsize that is homogeneous. We prove that, under the randomized stepsize, the agents' estimates reach consensus almost surely and in mean, and the consensus point is the optimal solution with probability 1. Moreover, we analyze the error bound of the algorithm under the fixed homogeneous stepsize and show how the error depends on the fixed stepsize and the update rates.
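The random sleep update described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: it assumes scalar decision variables, quadratic local objectives f_i(x) = (x - a_i)^2 (whose global minimizer is the mean of the a_i), a doubly stochastic mixing matrix supplied by the caller, and a per-agent diminishing stepsize 1/(local update count); the function name `random_sleep_subgradient` and all parameter names are hypothetical.

```python
import random

def random_sleep_subgradient(targets, W, rounds=5000, p=0.5, seed=0):
    """Sketch of a distributed subgradient method with a random sleep scheme.

    Agent i minimizes f_i(x) = (x - targets[i])^2, so the minimizer of
    sum_i f_i is the mean of the targets.  Each round, every agent first
    averages with its neighbors (consensus step via the doubly stochastic
    mixing matrix W); then, independently with probability p, it wakes up
    and takes a subgradient step with its own diminishing stepsize
    1 / (number of its own updates so far).
    """
    rng = random.Random(seed)
    n = len(targets)
    x = [0.0] * n        # local estimates
    counts = [0] * n     # per-agent update counters (heterogeneous stepsizes)
    for _ in range(rounds):
        # consensus step: mix the current estimates through W
        mixed = [sum(W[i][j] * x[j] for j in range(n)) for i in range(n)]
        for i in range(n):
            x[i] = mixed[i]
            if rng.random() < p:              # random sleep: wake with prob. p
                counts[i] += 1
                step = 1.0 / counts[i]        # randomized diminishing stepsize
                grad = 2.0 * (x[i] - targets[i])  # (sub)gradient of f_i
                x[i] -= step * grad
    return x
```

With a connected mixing matrix and these diminishing stepsizes, the local estimates cluster around the common minimizer, mirroring the almost-sure consensus result stated in the abstract for the randomized stepsize case.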
Keywords | distributed optimization; sub-gradient algorithm; random sleep; multi-agent systems; randomized algorithm
Indexed By | CSCD
Language | English
CSCD Record ID | CSCD:5622332
Document Type | Journal article
Identifier | http://ir.amss.ac.cn/handle/2S8OKBNM/52620
Collection | Academy of Mathematics and Systems Science, Chinese Academy of Sciences
Affiliation | Academy of Mathematics and Systems Science, Chinese Academy of Sciences
Recommended Citation (GB/T 7714) | Yi Peng, Hong Yiguang. Stochastic sub-gradient algorithm for distributed optimization with random sleep scheme[J]. Control Theory and Technology, 2015, 13(4): 333-347.
APA | Yi Peng, & Hong Yiguang. (2015). Stochastic sub-gradient algorithm for distributed optimization with random sleep scheme. Control Theory and Technology, 13(4), 333-347.
MLA | Yi Peng, et al. "Stochastic sub-gradient algorithm for distributed optimization with random sleep scheme." Control Theory and Technology 13.4 (2015): 333-347.
Files in This Item | No files are associated with this item.
Unless otherwise indicated, all content in this system is protected by copyright, with all rights reserved.