Research Output Details

Publication Status: Published
Title: A game-based deep reinforcement learning approach for energy-efficient computation in MEC systems
Authors: Chen, Miaojiang; Liu, Wei; Wang, Tian; Zhang, Shaobo; Liu, Anfeng
Publication Date: 2022-01-10
Journal: Knowledge-Based Systems
ISSN/eISSN: 0950-7051
Volume: 235
Abstract

Many previous works on energy-efficient computation optimization for mobile edge computing (MEC) systems have been based on the assumption of synchronous offloading, where all mobile devices share the same data arrival time or computation deadline in orthogonal frequency division multiple access (OFDMA) or time division multiple access (TDMA) systems. However, actual offloading situations are more complex than synchronous offloading under the first-come, first-served rule. In this paper, we study a polling-callback energy-saving offloading strategy, in which the data transmission arrival times and task processing times are asynchronous. Under task processing time constraints, the time-sharing MEC data transmission problem is modeled as a total energy consumption minimization problem. Using a semi-closed-form optimization technique, the energy consumption optimization is decomposed into two subproblems: computation (data partition) and transmission (time division). To reduce the computational complexity of offloading computation under time-varying channel conditions, we propose a game-learning algorithm that combines DDQN and distributed LSTM with intermediate state transition (named DDQNL-IST). DDQNL-IST combines distributed LSTM and double-Q learning as part of the approximator to improve the ability to process and predict time intervals and delays in time series. The proposed DDQNL-IST algorithm guarantees rationality and convergence. Finally, simulation results show that our proposed algorithm outperforms the DDQN-, DQN- and BCD-based optimal methods.
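The core idea described in the abstract is a double-Q update whose function approximator includes an LSTM over the time-series state, so that time intervals and delays can be captured. Below is a minimal, hypothetical Python/PyTorch sketch of that combination only; the class and function names (LSTMQNet, double_q_target), dimensions, and reward handling are illustrative assumptions and not the authors' DDQNL-IST implementation or MEC setup.

import torch
import torch.nn as nn

class LSTMQNet(nn.Module):
    # Q-network whose approximator runs an LSTM over the state sequence,
    # mirroring the abstract's use of LSTM to handle time-series intervals/delays.
    def __init__(self, state_dim, hidden_dim, num_actions):
        super().__init__()
        self.lstm = nn.LSTM(state_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_actions)

    def forward(self, state_seq):
        # state_seq: (batch, seq_len, state_dim); use the last time step's output.
        out, _ = self.lstm(state_seq)
        return self.head(out[:, -1, :])

def double_q_target(online_net, target_net, reward, next_seq, done, gamma=0.99):
    # Double-Q rule: the online network selects the next action,
    # the target network evaluates it (reduces overestimation vs. plain DQN).
    with torch.no_grad():
        next_actions = online_net(next_seq).argmax(dim=1, keepdim=True)
        next_q = target_net(next_seq).gather(1, next_actions).squeeze(1)
        return reward + gamma * (1.0 - done) * next_q

# Example usage with random data (shapes only; hypothetical, not the paper's setup).
online, target = LSTMQNet(8, 32, 4), LSTMQNet(8, 32, 4)
target.load_state_dict(online.state_dict())
y = double_q_target(online, target,
                    reward=torch.zeros(16),
                    next_seq=torch.randn(16, 5, 8),
                    done=torch.zeros(16))
print(y.shape)  # torch.Size([16])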

Keywords: Computation offloading; Deep reinforcement learning; Edge computing; Energy-efficient; Game-learning
DOI: 10.1016/j.knosys.2021.107660
Indexed In: SCIE
Language: English
WOS Research Area: Computer Science
WOS Category: Computer Science, Artificial Intelligence
WOS Accession Number: WOS:000721035100005
Scopus EID: 2-s2.0-85118901478
Document Type: Journal article
Item Identifier: https://repository.uic.edu.cn/handle/39GCC9TT/7028
Collection: Personal research output outside this institution
Corresponding Author: Liu, Anfeng
Author Affiliations:
1. School of Computer Science and Engineering, Central South University, Changsha, 410083, China
2. School of Informatics, Hunan University of Chinese Medicine, Changsha, 410208, China
3. College of Computer Science and Technology, Huaqiao University, Xiamen, 361021, China
4. School of Computer Science and Engineering, Hunan University of Science and Technology, Xiangtan, 411201, China
Recommended Citation
GB/T 7714
Chen, Miaojiang, Liu, Wei, Wang, Tian, et al. A game-based deep reinforcement learning approach for energy-efficient computation in MEC systems[J]. Knowledge-Based Systems, 2022, 235.
APA: Chen, Miaojiang, Liu, Wei, Wang, Tian, Zhang, Shaobo, & Liu, Anfeng. (2022). A game-based deep reinforcement learning approach for energy-efficient computation in MEC systems. Knowledge-Based Systems, 235.
MLA: Chen, Miaojiang, et al. "A game-based deep reinforcement learning approach for energy-efficient computation in MEC systems." Knowledge-Based Systems 235 (2022).
Files in This Item
No files associated with this item.
Related Rights Policies
No data available.