Publication Status | Published
题名 | A game-based deep reinforcement learning approach for energy-efficient computation in MEC systems |
Authors | Chen, Miaojiang; Liu, Wei; Wang, Tian; Zhang, Shaobo; Liu, Anfeng
Publication Date | 2022-01-10
Journal | Knowledge-Based Systems
ISSN/eISSN | 0950-7051 |
Volume | 235
Abstract | Many previous energy-efficient computation optimization works for mobile edge computing (MEC) systems have been based on the assumption of synchronous offloading, where all mobile devices have the same data arrival time or calculation deadline in orthogonal frequency division multiple access (OFDMA) or time division multiple access (TDMA) systems. However, actual offloading situations are more complex than synchronous offloading following the first-come, first-served rule. In this paper, we study a polling-callback energy-saving offloading strategy, in which the data-transmission arrival times and task-processing times are asynchronous. Under the constraints of task processing time, the time-sharing MEC data transmission problem is modeled as a total-energy-consumption minimization problem. Using a semi-closed-form optimization technique, the energy consumption optimization is decomposed into two subproblems: computation (data partition) and transmission (time division). To reduce the computational complexity of offloading computation under time-varying channel conditions, we propose a game-learning algorithm that combines DDQN and distributed LSTM with intermediate state transition (named DDQNL-IST). DDQNL-IST combines distributed LSTM and double-Q learning as part of the approximator to improve the ability to process and predict time intervals and delays in time series. The proposed DDQNL-IST algorithm ensures rationality and convergence. Finally, the simulation results show that our proposed algorithm performs better than the DDQN-, DQN- and BCD-based optimal methods.
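The abstract states that DDQNL-IST uses double-Q learning as part of its approximator. As background only, the sketch below shows the core double-Q (DDQN-style) target computation that such methods build on: the online value estimate selects the action and a separate target estimate evaluates it, which reduces the overestimation bias of plain Q-learning. All names, array shapes, and values are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def double_q_target(q_online, q_target, next_state, reward, gamma):
    """Double-Q target: the online estimate selects the greedy action,
    while the target estimate evaluates it (decoupled selection/evaluation)."""
    best_action = int(np.argmax(q_online[next_state]))  # selection by online net
    return reward + gamma * q_target[next_state, best_action]  # evaluation by target net

# Illustrative tabular example with 2 states and 2 actions.
q_online = np.array([[1.0, 2.0],
                     [0.5, 3.0]])
q_target = np.array([[0.0, 1.0],
                     [2.0, 0.0]])

# Online net picks action 1 in state 1; target net values it at 0.0.
y = double_q_target(q_online, q_target, next_state=1, reward=1.0, gamma=0.9)
print(y)  # 1.0
```

In the paper's setting this scalar target would drive gradient updates of the online network, with the distributed-LSTM component supplying the time-series state features; that part is specific to DDQNL-IST and is not sketched here.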
Keywords | Computation offloading; Deep reinforcement learning; Edge computing; Energy-efficient; Game-learning
DOI | 10.1016/j.knosys.2021.107660 |
URL | View source
Indexed In | SCIE
Language | English
WOS Research Area | Computer Science
WOS Category | Computer Science, Artificial Intelligence
WOS Accession Number | WOS:000721035100005
Scopus Accession Number | 2-s2.0-85118901478
Document Type | Journal article
Item Identifier | https://repository.uic.edu.cn/handle/39GCC9TT/7028
Collection | Personal research output produced outside this institution
Corresponding Author | Liu, Anfeng
Affiliations | 1. School of Computer Science and Engineering, Central South University, Changsha, 410083, China 2. School of Informatics, Hunan University of Chinese Medicine, Changsha, 410208, China 3. College of Computer Science and Technology, Huaqiao University, Xiamen, 361021, China 4. School of Computer Science and Engineering, Hunan University of Science and Technology, Xiangtan, 411201, China
Recommended Citation (GB/T 7714) | Chen, Miaojiang, Liu, Wei, Wang, Tian, et al. A game-based deep reinforcement learning approach for energy-efficient computation in MEC systems[J]. Knowledge-Based Systems, 2022, 235.
APA | Chen, Miaojiang, Liu, Wei, Wang, Tian, Zhang, Shaobo, & Liu, Anfeng. (2022). A game-based deep reinforcement learning approach for energy-efficient computation in MEC systems. Knowledge-Based Systems, 235. |
MLA | Chen, Miaojiang, et al. "A game-based deep reinforcement learning approach for energy-efficient computation in MEC systems." Knowledge-Based Systems 235 (2022).
Files in This Item | No files associated with this item.
Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.