Status | Published |
Title | A game-based deep reinforcement learning approach for energy-efficient computation in MEC systems |
Creator | Chen, Miaojiang; Liu, Wei; Wang, Tian; Zhang, Shaobo; Liu, Anfeng |
Date Issued | 2022-01-10 |
Source Publication | Knowledge-Based Systems |
ISSN | 0950-7051 |
Volume | 235 |
Abstract | Many previous energy-efficient computation optimization works for mobile edge computing (MEC) systems have been based on the assumption of synchronous offloading, where all mobile devices have the same data arrival time or calculation deadline in orthogonal frequency division multiple access (OFDMA) or time division multiple access (TDMA) systems. However, actual offloading situations are more complex than synchronous offloading following the first-come, first-served rule. In this paper, we study a polling-callback energy-saving offloading strategy in which the data arrival times and task processing times are asynchronous. Under task processing time constraints, the time-sharing MEC data transmission problem is modeled as a total energy consumption minimization problem. Using a semi-closed-form optimization technique, the energy consumption optimization is decomposed into two subproblems: computation (data partition) and transmission (time division). To reduce the computational complexity of offloading computation under time-varying channel conditions, we propose a game-learning algorithm that combines DDQN and distributed LSTM with intermediate state transition (named DDQNL-IST). DDQNL-IST combines distributed LSTM and double-Q learning as part of the approximator to improve the ability to process and predict time intervals and delays in time series. The proposed DDQNL-IST algorithm is shown to ensure rationality and convergence. Finally, simulation results show that our proposed algorithm outperforms the DDQN, DQN and BCD-based optimal methods. |
Keyword | Computation offloading; Deep reinforcement learning; Edge computing; Energy-efficient; Game-learning |
DOI | 10.1016/j.knosys.2021.107660 |
Indexed By | SCIE |
Language | English |
WOS Research Area | Computer Science |
WOS Subject | Computer Science, Artificial Intelligence |
WOS ID | WOS:000721035100005 |
Scopus ID | 2-s2.0-85118901478 |
Document Type | Journal article |
Identifier | http://repository.uic.edu.cn/handle/39GCC9TT/7028 |
Collection | Research outside affiliated institution |
Corresponding Author | Liu, Anfeng |
Affiliation | 1. School of Computer Science and Engineering, Central South University, Changsha, 410083, China; 2. School of Informatics, Hunan University of Chinese Medicine, Changsha, 410208, China; 3. College of Computer Science and Technology, Huaqiao University, Xiamen, 361021, China; 4. School of Computer Science and Engineering, Hunan University of Science and Technology, Xiangtan, 411201, China |
Recommended Citation GB/T 7714 | Chen, Miaojiang, Liu, Wei, Wang, Tian, et al. A game-based deep reinforcement learning approach for energy-efficient computation in MEC systems[J]. Knowledge-Based Systems, 2022, 235. |
APA | Chen, Miaojiang, Liu, Wei, Wang, Tian, Zhang, Shaobo, & Liu, Anfeng. (2022). A game-based deep reinforcement learning approach for energy-efficient computation in MEC systems. Knowledge-Based Systems, 235. |
MLA | Chen, Miaojiang, et al. "A game-based deep reinforcement learning approach for energy-efficient computation in MEC systems." Knowledge-Based Systems 235 (2022). |
Files in This Item: | There are no files associated with this item. |
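
Note: the abstract above describes DDQNL-IST only at a high level, and the paper's algorithm is not reproduced in this record. As a rough illustration of the two ingredients it names, double-Q learning plus an LSTM-based approximator over time series, the PyTorch sketch below shows how a recurrent Q-network and a double-DQN target could fit together. All class names, shapes, and hyperparameters here are hypothetical and are not taken from the paper.

```python
import torch
import torch.nn as nn

class RecurrentQNet(nn.Module):
    """Q-network with an LSTM front end: the recurrent layer ingests a
    short history of states (e.g., channel gains, queue lengths, task
    arrival gaps -- hypothetical features) so the approximator can track
    time intervals and delays, in the spirit the abstract describes."""
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(state_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, seq: torch.Tensor) -> torch.Tensor:
        # seq: (batch, time, state_dim) -> Q-values: (batch, n_actions)
        out, _ = self.lstm(seq)
        return self.head(out[:, -1])  # Q-values from the last time step

def double_q_target(online: RecurrentQNet, target: RecurrentQNet,
                    next_seq: torch.Tensor, reward: torch.Tensor,
                    done: torch.Tensor, gamma: float = 0.99) -> torch.Tensor:
    """Double-DQN target: the online net selects the next action, the
    (periodically synced) target net evaluates it, which curbs the
    Q-value overestimation of vanilla DQN."""
    with torch.no_grad():
        best = online(next_seq).argmax(dim=1, keepdim=True)   # action selection
        q_next = target(next_seq).gather(1, best).squeeze(1)  # action evaluation
        return reward + gamma * (1.0 - done) * q_next

# Toy usage with random data (shapes only; no claim about the paper's setup).
online, tgt = RecurrentQNet(8, 4), RecurrentQNet(8, 4)
tgt.load_state_dict(online.state_dict())
y = double_q_target(online, tgt,
                    next_seq=torch.randn(32, 10, 8),
                    reward=torch.zeros(32),
                    done=torch.zeros(32))
print(y.shape)  # torch.Size([32])
```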