1. INTRODUCTION

With the rapid development of wireless communication technology, more and more mobile devices require wireless network access. Although offloading computation-intensive applications to the cloud can overcome the resource limitations of mobile devices, for delay-sensitive applications the resulting long latency is unacceptable to users [1]. Mobile Edge Computing (MEC) deploys computing nodes at the network edge to meet users' low-latency requirements. Research in this area mainly focuses on offloading decisions in multi-user and multi-server environments [2, 3]. Some researchers have applied Particle Swarm Optimization (PSO) to the computation offloading problem. Wei et al. [4] combined a greedy-strategy PSO with a dynamic PSO to allocate the tasks of a single user across multiple edge clouds, taking the interdependence of tasks into account. Bi et al. [5] used a heuristic algorithm based on genetic simulated annealing and PSO to solve a nonlinear constrained optimization problem, jointly optimizing each mobile device's task offload ratio, CPU speed, transmission power, and available channel bandwidth to reduce energy consumption. Luo et al. [6] encoded the MEC servers of the edge cloud and used PSO to determine the server assigned to each task. Miao et al. [7] introduced a compression factor and improved PSO with simulated annealing to offload tasks, with each particle encoding a vehicle task-allocation scheme. Other researchers use caching to improve the optimization results of computation offloading. Nath et al. [8] developed a dynamic scheduling strategy based on deep reinforcement learning that caches popular tasks on the MEC server to avoid repeated offloading.
Lan et al. [9] formulated a task-cache optimization problem using stochastic theory and solved it with a task caching algorithm based on a genetic algorithm. Guo [1] split the joint optimization of task caching and offloading decisions into two sub-problems, where task caching can be transformed into a 0-1 integer programming problem. From the above work, it can be seen that most existing research on PSO-based offloading in MEC scenarios focuses on particle encoding or on fusing PSO with other algorithms. Moreover, PSO combined with other algorithms outperforms the traditional algorithms, and delay optimization with a task caching strategy outperforms that without caching. Based on these observations, this paper proposes an offloading strategy based on a Genetic Particle Swarm Optimization algorithm (GA-PSO) and a caching mechanism on the edge cloud, so as to effectively reduce the delay of mobile edge computing. Finally, the convergence and effectiveness of GA-PSO are demonstrated by simulation.

2. SYSTEM MODEL

The goal of the system is to achieve optimal task caching and offloading and to minimize the system delay when the edge cloud caching capacity is limited and mobile-device energy consumption is constrained. This goal is achieved by caching and offloading tasks based on GA-PSO; the system model is shown in Figure 1. This paper addresses two questions: which tasks should be cached on the edge cloud, and what fraction of each task should be offloaded to the edge for execution.

2.1 Scene description

The mobile edge computing scenario in this paper consists of multiple mobile devices and one MEC edge cloud. Assume there are N mobile devices and I computing tasks, with user set N = {1, 2, …, N} and task set I = {1, 2, …, I}.
After a user sends a task request, the mobile device connects to the edge cloud through a wireless channel. A task U_{n,i} of mobile device n is defined by the tuple U_{n,i} = (C_i, d_i), where C_i is the amount of computing resources required to complete task i, that is, the total number of CPU cycles, and d_i is the data volume of task i in bits.

2.2 Time delay model

Different devices incur different delays when running tasks. This section considers two cases: the task is executed locally, or the task is offloaded. The delays involved include the local computing delay, the data uploading delay of the mobile device, the computing delay of the MEC server, and the feedback delay; the feedback delay can be ignored because it is far smaller than the uploading delay.
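The delays just listed can be written compactly. The following is only a sketch under common MEC assumptions; the symbols f_l (local CPU frequency), f_e (MEC server CPU frequency), and r_n (uplink rate of device n) are introduced here for illustration and are not defined in the paper:

```latex
% Sketch of the per-task delays in Section 2.2 (f_l, f_e, r_n are assumed symbols)
% Local computing delay of task i on mobile device n:
T^{loc}_{n,i} = \frac{C_i}{f_l}
% Data uploading delay over the wireless channel with uplink rate r_n:
T^{up}_{n,i} = \frac{d_i}{r_n}
% Computing delay on the MEC server with CPU frequency f_e:
T^{edge}_{n,i} = \frac{C_i}{f_e}
% The feedback delay is omitted, as it is far smaller than the uploading delay.
```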
2.3 Energy consumption model
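A common formulation of mobile-device energy consumption, offered here only as an assumed sketch consistent with the energy constraint of Section 2.4 (κ, f_l, p_n, and r_n are assumed symbols, not taken from the paper):

```latex
% Sketch: energy consumed by mobile device n for task i (assumed standard model)
% Local computation energy, with effective switched capacitance \kappa and CPU frequency f_l:
E^{loc}_{n,i} = \kappa f_l^{2} C_i
% Transmission energy, with transmit power p_n and upload time d_i / r_n:
E^{up}_{n,i} = p_n \frac{d_i}{r_n}
% A task cached on the edge cloud consumes no mobile-device energy.
```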
2.4 Problem formalization

For the task caching problem, a decision variable X1_i ∈ {0, 1} can be defined. When X1_i = 1, task i is cached on the edge cloud for execution, and only the delay of execution on the edge cloud needs to be considered; when X1_i = 0, the task is not cached. The edge cloud cache resource constraint requires that the total amount of cached task data be smaller than the edge cloud cache size. For the task offloading problem, an offload ratio X2_i ∈ [0, 1] can be defined. When X2_i = 0, the task is executed locally; when X2_i = 1, the whole task is offloaded to the edge cloud; when X2_i ∈ (0, 1), part of the task is offloaded to the edge cloud and the rest is executed locally. The total delay and the total energy consumption of the mobile devices then follow from these definitions. The objective of the algorithm is to find the offload ratios and cache decisions that minimize the system delay under the constraints of edge cloud cache resources and the maximum energy consumption of the mobile devices. The objective function minimizes the delay cost of the system through the task caching and offload-ratio decisions. Constraint C1 states that the total amount of cached task data cannot exceed the caching capacity of the edge cloud; constraint C2 states that the task caching decision variables are binary, with 1 and 0 representing caching or not, respectively; constraint C3 states that part of a divisible task is executed locally while the rest is executed on the edge cloud; and constraint C4 states that the power consumption of a mobile device must stay within its maximum power consumption constraint.

3. OFFLOADING SCHEME

3.1 Particle coding

Individual particles are encoded as floating-point numbers; each particle element can take any floating-point value between 0 and 1.0001.
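The element encoding just described (an offload ratio in [0, 1], with the value 1.0001 as a cache marker) can be sketched in code as follows; the function names and the cache_prob parameter are illustrative assumptions, not part of the paper:

```python
import random

CACHE_SENTINEL = 1.0001  # element value meaning "task is cached on the edge cloud"

def init_particle(num_tasks, cache_prob=0.2):
    """Initialize one particle: each element is either the cache sentinel
    or an offload ratio in [0, 1]. cache_prob is an assumed parameter."""
    return [CACHE_SENTINEL if random.random() < cache_prob else random.random()
            for _ in range(num_tasks)]

def decode(particle):
    """Map a particle to the per-task cache decisions X1_i and offload
    ratios X2_i of Section 2.4."""
    x1, x2 = [], []
    for p in particle:
        if p == CACHE_SENTINEL:
            x1.append(1)     # cached: executed entirely on the edge cloud
            x2.append(0.0)
        else:
            x1.append(0)
            x2.append(min(max(p, 0.0), 1.0))  # clamp to a valid ratio
    return x1, x2
```

decode() turns one candidate particle into the decision variables that the delay and energy models of Section 2 evaluate.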
The dimension of the particle encoding equals the total number of tasks. If a particle element has the value pop = 1.0001, the corresponding task is cached on the edge cloud; if pop ∈ [0, 1], the value is the offload ratio. The particle velocity represents how fast tasks are offloaded to the edge cloud, denoted V = {v_1, v_2, …, v_i}. Velocities are initialized as random floating-point numbers in [-0.1, 0.1], and the velocity vector has the same dimension as the particle encoding. The best position found by each particle during the evolutionary iterations is Gbest, and the best position among all particles is Zbest, which is the allocation that minimizes the system cost.

4. SIMULATION AND RESULT ANALYSIS

4.1 Simulation environment

In this section, an edge computing system composed of an edge cloud and mobile devices is constructed in Matlab, the offloading strategy based on GA-PSO and the caching mechanism is implemented, and the performance of the strategy is evaluated in detail. The experimental parameters are shown in Table 1.

Table 1. Simulation parameters.
4.2 Analysis of results

Figure 2 compares the system delay cost of four strategies under different numbers of devices: local computing, offloading all tasks to the MEC server, the GA-PSO-based strategy without a caching mechanism, and the proposed offloading strategy with a caching mechanism. The total system delay of each strategy is measured when the number of devices is 25, 50, 75, 100, and 125. As the number of devices increases, the delays of all four strategies grow, but the system delay of the proposed strategy remains below that of the other three schemes. Figure 3 shows the relationship among edge cloud cache capacity, mobile-device energy consumption, and total system delay for the proposed GA-PSO offloading strategy, local computing, and offloading all tasks to the MEC server. The left ordinate is device energy consumption and the right ordinate is system delay. As Figure 3 shows, energy consumption decreases as cache resources increase: the more task data the edge cloud can cache, the lower the energy consumption of the mobile devices. In addition, the energy consumption of local computing is significantly higher than that of the proposed strategy, indicating that the GA-PSO offloading strategy reduces mobile-device energy consumption to a certain extent, because the data of some tasks is already cached on the edge cloud and executing such tasks costs the mobile device no energy. Figure 3 also shows that the larger the edge cloud cache capacity, the smaller the system delay, indicating that caching strongly affects delay: a larger cache holds more tasks, and a cached task executes faster than a non-cached task that must first be offloaded.
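The qualitative ordering discussed above (cached fastest, then partial offloading, then full offloading, then local-only) can be reproduced on a toy task with a sketch of the delay model. All parameter values here (f_l, f_e, r, and the task size) are illustrative assumptions, not the values of Table 1:

```python
def task_delay(C, d, x1, x2, f_l=1e9, f_e=10e9, r=20e6):
    """Delay of one task under cache decision x1 and offload ratio x2.
    f_l/f_e are local/edge CPU speeds (cycles/s), r the uplink rate (bit/s);
    all three are assumed parameters, not taken from the paper."""
    if x1 == 1:                        # cached: runs on the edge, no upload needed
        return C / f_e
    local = (1 - x2) * C / f_l         # locally executed share
    edge = x2 * d / r + x2 * C / f_e   # offloaded share: upload, then edge compute
    return max(local, edge)            # local and edge parts proceed in parallel

# Toy task: 1 Gcycles of work, 4 Mbit of data (assumed values).
C, d = 1e9, 4e6
delays = {
    "local only":   task_delay(C, d, x1=0, x2=0.0),
    "full offload": task_delay(C, d, x1=0, x2=1.0),
    "partial 0.8":  task_delay(C, d, x1=0, x2=0.8),
    "cached":       task_delay(C, d, x1=1, x2=0.0),
}
```

With these assumed parameters the caching strategy gives the smallest delay and a well-chosen partial offload beats both local-only and full offloading, matching the trend reported for Figures 2 and 3.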
5. CONCLUSION

In this paper, an offloading strategy based on GA-PSO and a caching mechanism is proposed. The strategy combines the global search capability of the genetic algorithm with the local search capability of PSO; it converges quickly and obtains the optimal offload ratios and cache decisions. Simulation results indicate that, compared with the local computing, full offloading, and cache-free offloading strategies, the proposed GA-PSO offloading strategy makes local devices and the edge cloud cooperate in computing, which reduces the delay cost of mobile edge computing. In the future, PSO can be combined with other optimization algorithms to find an even more reliable offloading strategy.

ACKNOWLEDGMENT

This work is supported by the Guangxi science and technology plan project of China (No. AD20297125).

REFERENCES

[1] Guo, Y., "Task unloading strategy with caching mechanism in mobile edge computing," Computer Applications and Software, 36(6), 114-119 (2019).
[2] Chen, X., Jiao, L., Li, W. and Fu, X., "Efficient multi-user computation offloading for mobile-edge cloud computing," IEEE/ACM Transactions on Networking, 24(5), 2795-2808 (2015).
[3] Chen, M., Hao, Y., Qiu, M., Song, J., Wu, D. and Humar, I., "Mobility-aware caching and computation offloading in 5G ultra-dense cellular networks," Sensors, 16(7), (2016). https://doi.org/10.3390/s16070974
[4] Wei, Q., Wei, F., Ge, H., Feng, A., Wang, Y. and Li, W., "Computational offloading strategy based on dynamic particle swarm for multi-user mobile edge computing," in 2019 IEEE Symposium Series on Computational Intelligence (SSCI), 2890-2896 (2019).
[5] Bi, J., Yuan, H., Duanmu, S., Zhou, M. C. and Abusorrah, A., "Energy-optimized partial computation offloading in mobile-edge computing with genetic simulated-annealing-based particle swarm optimization," IEEE Internet of Things Journal, 8(5), 3774-3785 (2021).
[6] Luo, B. and Yu, B., "Computing unloading strategy based on particle swarm optimization in moving edge computing," Journal of Computer Applications, 40(8), 2293-2298 (2020).
[7] Miao, Y., Xu, Y., Zhang, W., Liu, T. and Han, Z., "Task unloading strategy of improved particle swarm optimization algorithm in vehicle networking," Application Research of Computers, 38(7), 2050-2055 (2021).
[8] Nath, S. and Wu, J., "Deep reinforcement learning for dynamic computation offloading and resource allocation in cache-assisted mobile edge computing systems," Intelligent and Converged Networks, 1(2), 181-198 (2020).
[9] Lan, Y., Wang, X., Wang, D., Liu, Z. and Zhang, Y., "Task caching, offloading and resource allocation in D2D-aided fog computing networks," IEEE Access, 7, 104876-104891 (2019).
[10] Li, S., Ge, H. B., Chen, X. T., Liu, L., Gong, H. W. and Tang, R., "Computation offloading strategy for improved particle swarm optimization in mobile edge computing," in 2021 IEEE 6th International Conference on Cloud Computing and Big Data Analytics (ICCCBDA), 375-381 (2021).
[11] Li, M., Zhou, X., Qiu, T., Zhao, Q. and Li, K., "Multi-relay assisted computation offloading for multi-access edge computing systems with energy harvesting," IEEE Transactions on Vehicular Technology, 70(10), 10941-10956 (2021). https://doi.org/10.1109/TVT.2021.3108619