ID Source Text Translation
18465 为了满足无线数据流量大幅增长的需求,异构云无线接入网(H-CRAN)的资源优化仍然是亟待解决的重要问题。 To meet the demand brought by the substantial growth of wireless data traffic, resource optimization in the Heterogeneous Cloud Radio Access Network (H-CRAN) remains an important problem that needs to be solved urgently.
18466 该文在H-CRAN下行链路场景下,提出一种基于深度强化学习(DRL)的无线资源分配算法。 In this paper, a wireless resource allocation algorithm based on Deep Reinforcement Learning (DRL) is proposed for the H-CRAN downlink scenario.
18467 首先,该算法以队列稳定为约束,联合优化拥塞控制、用户关联、子载波分配和功率分配,并建立网络总吞吐量最大化的随机优化模型。 Firstly, with queue stability as a constraint, the algorithm jointly optimizes congestion control, user association, subcarrier allocation, and power allocation, and a stochastic optimization model for maximizing the total network throughput is established.
18468 其次,考虑到调度问题的复杂性,DRL算法利用神经网络作为非线性近似函数,高效地解决维度灾问题。 Secondly, considering the complexity of the scheduling problem, the DRL algorithm uses a neural network as a nonlinear approximation function to solve the curse-of-dimensionality problem efficiently.
18469 最后,针对无线网络环境的复杂性和动态多变性,引入迁移学习(TL)算法,利用TL的小样本学习特性,使得DRL算法在少量样本的情况下也能获得最优的资源分配策略。 Finally, in view of the complexity and dynamic variability of the wireless network environment, a Transfer Learning (TL) algorithm is introduced; by exploiting the few-shot learning characteristic of TL, the DRL algorithm can obtain the optimal resource allocation strategy even with only a small number of samples.
18470 此外,TL通过迁移DRL模型的权重参数,进一步地加快了DRL算法的收敛速度。 In addition, TL further accelerates the convergence of the DRL algorithm by transferring the weight parameters of the DRL model.
18471 仿真结果表明,该文所提算法可以有效地增加网络吞吐量,提高网络的稳定性。 Simulation results show that the proposed algorithm can effectively increase network throughput and improve network stability.
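The abstract above (rows 18469–18470) rests on a simple mechanism: weights trained in a data-rich source environment initialize the model for a data-poor target environment, so fine-tuning converges faster than training from scratch. The sketch below illustrates only this weight-transfer idea with a tiny linear approximator standing in for the paper's DRL network; the reward structure, sample counts, and all names are illustrative assumptions, not the paper's actual model.

```python
import random

class LinearModel:
    """Tiny linear approximator standing in for the paper's DRL network."""
    def __init__(self, n_features, weights=None):
        self.w = list(weights) if weights is not None else [0.0] * n_features

    def predict(self, x):
        return sum(wi * xi for wi, xi in zip(self.w, x))

    def sgd_step(self, x, target, lr=0.01):
        # One stochastic-gradient step on squared error.
        err = self.predict(x) - target
        for i, xi in enumerate(x):
            self.w[i] -= lr * err * xi

def train(model, samples, epochs):
    for _ in range(epochs):
        for x, y in samples:
            model.sgd_step(x, y)

def make_samples(true_w, n, rng):
    # Synthetic (state, value) pairs from a known linear environment.
    out = []
    for _ in range(n):
        x = [rng.random() for _ in range(len(true_w))]
        out.append((x, sum(wi * xi for wi, xi in zip(true_w, x))))
    return out

def mse(model, samples):
    return sum((model.predict(x) - y) ** 2 for x, y in samples) / len(samples)

rng = random.Random(0)
# Source environment: plentiful samples, long training.
source_data = make_samples([2.0, -1.0, 0.5], 200, rng)
source = LinearModel(3)
train(source, source_data, epochs=50)

# Target environment: slightly different dynamics, only a few samples
# (the "small number of samples" case the abstract describes).
target_data = make_samples([2.1, -0.9, 0.6], 10, rng)

# Transfer: initialize with the source model's weights, then fine-tune briefly.
transferred = LinearModel(3, weights=source.w)
from_scratch = LinearModel(3)
train(transferred, target_data, epochs=5)
train(from_scratch, target_data, epochs=5)

print(mse(transferred, target_data) < mse(from_scratch, target_data))  # → True
```

After the same short fine-tuning budget, the transferred model fits the target environment far better than the randomly initialized one, which is exactly the convergence speed-up the abstract attributes to TL.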
18472 物联网(IoT)的发展引起流数据在数据量和数据类型两方面不断增长。由于实时处理场景的不断增加和基于经验知识的配置策略存在缺陷,流处理检查点配置策略面临着巨大的挑战,如费事费力,易导致系统异常等。 The development of the Internet of Things (IoT) has caused streaming data to grow continuously in both volume and variety. Owing to the ever-increasing number of real-time processing scenarios and the deficiencies of checkpoint configuration strategies based on empirical knowledge, checkpoint configuration for stream processing faces huge challenges: it is time-consuming and labor-intensive, and it can easily cause system anomalies.
18473 为解决这些挑战,该文提出基于回归算法的检查点性能预测方法。 To address these challenges, a checkpoint performance prediction method based on a regression algorithm is proposed in this paper.
18474 该方法首先分析了影响检查点性能的6种特征,然后将训练集的特征向量输入到随机森林回归算法中进行训练,最后,使用训练好的算法对测试数据集进行预测。 Firstly, six kinds of features that have a great influence on checkpoint performance are analyzed; then the feature vectors of the training set are fed into the random forest regression algorithm for training; finally, the trained algorithm is used to make predictions on the test data set.
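Row 18474 describes a standard pipeline: six features per checkpoint, a random forest regressor trained on them, and predictions on a held-out test set. As a minimal sketch of that idea, the code below bags regression stumps (one-split trees) over bootstrap samples; a real random forest grows deep trees with per-split feature subsampling, and the six feature names and synthetic target here are assumptions, not the paper's actual data.

```python
import random

def fit_stump(data):
    """Fit a one-split regression stump minimizing squared error."""
    best = None
    n_features = len(data[0][0])
    for f in range(n_features):
        for t in sorted({x[f] for x, _ in data}):
            left = [y for x, y in data if x[f] <= t]
            right = [y for x, y in data if x[f] > t]
            if not left or not right:
                continue
            ml, mr = sum(left) / len(left), sum(right) / len(right)
            sse = (sum((y - ml) ** 2 for y in left)
                   + sum((y - mr) ** 2 for y in right))
            if best is None or sse < best[0]:
                best = (sse, f, t, ml, mr)
    _, f, t, ml, mr = best
    return lambda x: ml if x[f] <= t else mr

def fit_forest(data, n_trees=20, seed=1):
    """Bagging: each stump is fit on a bootstrap sample of the rows."""
    rng = random.Random(seed)
    trees = [fit_stump([rng.choice(data) for _ in data])
             for _ in range(n_trees)]
    return lambda x: sum(tree(x) for tree in trees) / len(trees)

# Synthetic training rows: six features per checkpoint (names hypothetical,
# e.g. state size, checkpoint interval, parallelism, ...) -> duration.
rng = random.Random(0)
rows = []
for _ in range(100):
    x = [rng.random() for _ in range(6)]
    # Toy assumption: duration dominated by feature 0 ("state size").
    y = 3.0 * x[0] + 0.5 * x[1] + rng.gauss(0, 0.05)
    rows.append((x, y))

predict = fit_forest(rows)
# A checkpoint with a large feature 0 should be predicted slower than
# one with a small feature 0, other features held equal.
print(predict([0.9] + [0.5] * 5) > predict([0.1] + [0.5] * 5))  # → True
```

The ensemble-averaging step is what distinguishes the forest from a single regression tree: each stump overfits its bootstrap sample, but the mean of many such stumps is a much more stable predictor.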