ID | 原文 (Source) | 译文 (Translation)
18505 |
利用连接时效性特点设计哈希表冲突处理机制,根据表项最近命中时间判断是否进行覆写更新,避免规则累积导致查找时间增加; |
Exploiting the time-limited nature of connections, a collision handling mechanism for the hash table is designed: whether a colliding entry is overwritten is decided by the entry's last hit time, which prevents rule accumulation from increasing lookup time;
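A minimal Python sketch of the overwrite-on-collision idea described above; the bucket layout, the `FlowTable` class, the `STALE_AFTER` threshold, and the field names are illustrative assumptions, not the paper's actual data structures.

```python
import time

# Sketch: on a hash collision, the resident entry is overwritten only if it
# has not been hit recently (i.e. the connection is likely stale).
STALE_AFTER = 30.0  # assumed seconds without a hit before overwrite is allowed

class FlowTable:
    def __init__(self, size=1024):
        self.size = size
        self.slots = [None] * size          # each slot: (key, rule, last_hit)

    def lookup(self, key):
        idx = hash(key) % self.size
        slot = self.slots[idx]
        if slot is not None and slot[0] == key:
            self.slots[idx] = (key, slot[1], time.time())  # refresh last-hit time
            return slot[1]
        return None

    def insert(self, key, rule):
        idx = hash(key) % self.size
        slot = self.slots[idx]
        # Empty slot, same key, or a stale colliding entry: overwrite.
        if slot is None or slot[0] == key or time.time() - slot[2] > STALE_AFTER:
            self.slots[idx] = (key, rule, time.time())
            return True
        return False                        # recently hit entry kept; skip insert
```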
18506 |
其次对ABV算法各维度进行等分处理,为各等分区间建立数组索引,从而快速缩小向量查找范围,加快查找规则库速度; |
Secondly, to accelerate rule set searching, IABV divides each dimension into equal intervals and builds an array index over these intervals, which quickly narrows the vector search range and speeds up rule set lookup;
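A small sketch of the equal-interval array index, assuming a 16-bit field and 256 intervals; the names (`NUM_INTERVALS`, `add_rule_range`, `candidates`) and sizes are illustrative, not taken from the paper.

```python
# Each dimension's value range is split into equal intervals; an array maps an
# interval number directly to the candidate rules whose range overlaps it.
NUM_INTERVALS = 256
FIELD_MAX = 65535                      # e.g. a 16-bit port field

interval_width = (FIELD_MAX + 1) // NUM_INTERVALS
index = [[] for _ in range(NUM_INTERVALS)]   # interval -> candidate rule ids

def add_rule_range(rule_id, lo, hi):
    """Register a rule's inclusive [lo, hi] range with every interval it overlaps."""
    for i in range(lo // interval_width, hi // interval_width + 1):
        index[i].append(rule_id)

def candidates(value):
    """O(1) jump from a packet field value to a small candidate set."""
    return index[value // interval_width]
```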
18507 |
最后,将规则中前缀转化为范围降低辅助查找结构复杂度,以减少内存空间占用量并加快规则查找速度。 |
Finally, the prefixes in the rules are converted into ranges to reduce the complexity of the auxiliary search structure, so that both the memory footprint and the rule lookup time of the algorithm are reduced.
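For illustration, a prefix-to-range conversion can be done with the Python standard library; `prefix_to_range` is a hypothetical helper name, not from the paper.

```python
import ipaddress

def prefix_to_range(prefix: str):
    """Convert an IP prefix (e.g. '192.168.0.0/16') to an inclusive integer range,
    so prefix fields and range fields can be handled uniformly by the index above."""
    net = ipaddress.ip_network(prefix, strict=False)
    return int(net.network_address), int(net.broadcast_address)

# Example: prefix_to_range("192.168.0.0/16") -> (3232235520, 3232301055)
```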
18508 |
实验结果表明,将规则中前缀转化为范围后能够有效提升算法性能,相同条件下IABV算法相比ABV算法时间性能有显著提高。 |
The experimental results show that converting prefixes into ranges effectively improves the algorithm's performance, and that under the same conditions the time performance of the IABV algorithm is significantly better than that of the ABV algorithm.
18509 |
针对面向混合能源供应的 5G 异构云无线接入网(H-CRANs)网络架构下的动态资源分配和能源管理问题,该文提出一种基于深度强化学习的动态网络资源分配及能源管理算法。 |
Considering the dynamic resource allocation and energy management problem in the 5G Heterogeneous Cloud Radio Access Networks (H-CRANs) architecture with hybrid energy supply, a dynamic network resource allocation and energy management algorithm based on deep reinforcement learning is proposed.
18510 |
首先,由于可再生能源到达的波动性及用户数据业务到达的随机性,同时考虑到系统的稳定性、能源的可持续性以及用户的服务质量(QoS)需求,将H-CRANs网络下的资源分配以及能源管理问题建立为一个以最大化服务提供商平均净收益为目标的受限无穷时间马尔科夫决策过程(CMDP)。 |
Firstly, considering the volatility of renewable energy arrivals and the randomness of user data traffic arrivals, as well as system stability, energy sustainability, and users' Quality of Service (QoS) requirements, the resource allocation and energy management problem in the H-CRANs network is modeled as a Constrained infinite-horizon Markov Decision Process (CMDP) with the goal of maximizing the average net profit of the service provider.
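As a rough sketch (the abstract does not give the paper's notation), such a constrained average-reward MDP can be written with $r$ the per-slot net profit and $c_k$ generic constraint costs for QoS, stability, and energy sustainability:

```latex
\max_{\pi}\ \lim_{T\to\infty}\frac{1}{T}\,
  \mathbb{E}_{\pi}\!\Bigg[\sum_{t=1}^{T} r(s_t,a_t)\Bigg]
\quad \text{s.t.} \quad
\lim_{T\to\infty}\frac{1}{T}\,
  \mathbb{E}_{\pi}\!\Bigg[\sum_{t=1}^{T} c_k(s_t,a_t)\Bigg] \le C_k,
\qquad k = 1,\dots,K
```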
18511 |
然后,使用拉格朗日乘子法将所提CMDP问题转换为一个非受限的马尔科夫决策过程(MDP)问题。 |
Then, the Lagrange multiplier method is used to transform the proposed CMDP problem into an unconstrained Markov Decision Process (MDP) problem.
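A standard Lagrangian relaxation of the CMDP sketched above (same placeholder symbols, multipliers $\lambda_k$ assumed) folds the constraints into a single per-slot reward:

```latex
\mathcal{L}_{\boldsymbol{\lambda}}(s_t,a_t)
  \;=\; r(s_t,a_t) \;-\; \sum_{k=1}^{K} \lambda_k\, c_k(s_t,a_t),
\qquad \lambda_k \ge 0
```

For fixed multipliers this is an ordinary unconstrained MDP; the $\lambda_k$ can then be adjusted (e.g. by subgradient updates) until the original constraints are satisfied.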
18512 |
最后,因为行为空间与状态空间都是连续值集合,因此该文利用深度强化学习解决上述MDP问题。 |
Finally, because the action space and the state space are both continuous value sets, deep reinforcement learning is used to solve the above MDP problem.
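The abstract does not name the specific deep reinforcement learning algorithm; below is a minimal actor-critic sketch for continuous state and action spaces (DDPG-style) in PyTorch. The network sizes, `STATE_DIM`/`ACTION_DIM`, and the mapping of states and actions to network quantities are assumptions for illustration only.

```python
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 8, 3   # assumed: channel/energy state -> power/resource decisions

class Actor(nn.Module):
    """Deterministic policy: maps a continuous state to a continuous action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, ACTION_DIM), nn.Tanh())   # actions scaled to [-1, 1]

    def forward(self, s):
        return self.net(s)

class Critic(nn.Module):
    """Q(s, a): scores a state-action pair (e.g. the Lagrangian reward above)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
            nn.Linear(64, 1))

    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

actor, critic = Actor(), Critic()
state = torch.randn(1, STATE_DIM)        # dummy continuous state
action = actor(state)                    # continuous resource/energy action
q_value = critic(state, action)          # drives the policy-gradient update
```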
18513 |
仿真结果表明,该文所提算法可有效保证用户QoS及能量可持续性的同时,提升了服务提供商的平均净收益,降低了能耗。 |
The simulation results show that the proposed algorithm can effectively guarantee user QoS and energy sustainability while improving the average net profit of the service provider and reducing energy consumption.
18514 |
针对异构云无线接入网络(H-CRAN)网络下基于网络切片的在线无线资源动态优化问题,该文通过综合考虑业务接入控制、拥塞控制、资源分配和复用,建立一个以最大化网络平均和吞吐量为目标,受限于基站(BS)发射功率、系统稳定性、不同切片的服务质量(QoS)需求和资源分配等约束的随机优化模型, |
For the online dynamic radio resource optimization problem for network slices in the Heterogeneous Cloud Radio Access Network (H-CRAN), by comprehensively considering traffic admission control, congestion control, and resource allocation and reuse, the problem is formulated as a stochastic optimization model that maximizes the network average sum throughput subject to Base Station (BS) transmit power, system stability, the Quality of Service (QoS) requirements of different slices, and resource allocation constraints.
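A schematic form of such a formulation, with placeholder symbols rather than the paper's notation ($R_{\mathrm{tot}}(t)$ the slot-$t$ sum throughput, $p_n(t)$ per-transmission powers, $P^{\max}$ the BS power budget):

```latex
\max_{\{\text{admission, allocation}\}}\ \lim_{T\to\infty}\frac{1}{T}
  \sum_{t=0}^{T-1}\mathbb{E}\big[R_{\mathrm{tot}}(t)\big]
\quad \text{s.t.} \quad
\sum_{n} p_n(t) \le P^{\max},\ \ 
\text{queue stability},\ \ 
\text{per-slice QoS and allocation constraints}
```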